diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cadmas 11 46 A Snow Sport Helmet with Advanced Features.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cadmas 11 46 A Snow Sport Helmet with Advanced Features.md deleted file mode 100644 index ffe94c456771c3e23275af8498a0e2d85eef4487..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cadmas 11 46 A Snow Sport Helmet with Advanced Features.md +++ /dev/null @@ -1,121 +0,0 @@ - -

What is Cadmas 11 46?

-

If you are interested in online assessment and comic books, you might have heard of Cadmas 11 46. But what exactly is it? Is it a piece of software, a comic book, or something else? In this article, we will explore what Cadmus and 11 46 are, how they are related, and how they can be used for educational purposes.

-

What is Cadmus?

-

Cadmus is an online assessment platform that helps higher education providers achieve institutional goals through better assessment experiences. It is a secure, online environment that facilitates an end-to-end assessment workflow, simplifying the process of implementing best practice assessment at scale. By empowering academics and supporting students, Cadmus can be used to solve the biggest challenges faced by universities today, such as academic integrity, student retention, remote learning, and online exams.

-

cadmas 11 46


Download: https://byltly.com/2uKvPI



-

How does Cadmus work?

-

Cadmus has several features and benefits for both learners and educators. For learners, Cadmus provides a supportive and scaffolded assessment experience that helps them develop their academic skills and achieve better outcomes. For example, Cadmus offers:

- -

For educators, Cadmus simplifies the process of designing and delivering high-quality digital assessment, consistently and at scale. For example, Cadmus offers:

- -

What are some use cases of Cadmus?

-

Cadmus can be used for a range of formative and summative, open-book written assessments and alternatives to exams. Some examples of how Cadmus can be used are:

- -

What is 11 46?

-

11 46 is a comic book series by Castle Comics that was published between November 2020 and June 2021. It is a crime thriller that follows the lives of four strangers who are connected by a mysterious murder that took place at exactly 11:46 pm.

-

What is the plot of 11 46?

-

The plot of 11 46 revolves around four main characters who have different backgrounds and motivations. They are:

- -

The story unfolds through multiple perspectives and timelines, revealing how each character is related to the murder and how their actions affect each other. The story also explores various themes and messages, such as corruption, justice, revenge, loyalty, etc.

-

What are some themes and messages of 11 46?

-

One of the main themes of 11 46 is the idea of fate versus free will. The title of the series refers to the exact time when the murder happened, suggesting that it was predetermined by some higher power or force. However, the series also shows how each character has some degree of choice and agency in their actions. The series asks questions such as:

- -

How are Cadmus and 11 46 related?

-

At first glance, Cadmus and 11 46 seem to have nothing in common. One is an online assessment platform for higher education, while the other is a comic book series for entertainment. However, upon closer examination, we can find some possible connections and similarities between them. For example:

-


-

How can Cadmus be used to assess 11 46?

-

One way to use Cadmus to assess 11 46 is to design and deliver a Cadmus assignment based on the comic book series. For example, an educator can create an assignment that requires students to:

- -

The assignment can be aligned with the learning outcomes and assessment criteria of the course or subject. The assignment can also be tailored to suit different levels of difficulty and complexity, depending on the students' needs and abilities.

-

What are some benefits and challenges of using Cadmus for 11 46?

-

Using Cadmus for 11 46 can have some benefits and challenges for both learners and educators. Some of the benefits are:

- -

Some of the challenges are:

- -

Conclusion

-

In conclusion, Cadmas 11 46 is a combination of an online assessment platform and a comic book series that can be used for educational purposes. Cadmus is a platform that helps higher education providers achieve institutional goals through better assessment experiences. 11 46 is a series that follows the lives of four strangers who are connected by a mysterious murder. By using Cadmus to assess 11 46, learners and educators can enjoy some benefits, such as developing critical thinking skills, engaging with a creative text, and ensuring academic integrity. However, they may also face some challenges, such as accessing or reading the text, finding or creating suitable assessment tasks, and dealing with plagiarism or cheating issues. Therefore, it is important to consider these factors before using Cadmas 11 46 for assessment.

-

FAQs

-

Here are some frequently asked questions and answers about Cadmus and 11 46:

-
  1. Where can I find Cadmus?
     Cadmus is an online platform that can be accessed through your LMS. You can find more information about Cadmus on their website: https://www.cadmus.io/.
  2. Where can I find 11 46?
     11 46 is a comic book series that was published by Castle Comics. You can find more information about 11 46 on their website: https://www.castlecomics.com/1146.
  3. How much does Cadmus cost?
     Cadmus is free for learners and educators who use it for assessment purposes. However, Cadmus may charge a fee for institutions that want to use it for other purposes.
  4. How much does 11 46 cost?
     11 46 costs $3.99 per issue or $19.99 for the complete series. You can buy it online or in physical stores.
  5. How long does it take to complete a Cadmus assignment?
     The length of a Cadmus assignment depends on the type and complexity of the task. However, most Cadmus assignments take between one and three hours to complete.
-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FSX - Maddog 2008 Professional Cracked by Komu Everything You Need to Know About the Legendary MD-80 Add-on.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FSX - Maddog 2008 Professional Cracked by Komu Everything You Need to Know About the Legendary MD-80 Add-on.md deleted file mode 100644 index 06a69fa948f0baf85c943ab2095a97f010a806d2..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FSX - Maddog 2008 Professional Cracked by Komu Everything You Need to Know About the Legendary MD-80 Add-on.md +++ /dev/null @@ -1,105 +0,0 @@ - -

FSX - Maddog 2008 Professional cracked by Komu: A Review

-

If you are a fan of flight simulation games, you might have heard of FSX - Maddog 2008 Professional, a popular add-on for Microsoft Flight Simulator X that lets you fly the Leonardo Maddog, a realistic and complex simulation of the McDonnell Douglas MD-80 aircraft. But did you know that there is a way to get this add-on for free, thanks to a crack made by a user named Komu? In this article, we will review FSX - Maddog 2008 Professional cracked by Komu, a download that claims to unlock all the features and benefits of the original add-on without paying a dime. We will also show you how to install and use it, as well as the pros and cons of using this crack. Finally, we will suggest some alternatives to this crack in case you are looking for other options.

-

What is FSX - Maddog 2008 Professional?

-

FSX - Maddog 2008 Professional is an add-on for Microsoft Flight Simulator X that was released in 2008 by Leonardo Software House, a company that specializes in developing flight simulation software. This add-on is a highly detailed and accurate simulation of the McDonnell Douglas MD-80 aircraft, also known as the Maddog, a twin-engine, medium-range jet airliner that was widely used by many airlines around the world from the 1980s to the 2000s.

-

FSX - Maddog 2008 Professional cracked by Komu


Download File: https://byltly.com/2uKvyM



-

This add-on offers many features and benefits for flight simulation enthusiasts, such as:

- -

FSX - Maddog 2008 Professional is widely regarded as one of the best add-ons for FSX in terms of realism, complexity, and immersion. However, it also comes with a price tag of $59.99 USD (as of May 2023), which might be too expensive for some users who want to enjoy this add-on without breaking the bank.

-

-

What is Komu's crack?

-

Komu's crack is a download that claims to bypass the activation process of FSX - Maddog 2008 Professional and allow users to use it for free. It was created by a user named Komu who uploaded it on various torrent sites in 2010. According to Komu's description, his crack does not modify any files or registry entries of the original add-on, but simply replaces the original .dll file with a cracked one that disables the activation check. He also claims that his crack does not affect any features or functions of the add-on, and that it works with any version of FSX.

-

Komu's crack has been downloaded by thousands of users who wanted to try FSX - Maddog 2008 Professional without paying for it. Some users have reported that the crack works as advertised and that they have not encountered any problems or issues with it. However, other users have reported that the crack does not work at all or that it causes various errors or crashes during their flights. Moreover, some users have expressed ethical concerns about using this crack, as it violates the intellectual property rights of Leonardo Software House and deprives them of their deserved revenue.

-

How to install and use FSX - Maddog 2008 Professional cracked by Komu?

-

If you want to install and use FSX - Maddog 2008 Professional cracked by Komu, you will need to follow these steps:

-
    -
  1. Download FSX - Maddog 2008 Professional cracked by Komu from one of the torrent sites where it is available. You will need a torrent client such as uTorrent or BitTorrent to do this.
  2. Extract the downloaded file using a program such as WinRAR or 7-Zip. You will get a folder named "Maddog Pro" that contains two files: "maddog pro fsx.exe" and "maddog pro fsx crack by komu.dll".
  3. Run "maddog pro fsx.exe" and follow the installation instructions. You will need to specify the location of your FSX folder during the installation process.
  4. Copy "maddog pro fsx crack by komu.dll" and paste it into your FSX folder. You will need to overwrite the original .dll file with the same name.
  5. Launch FSX and select "Fly The Maddog" from the menu. You should be able to use FSX - Maddog 2008 Professional without any activation prompts or restrictions.
-

Note: These steps are based on Komu's instructions and user feedback. We do not endorse or recommend using this crack or any other illegal downloads. Use them at your own risk.

-

Pros and cons of FSX - Maddog 2008 Professional cracked by Komu

-

FSX - Maddog 2008 Professional cracked by Komu has some pros and cons that you should consider before using it:

-

Pros:

- -

Cons:

- -

Alternatives to FSX - Maddog 2008 Professional cracked by Komu

-

If you are looking for alternatives to FSX - Maddog 2008 Professional cracked by Komu, the most straightforward option is to purchase the official add-on directly from Leonardo Software House, which supports the developers and avoids the legal and technical risks described above.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Clarion Enterprise Edition 6.0 64 Bit UPD.md b/spaces/1gistliPinn/ChatGPT4/Examples/Clarion Enterprise Edition 6.0 64 Bit UPD.md deleted file mode 100644 index a601095a40b21f76fb0042153c84061c7ea4606a..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Clarion Enterprise Edition 6.0 64 Bit UPD.md +++ /dev/null @@ -1,9 +0,0 @@ -
-

If you are not certain which budget hotel is best for you, take into account your initial budget as well as the purpose of your trip. The more affordable hotels may not be suitable for your needs. You should also consider what other options are available in the area. Hotels in less populated areas tend to be less expensive, but they are also farther from popular attractions.

-

Clarion Enterprise Edition 6.0 64 bit


Download Zip: https://imgfil.com/2uxWUx



-

You'll be very satisfied with the hotel's service. The Clarion Express was very nice - the lobby had free wireless internet, and the rooms had a fridge and a coffeemaker. The walk to the hotel from downtown was fast and easy, even though I had to use the train to get to Clarion. The hotel was very easy to get into, and the staff were friendly. A very nice choice.

-

The Clarion Express hotel is an excellent choice for a budget stay in a great location. Enjoy our complimentary cooked-to-order breakfast each morning before you head out exploring. We offer free wireless internet, free local calls, and 32" LCD HD TVs with free cable in every room. Just hop a train to Clarion University, less than three miles away. Our city welcomes business travelers, so Clarion Express is an ideal hotel for travelers seeking a modern downtown hotel with the amenities and location of a big-city hotel at a reasonable price.

-

Clarion City Inn & Suites in downtown Harrisburg offers 100 rooms with complimentary internet access. Non-smoking rooms include microwaves and refrigerators. Rooms also have hair dryers and coffee/tea makers. This Harrisburg hotel has both seasonal and indoor pools. Parking is free. Complimentary breakfast is served daily.

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/8 Ball Pool Long Line Tool APK The Ultimate Guide for Android Users.md b/spaces/1phancelerku/anime-remove-background/8 Ball Pool Long Line Tool APK The Ultimate Guide for Android Users.md deleted file mode 100644 index 06de35fb9c25bbde3a5cf0daddfb3ead9a40a625..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/8 Ball Pool Long Line Tool APK The Ultimate Guide for Android Users.md +++ /dev/null @@ -1,111 +0,0 @@ -
-

8 Ball Pool Long Line Tool APK: A Guide for Beginners

-

If you are a fan of billiards games, you might have heard of 8 Ball Pool, one of the most popular and addictive online pool games in the world. But did you know that there is a way to enhance your gaming experience and improve your skills with a simple tool? In this article, we will introduce you to 8 Ball Pool Long Line Tool APK, a modded version of the game that allows you to have longer aiming lines and more accurate shots. We will also show you how to download and install it on your Android device, and share some tips and tricks to win in 8 Ball Pool.

-

What is 8 Ball Pool and How to Play It?

-

8 Ball Pool is a game developed by Miniclip that simulates the real-life pool game of the same name. You can play it online with millions of players from around the world, or offline with your friends. You can also participate in tournaments, win trophies, and collect coins and cash to buy better cues and enter higher-stakes tables.

-

8 ball pool long line tool apk


Download Zip: https://jinyurl.com/2uNMpE



-

The Basics of 8 Ball Pool

-

8 Ball Pool is played with a cue ball and fifteen object balls, numbered 1 through 15. Balls 1–7 are solid colors and commonly referred to as “low balls”, and balls 9–15 are striped and commonly referred to as “high balls.” One player must pocket balls of solid colors, while the other player must pocket the striped balls. The player who pockets their entire group and then legally pockets the 8-ball wins the game.

-

The Rules of 8 Ball Pool

-

For the break shot to be legal, the breaker (with the base of the cue ball placed anywhere behind the head string) must either pocket a number ball or drive at least four (4) number balls to one or more rails. No ball is called, and the cue ball is not required to hit any particular object ball first. If the breaker fails to make the legal break requirement, the balls will be re-racked and the opponent shall have the option of breaking or requesting the offending player to break again.

-

If any numbered ball is pocketed on a legal break, the breaking player is to continue their inning. If the breaker makes a legal break but commits a foul, the game is to continue with the opponent having ball in hand anywhere behind the head-string, but must shoot an object ball beyond the head-string (outside of the “kitchen”) or it is a foul.

-

If the breaker pockets the 8-ball on a legal break shot, they win the game unless they also scratch (pocket or drive off the table) the cue ball, in which case they lose. If any other object ball leaves the table on a legal break shot, it is spotted on its original position before shooting player plays their next shot.

-

During normal play, each player remains at the table until they fail to legally pocket a ball of their group or commit a foul. If a player pockets any ball on a legal shot except for their own group or an opponent’s group (if playing an open table), they continue their inning. If they pocket their own group and an opponent’s group on one shot (if playing an open table), they continue their inning but must declare which group they are playing before their next shot.

-

If a player pockets any ball on a foul shot, it remains pocketed except for the cue ball, which is returned behind the head string or spotted if it leaves the table. If a player pockets the 8-ball on a legal shot, they win the game unless they also scratch, in which case they lose. If a player pockets the 8-ball on an illegal shot, they lose the game.

-

-

A foul occurs when a player fails to hit their own group of balls first, fails to hit any ball at all, scratches the cue ball, drives any ball off the table, touches any ball with their hand or cue, or violates any other rule of the game. When a foul is committed, the opponent gets ball in hand anywhere on the table. However, if the cue ball is behind the head string and an object ball is outside of the head string, the player must shoot an object ball outside of the head string or it is a foul.

-

What is 8 Ball Pool Long Line Tool APK and How to Download It?

-

8 Ball Pool Long Line Tool APK is a modified version of the original 8 Ball Pool game that gives you some extra advantages over your opponents. It is not an official app from Miniclip, but a third-party app that you can download and install on your Android device for free.

-

The Features of 8 Ball Pool Long Line Tool APK

-

Some of the features that 8 Ball Pool Long Line Tool APK offers are:

- -

The Benefits of 8 Ball Pool Long Line Tool APK

-

Some of the benefits that 8 Ball Pool Long Line Tool APK provides are:

- -

The Installation Process of 8 Ball Pool Long Line Tool APK

-

To install 8 Ball Pool Long Line Tool APK on your Android device, you need to follow these steps:

-
    -
  1. Download the APK file from a trusted source. You can search for it online or use this link: .
  2. Enable unknown sources on your device. Go to Settings > Security > Unknown Sources and toggle it on.
  3. Locate the downloaded APK file on your device and tap on it to start the installation.
  4. Follow the instructions on the screen and wait for the installation to finish.
  5. Launch the app and enjoy playing 8 Ball Pool with longer aiming lines.
-

What are Some Tips and Tricks to Win in 8 Ball Pool?

-

Besides using 8 Ball Pool Long Line Tool APK, there are some other tips and tricks that you can apply to win in 8 Ball Pool. Here are some of them:

-

Choose Your Tables Wisely

-

When you play online, you can choose from different tables with different entry fees and prizes. The higher the entry fee, the higher the prize, but also the higher the risk. If you are a beginner, you should start with lower-level tables and work your way up gradually. Don't play on tables that are too expensive for your budget or skill level, as you might lose more than you gain.

-

Buy a Better Cue

-

A cue is one of the most important factors that affect your performance in 8 Ball Pool. A better cue can give you more power, spin, aim, and time. You can buy cues with coins or cash in the game shop, or win them in tournaments or surprise boxes. You can also upgrade your cues with coins to improve their attributes. A good cue can make a big difference in your game, so don't hesitate to invest in one.

-

Use a Little English

-

English is a term that refers to the amount of spin you put on the cue ball when you hit it. By using English, you can control the direction and speed of the cue ball after it hits an object ball or a rail. You can use English to avoid scratches, make difficult shots, or set up your next shot. To use English, you need to hit the cue ball on the left or right side, rather than the center. You can also adjust the power and angle of your shot to achieve the desired effect.

-

Shoot Faster

-

One of the challenges of playing online is that you have a limited time to make your shot. If you take too long, you might lose your turn or even the game. To avoid this, you should try to shoot faster and more confidently. You can do this by planning your shots ahead, using 8 Ball Pool Long Line Tool APK to aim better, and practicing your skills offline. Shooting faster can also put pressure on your opponent and make them nervous or impatient.

-

Extend Your Aim

-

Another way to improve your accuracy and precision in 8 Ball Pool is to extend your aim beyond the object ball. This means that you should visualize where you want the cue ball to go after it hits the object ball, and align your cue accordingly. This can help you to avoid scratches, position your cue ball better, and make more complex shots. You can also use 8 Ball Pool Long Line Tool APK to see the extended aiming lines and adjust your shots accordingly.

-

Conclusion

-

8 Ball Pool is a fun and exciting game that can keep you entertained for hours. However, if you want to take your game to the next level, you might want to try 8 Ball Pool Long Line Tool APK, a modded version of the game that gives you longer aiming lines and more accurate shots. You can download and install it on your Android device for free and enjoy playing 8 Ball Pool with an edge over your opponents. You can also use some tips and tricks to win in 8 Ball Pool, such as choosing your tables wisely, buying a better cue, using a little English, shooting faster, and extending your aim. With these tools and techniques, you can become a master of 8 Ball Pool in no time.

-

FAQs

-

Here are some frequently asked questions about 8 Ball Pool Long Line Tool APK:

-
    -
  1. Is 8 Ball Pool Long Line Tool APK safe to use?
     Yes, 8 Ball Pool Long Line Tool APK is safe to use as long as you download it from a trusted source and follow the installation instructions carefully. It has an anti-ban system that prevents detection by Miniclip, so you don't have to worry about getting banned or losing your account.
  2. Is 8 Ball Pool Long Line Tool APK compatible with all devices?
     No, 8 Ball Pool Long Line Tool APK is only compatible with Android devices running Android 4.1 or higher. It is not compatible with iOS devices or other platforms.
  3. Can I play online with 8 Ball Pool Long Line Tool APK?
     Yes, you can play online with 8 Ball Pool Long Line Tool APK as long as you have a stable internet connection and a valid Miniclip account. You can play with other players who are using the same app or the original game.
  4. Can I update 8 Ball Pool Long Line Tool APK?
     No, you cannot update 8 Ball Pool Long Line Tool APK as it is not an official app from Miniclip. If you update it, you might lose the modded features or encounter errors. You should always check for new versions of the app from the source where you downloaded it.
  5. Can I use 8 Ball Pool Long Line Tool APK with other mods or hacks?
     No, you should not use 8 Ball Pool Long Line Tool APK with other mods or hacks as they might interfere with each other or cause problems. You should only use one mod or hack at a time for optimal performance and safety.

    -

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download FIFA 20 Mod APK with OBB Data - Enjoy Realistic Football Experience on Your Phone.md b/spaces/1phancelerku/anime-remove-background/Download FIFA 20 Mod APK with OBB Data - Enjoy Realistic Football Experience on Your Phone.md deleted file mode 100644 index db3dc834075d5877e78c0476cee6ce6fb5d30c4d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download FIFA 20 Mod APK with OBB Data - Enjoy Realistic Football Experience on Your Phone.md +++ /dev/null @@ -1,104 +0,0 @@ - -

Download APK FIFA 20: How to Install and Play the Latest Version of the Popular Soccer Game on Your Android Device

-

If you are a fan of soccer games, you have probably heard of FIFA 20, the latest installment of the popular FIFA series by Electronic Arts. FIFA 20 is a realistic and immersive soccer simulation game that lets you experience the thrill of playing with your favorite teams and players in various modes and competitions. Whether you want to play solo or with friends, offline or online, FIFA 20 has something for everyone.

-

download apk fifa 20


Download Zip: https://jinyurl.com/2uNR2u



-

But what if you don't have a console or a PC to play FIFA 20? Don't worry, you can still enjoy this amazing game on your Android device. All you need is to download and install the FIFA 20 APK and OBB data files, which are modified versions of the original game that can run on Android devices without any issues. In this article, we will show you how to do that, as well as give you some tips and tricks to play FIFA 20 like a pro.

-

What are the features and benefits of FIFA 20?

-

FIFA 20 is not just another soccer game. It is a game that offers you a lot of features and benefits that make it stand out from other games in the genre. Here are some of them:

- -

How to download and install FIFA 20 APK and OBB data on your Android device

-

Now that you know the features and benefits of FIFA 20, you might be wondering how to download and install it on your Android device. Well, it's not as hard as you might think. Just follow these simple steps:

-

Step 1: Enable unknown sources on your device

-

Before you can install any APK file on your device, you need to enable the option to allow unknown sources. This will let you install apps that are not from the Google Play Store. To do this, go to your device's settings, then security, then unknown sources. Toggle the switch to enable it.

-

-

Step 2: Download the FIFA 20 APK and OBB files from a trusted source

-

The next step is to download the FIFA 20 APK and OBB files from a trusted source. There are many websites that offer these files, but be careful not to download from shady or malicious ones. You can use this link to download the files safely and securely. The APK file is about 30 MB, while the OBB file is about 1.5 GB.

-

Step 3: Install the APK file and extract the OBB file to the right folder

-

After downloading the files, you need to install the APK file and extract the OBB file to the right folder. To do this, locate the APK file in your device's file manager and tap on it to install it. Then, use a file extractor app like ZArchiver to extract the OBB file. You will get a folder named com.ea.gp.fifaworld. Move this folder to Android/OBB in your device's internal storage.

-

Step 4: Launch the game and enjoy

-

The final step is to launch the game and enjoy. To do this, go to your app drawer and tap on the FIFA 20 icon. The game will start and ask you to verify your data. Just tap on OK and wait for a few seconds. The game will then load and take you to the main menu. You can now choose your mode and start playing.

-

What are the tips and tricks to play FIFA 20 like a pro

-

FIFA 20 is a fun and challenging game that requires skill and strategy to master. If you want to play like a pro, you need to know some tips and tricks that will help you improve your performance and win more matches. Here are some of them:

-

Customize your controls and settings

-

One of the first things you should do is customize your controls and settings according to your preference and comfort. You can do this by going to settings, then controls, then customize controls. You can choose between classic or casual controls, adjust the sensitivity and size of the buttons, enable or disable auto-switching, auto-sprint, auto-shoot, etc.

-

Choose your game mode and difficulty level

-

The next thing you should do is choose your game mode and difficulty level according to your skill and goal. You can do this by going to play, then select mode. You can choose between quick match, tournament, league, career mode, ultimate team mode, volta mode, etc. You can also choose between beginner, amateur, semi-pro, professional, world class, legendary, or ultimate difficulty level.

-

Master the skills and tactics

-

The most important thing you should do is master the skills and tactics that will help you win more matches. You can do this by practicing in training mode or playing against AI opponents. You should learn how to dribble, pass, shoot, tackle, cross, head, defend, etc. You should also learn how to use different tactics, such as formation, style, mentality, instructions, etc.

-

Build your ultimate team and manage your players

-

If you are playing ultimate team mode, you should build your ultimate team and manage your players effectively. You can do this by collecting and trading players from different leagues and nations. You should aim for high-rated players with good chemistry and attributes. You should also manage your players' fitness, morale, contracts, injuries, etc.

-

Participate in online tournaments and events

-

If you want to challenge yourself and compete with other players, you should participate in online tournaments and events. You can do this by going to play online, then select mode. You can choose between online seasons, online friendlies, online co-op seasons, online draft mode, online squad battles, online champions league mode, online world cup mode, online pro clubs mode, online division rivals mode, online weekend league mode, online fut champions mode, online fut friendlies mode, online fut events mode, online fut seasons mode. You can win rewards and trophies by playing and winning these modes.

-

Conclusion

-

FIFA 20 is a fantastic soccer game that you can download and play on your Android device. It offers you a lot of features and benefits that make it one of the best games in the genre. It also gives you some tips and tricks that will help you play like a pro. So what are you waiting for? Download APK FIFA 20 now and enjoy the ultimate soccer experience.

-

FAQs

-

Here are some frequently asked questions about FIFA 20:

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download and Play FINAL FANTASY XIII on Android - Cloud Game with TV Integration Support.md b/spaces/1phancelerku/anime-remove-background/Download and Play FINAL FANTASY XIII on Android - Cloud Game with TV Integration Support.md deleted file mode 100644 index 7b4f447cf2ffc9df32131b7b92d146f2806e8bd5..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download and Play FINAL FANTASY XIII on Android - Cloud Game with TV Integration Support.md +++ /dev/null @@ -1,103 +0,0 @@ -
-

Final Fantasy XIII APK Full Download: How to Play the Epic JRPG on Your Android Device

-

Are you a fan of Final Fantasy, one of the most popular and influential JRPG series of all time? If so, you might be interested in playing Final Fantasy XIII, the thirteenth installment of the main series, on your Android device. In this article, we will show you how to download Final Fantasy XIII APK full version and enjoy the epic adventure on your smartphone or tablet. We will also share some tips and tricks to enhance your gaming experience. Let's get started!

-

Introduction

-

What is Final Fantasy XIII?

-

Final Fantasy XIII is a role-playing game developed and published by Square Enix in 2009. It is set in a futuristic world where two opposing forces, Cocoon and Pulse, are locked in a conflict. The game follows the story of six characters who are branded as traitors by Cocoon's government and must fight against their fate. The game features a fast-paced combat system, stunning graphics, and a rich soundtrack. It received critical acclaim and sold over seven million copies worldwide.

-

final fantasy xiii apk full download


Download File: https://jinyurl.com/2uNPGh



-

Why play Final Fantasy XIII on your Android device?

-

Playing Final Fantasy XIII on your Android device has many benefits. First of all, you can enjoy the game anytime and anywhere, without being tied to a console or a PC. You can also save space on your device, as you don't need to download a large file or install anything. Moreover, you can take advantage of the touch screen, gyroscope, and other features of your device to enhance your gameplay. Finally, you can connect your device to a TV or a monitor and play on a bigger screen.

-

How to download Final Fantasy XIII APK

-

Option 1: Use the official cloud game service from Square Enix

-

The easiest and safest way to play Final Fantasy XIII on your Android device is to use the official cloud game service from Square Enix. This service allows you to stream high-definition games over a Wi-Fi connection, without downloading or installing anything. Here are the steps to follow:

-

Step 1: Download the FINAL FANTASY XIII app from APKCombo

-

The first step is to download the FINAL FANTASY XIII app from APKCombo, a website that provides free APK files for Android apps and games. You can use this link to access the app page and click on the "Download APK" button. The app size is about 12 MB and it requires Android 5.0 or higher.

-

Step 2: Launch the app and sign up for the cloud game service

-

The next step is to launch the app and sign up for the cloud game service. You will need to create an account with your email address and password, or log in with your existing Square Enix account. You will also need to agree to the terms of service and privacy policy.

-

Step 3: Enjoy the free trial and purchase the license if you like it

-

The final step is to enjoy the free trial and purchase the license if you like it. You can play the first 30 minutes of the game for free, and then decide whether to buy the full game for $15.99. You can also choose to pay $5.99 per month and access other cloud games from Square Enix, such as Final Fantasy VII and Final Fantasy VIII.
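As a rough cost comparison based on the figures above: three months of the $5.99 subscription comes to about $17.97, already more than the $15.99 one-time purchase, so the monthly plan mainly pays off if you also intend to play the other cloud titles it includes.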

-

Option 2: Use an unofficial source from the Internet Archive

-

If you don't want to use the official cloud game service from Square Enix, you can try another option: use an unofficial source from the Internet Archive. The Internet Archive is a non-profit organization that preserves digital content, such as books, music, videos, and games. You can find a copy of Final Fantasy XIII for PC on their website and play it on your Android device with an emulator or a streaming app. However, this option is not recommended, as it may be illegal, unsafe, or unstable. Here are the steps to follow:

-

Step 1: Download the final fantasy xiii file from the Internet Archive

-

The first step is to download the final fantasy xiii file from the Internet Archive. You can use this link to access the file page and click on the "DOWNLOAD OPTIONS" button. You will see several formats available, such as ISO, ZIP, or TORRENT. The file size is about 13 GB and it requires a PC with Windows XP or higher.

-

Step 2: Extract the file and install the game on your PC

-

The next step is to extract the file and install the game on your PC. You will need software such as WinRAR or 7-Zip to unzip the file and get the game folder. Then, you will need to run the setup.exe file and follow the instructions to install the game on your PC. You may also need to install some additional components, such as DirectX or Visual C++.

-

-

Step 3: Use an emulator or a streaming app to play the game on your Android device

-

The final step is to use an emulator or a streaming app to play the game on your Android device. An emulator is a software that mimics the behavior of another device, such as a PC or a console. A streaming app is a software that allows you to stream games from your PC to your Android device over a Wi-Fi connection. Some examples of emulators are ExaGear RPG or Wine, and some examples of streaming apps are Steam Link or Moonlight. You will need to configure these apps according to your preferences and requirements.

-

Tips and tricks for playing Final Fantasy XIII on your Android device

-

Adjust the settings to optimize the performance and battery life

-

One of the challenges of playing Final Fantasy XIII on your Android device is to optimize the performance and battery life of your device. Depending on your device model and specifications, you may experience lagging, crashing, overheating, or draining issues. To avoid these problems, you can adjust some settings in your device or in your app. For example, you can lower the resolution, brightness, volume, or frame rate of your device or app. You can also close other apps running in the background, turn off notifications, or activate airplane mode.

-

Use a controller or a keyboard for better control and comfort

-

Another challenge of playing Final Fantasy XIII on your Android device is to control the game with touch screen gestures. While this may be convenient for some players, others may find it difficult, uncomfortable, or inaccurate. To improve your control and comfort, you can use a controller or a keyboard instead of touch screen gestures. You can connect your controller or keyboard to your device via Bluetooth, USB, or Wi-Fi. You can also customize your controller or keyboard layout according to your preferences.

-

Save your progress frequently and back up your data online

-

The last challenge of playing Final Fantasy XIII on your Android device is to save your progress frequently and back up your data online. Unlike playing on a console or a PC, playing on an Android device may expose you to risks of losing your data due to various reasons, such as deleting the app by mistake, running out of storage space, resetting your device, or losing your device. To prevent these scenarios from happening, you should save your progress frequently in different slots and back up your data online using cloud services like Google Drive or Dropbox.

-

Conclusion

-

Summary of the main points

-

In conclusion, playing Final Fantasy XIII on your Android device is possible and enjoyable if you follow some simple steps and tips. You can download Final Fantasy XIII APK full version from either the official cloud game service from Square Enix or from an unofficial source from the Internet Archive. You can also adjust the settings, use a controller or a keyboard, and save your progress frequently and back up your data online to optimize your gaming experience. Final Fantasy XIII is a great game that deserves to be played on any device you want.

-

Call to action and invitation to comment

-

If you are ready to play Final Fantasy XIII on your Android device, don't hesitate to download the APK file and follow the instructions in this article. You will be amazed by the quality and the fun of this game. And if you have any questions, comments, or feedback, feel free to leave them below. We would love to hear from you and help you with any issues you may encounter. Happy gaming!

-

FAQs

-

Here are some frequently asked questions about playing Final Fantasy XIII on your Android device:

-

-
-
\ No newline at end of file diff --git a/spaces/221091lstwcm/textgenerator/app.py b/spaces/221091lstwcm/textgenerator/app.py deleted file mode 100644 index 67e6bf13da457d066b6ea7b6a5728599f00b5ebb..0000000000000000000000000000000000000000 --- a/spaces/221091lstwcm/textgenerator/app.py +++ /dev/null @@ -1,11 +0,0 @@ -#libraries -import gradio as gr -from gradio.mix import Parallel - -#variables, functions and parameters -model1=gr.Interface.load("huggingface/gpt2") -model2=gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") -model3=gr.Interface.load("huggingface/distilgpt2") - -#funcations, parameters and variables -gr.Parallel(model1, model2, model3).launch() \ No newline at end of file diff --git a/spaces/232labs/VToonify/vtoonify/model/stylegan/op/__init__.py b/spaces/232labs/VToonify/vtoonify/model/stylegan/op/__init__.py deleted file mode 100644 index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000 --- a/spaces/232labs/VToonify/vtoonify/model/stylegan/op/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .fused_act import FusedLeakyReLU, fused_leaky_relu -from .upfirdn2d import upfirdn2d diff --git a/spaces/4Taps/SadTalker/src/audio2exp_models/audio2exp.py b/spaces/4Taps/SadTalker/src/audio2exp_models/audio2exp.py deleted file mode 100644 index 5f6e6b77b0ceb2089539caa440f7106c7b1e8aa2..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/audio2exp_models/audio2exp.py +++ /dev/null @@ -1,40 +0,0 @@ -from tqdm import tqdm -import torch -from torch import nn - - -class Audio2Exp(nn.Module): - def __init__(self, netG, cfg, device, prepare_training_loss=False): - super(Audio2Exp, self).__init__() - self.cfg = cfg - self.device = device - self.netG = netG.to(device) - - def test(self, batch): - - mel_input = batch['indiv_mels'] # bs T 1 80 16 - bs = mel_input.shape[0] - T = mel_input.shape[1] - - exp_coeff_pred = [] - - for i in tqdm(range(0, T, 10),'audio2exp:'): # every 10 frames - - current_mel_input = mel_input[:,i:i+10] - - ref = batch['ref'][:, :, :64].repeat((1,current_mel_input.shape[1],1)) #bs T 64 - ratio = batch['ratio_gt'][:, i:i+10] #bs T - - audiox = current_mel_input.view(-1, 1, 80, 16) # bs*T 1 80 16 - - curr_exp_coeff_pred = self.netG(audiox, ref, ratio) # bs T 64 - - exp_coeff_pred += [curr_exp_coeff_pred] - - # BS x T x 64 - results_dict = { - 'exp_coeff_pred': torch.cat(exp_coeff_pred, axis=1) - } - return results_dict - - diff --git a/spaces/801artistry/RVC801/demucs/utils.py b/spaces/801artistry/RVC801/demucs/utils.py deleted file mode 100644 index 4364184059b1afe3c8379c77793a8e76dccf9699..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/demucs/utils.py +++ /dev/null @@ -1,323 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import errno -import functools -import hashlib -import inspect -import io -import os -import random -import socket -import tempfile -import warnings -import zlib -from contextlib import contextmanager - -from diffq import UniformQuantizer, DiffQuantizer -import torch as th -import tqdm -from torch import distributed -from torch.nn import functional as F - - -def center_trim(tensor, reference): - """ - Center trim `tensor` with respect to `reference`, along the last dimension. - `reference` can also be a number, representing the length to trim to. 
- If the size difference != 0 mod 2, the extra sample is removed on the right side. - """ - if hasattr(reference, "size"): - reference = reference.size(-1) - delta = tensor.size(-1) - reference - if delta < 0: - raise ValueError("tensor must be larger than reference. " f"Delta is {delta}.") - if delta: - tensor = tensor[..., delta // 2:-(delta - delta // 2)] - return tensor - - -def average_metric(metric, count=1.): - """ - Average `metric` which should be a float across all hosts. `count` should be - the weight for this particular host (i.e. number of examples). - """ - metric = th.tensor([count, count * metric], dtype=th.float32, device='cuda') - distributed.all_reduce(metric, op=distributed.ReduceOp.SUM) - return metric[1].item() / metric[0].item() - - -def free_port(host='', low=20000, high=40000): - """ - Return a port number that is most likely free. - This could suffer from a race condition although - it should be quite rare. - """ - sock = socket.socket() - while True: - port = random.randint(low, high) - try: - sock.bind((host, port)) - except OSError as error: - if error.errno == errno.EADDRINUSE: - continue - raise - return port - - -def sizeof_fmt(num, suffix='B'): - """ - Given `num` bytes, return human readable size. - Taken from https://stackoverflow.com/a/1094933 - """ - for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']: - if abs(num) < 1024.0: - return "%3.1f%s%s" % (num, unit, suffix) - num /= 1024.0 - return "%.1f%s%s" % (num, 'Yi', suffix) - - -def human_seconds(seconds, display='.2f'): - """ - Given `seconds` seconds, return human readable duration. - """ - value = seconds * 1e6 - ratios = [1e3, 1e3, 60, 60, 24] - names = ['us', 'ms', 's', 'min', 'hrs', 'days'] - last = names.pop(0) - for name, ratio in zip(names, ratios): - if value / ratio < 0.3: - break - value /= ratio - last = name - return f"{format(value, display)} {last}" - - -class TensorChunk: - def __init__(self, tensor, offset=0, length=None): - total_length = tensor.shape[-1] - assert offset >= 0 - assert offset < total_length - - if length is None: - length = total_length - offset - else: - length = min(total_length - offset, length) - - self.tensor = tensor - self.offset = offset - self.length = length - self.device = tensor.device - - @property - def shape(self): - shape = list(self.tensor.shape) - shape[-1] = self.length - return shape - - def padded(self, target_length): - delta = target_length - self.length - total_length = self.tensor.shape[-1] - assert delta >= 0 - - start = self.offset - delta // 2 - end = start + target_length - - correct_start = max(0, start) - correct_end = min(total_length, end) - - pad_left = correct_start - start - pad_right = end - correct_end - - out = F.pad(self.tensor[..., correct_start:correct_end], (pad_left, pad_right)) - assert out.shape[-1] == target_length - return out - - -def tensor_chunk(tensor_or_chunk): - if isinstance(tensor_or_chunk, TensorChunk): - return tensor_or_chunk - else: - assert isinstance(tensor_or_chunk, th.Tensor) - return TensorChunk(tensor_or_chunk) - - -def apply_model(model, mix, shifts=None, split=False, - overlap=0.25, transition_power=1., progress=False): - """ - Apply model to a given mixture. - - Args: - shifts (int): if > 0, will shift in time `mix` by a random amount between 0 and 0.5 sec - and apply the oppositve shift to the output. This is repeated `shifts` time and - all predictions are averaged. This effectively makes the model time equivariant - and improves SDR by up to 0.2 points. 
- split (bool): if True, the input will be broken down in 8 seconds extracts - and predictions will be performed individually on each and concatenated. - Useful for model with large memory footprint like Tasnet. - progress (bool): if True, show a progress bar (requires split=True) - """ - assert transition_power >= 1, "transition_power < 1 leads to weird behavior." - device = mix.device - channels, length = mix.shape - if split: - out = th.zeros(len(model.sources), channels, length, device=device) - sum_weight = th.zeros(length, device=device) - segment = model.segment_length - stride = int((1 - overlap) * segment) - offsets = range(0, length, stride) - scale = stride / model.samplerate - if progress: - offsets = tqdm.tqdm(offsets, unit_scale=scale, ncols=120, unit='seconds') - # We start from a triangle shaped weight, with maximal weight in the middle - # of the segment. Then we normalize and take to the power `transition_power`. - # Large values of transition power will lead to sharper transitions. - weight = th.cat([th.arange(1, segment // 2 + 1), - th.arange(segment - segment // 2, 0, -1)]).to(device) - assert len(weight) == segment - # If the overlap < 50%, this will translate to linear transition when - # transition_power is 1. - weight = (weight / weight.max())**transition_power - for offset in offsets: - chunk = TensorChunk(mix, offset, segment) - chunk_out = apply_model(model, chunk, shifts=shifts) - chunk_length = chunk_out.shape[-1] - out[..., offset:offset + segment] += weight[:chunk_length] * chunk_out - sum_weight[offset:offset + segment] += weight[:chunk_length] - offset += segment - assert sum_weight.min() > 0 - out /= sum_weight - return out - elif shifts: - max_shift = int(0.5 * model.samplerate) - mix = tensor_chunk(mix) - padded_mix = mix.padded(length + 2 * max_shift) - out = 0 - for _ in range(shifts): - offset = random.randint(0, max_shift) - shifted = TensorChunk(padded_mix, offset, length + max_shift - offset) - shifted_out = apply_model(model, shifted) - out += shifted_out[..., max_shift - offset:] - out /= shifts - return out - else: - valid_length = model.valid_length(length) - mix = tensor_chunk(mix) - padded_mix = mix.padded(valid_length) - with th.no_grad(): - out = model(padded_mix.unsqueeze(0))[0] - return center_trim(out, length) - - -@contextmanager -def temp_filenames(count, delete=True): - names = [] - try: - for _ in range(count): - names.append(tempfile.NamedTemporaryFile(delete=False).name) - yield names - finally: - if delete: - for name in names: - os.unlink(name) - - -def get_quantizer(model, args, optimizer=None): - quantizer = None - if args.diffq: - quantizer = DiffQuantizer( - model, min_size=args.q_min_size, group_size=8) - if optimizer is not None: - quantizer.setup_optimizer(optimizer) - elif args.qat: - quantizer = UniformQuantizer( - model, bits=args.qat, min_size=args.q_min_size) - return quantizer - - -def load_model(path, strict=False): - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - load_from = path - package = th.load(load_from, 'cpu') - - klass = package["klass"] - args = package["args"] - kwargs = package["kwargs"] - - if strict: - model = klass(*args, **kwargs) - else: - sig = inspect.signature(klass) - for key in list(kwargs): - if key not in sig.parameters: - warnings.warn("Dropping inexistant parameter " + key) - del kwargs[key] - model = klass(*args, **kwargs) - - state = package["state"] - training_args = package["training_args"] - quantizer = get_quantizer(model, training_args) - - set_state(model, 
quantizer, state) - return model - - -def get_state(model, quantizer): - if quantizer is None: - state = {k: p.data.to('cpu') for k, p in model.state_dict().items()} - else: - state = quantizer.get_quantized_state() - buf = io.BytesIO() - th.save(state, buf) - state = {'compressed': zlib.compress(buf.getvalue())} - return state - - -def set_state(model, quantizer, state): - if quantizer is None: - model.load_state_dict(state) - else: - buf = io.BytesIO(zlib.decompress(state["compressed"])) - state = th.load(buf, "cpu") - quantizer.restore_quantized_state(state) - - return state - - -def save_state(state, path): - buf = io.BytesIO() - th.save(state, buf) - sig = hashlib.sha256(buf.getvalue()).hexdigest()[:8] - - path = path.parent / (path.stem + "-" + sig + path.suffix) - path.write_bytes(buf.getvalue()) - - -def save_model(model, quantizer, training_args, path): - args, kwargs = model._init_args_kwargs - klass = model.__class__ - - state = get_state(model, quantizer) - - save_to = path - package = { - 'klass': klass, - 'args': args, - 'kwargs': kwargs, - 'state': state, - 'training_args': training_args, - } - th.save(package, save_to) - - -def capture_init(init): - @functools.wraps(init) - def __init__(self, *args, **kwargs): - self._init_args_kwargs = (args, kwargs) - init(self, *args, **kwargs) - - return __init__ diff --git a/spaces/801artistry/RVC801/infer/lib/infer_pack/transforms.py b/spaces/801artistry/RVC801/infer/lib/infer_pack/transforms.py deleted file mode 100644 index 6f30b7177d17fc61a4173c21b4233172a890be58..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/infer/lib/infer_pack/transforms.py +++ /dev/null @@ -1,207 +0,0 @@ -import numpy as np -import torch -from torch.nn import functional as F - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - 
unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * 
input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/docs/README.ko.han.md b/spaces/AI-Hobbyist/Hoyo-RVC/docs/README.ko.han.md deleted file mode 100644 index 2b6bbff02cfbe61fdbccb81d069285eb513c160b..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/docs/README.ko.han.md +++ /dev/null @@ -1,100 +0,0 @@ -
- -

Retrieval-based-Voice-Conversion-WebUI

-A simple and easy-to-use voice conversion framework based on VITS<br><br>

- -[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI) - -
- -[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb) -[![Licence](https://img.shields.io/github/license/liujing04/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt) -[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/) - -[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk) - -
- ------ -[**Changelog**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md) - -[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md)) - -> Check out the [demo video](https://www.bilibili.com/video/BV1pm4y1z7Gm/)! - -> Real-time voice conversion using RVC: [w-okada/voice-changer](https://github.com/w-okada/voice-changer) - -> The base model was trained on roughly 50 hours of the high-quality, open-source VCTK dataset, so there are no copyright concerns; feel free to use it. - -> High-quality, copyright-free song models will continue to be trained and released in the future. - -## Introduction -This repo has the following features: -+ Top-1 retrieval replaces the input source features with training-set features to prevent timbre leakage (a rough illustrative sketch of this idea appears just before the References section below); -+ Fast training even on relatively weak GPUs; -+ Good results from small amounts of training data (at least 10 minutes of low-noise speech is recommended); -+ Timbre can be changed through model fusion (ckpt processing tab -> ckpt merge); -+ Easy-to-use WebUI (web user interface); -+ Fast separation of vocals and background music using the UVR5 model; - -## Preparing the environment -Installing the dependencies through poetry is recommended. - -The following commands must be run in an environment with Python 3.8 or higher: -```bash -# Install the main PyTorch-related dependencies; skip if already installed -# Reference: https://pytorch.org/get-started/locally/ -pip install torch torchvision torchaudio - -# For Windows + Nvidia Ampere architecture (RTX30xx), you need to specify the CUDA version matching your PyTorch build, as noted in #21. -#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117 - -# Install Poetry; skip if already installed -# Reference: https://python-poetry.org/docs/#installation -curl -sSL https://install.python-poetry.org | python3 - - -# Install the dependencies -poetry install -``` -Installing the dependencies with pip works as well. - -```bash -pip install -r requirements.txt -``` - -## Preparing other pretrained models -RVC needs other pretrained models for inference and training. - -They can be downloaded from the [Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/). - -The following is a list of the pretrained models and other files that RVC requires: -```bash -hubert_base.pt - -./pretrained - -./uvr5_weights - -# If you are using Windows, you may also need this file; skip it if FFmpeg is already installed. -ffmpeg.exe -``` -You can then start the WebUI with the following command: -```bash -python infer-web.py -``` -If you are using Windows, you can also download and extract `RVC-beta.7z` to use RVC directly, or use `go-web.bat` to launch the WebUI directly. 
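As a rough, self-contained illustration of the top-1 retrieval idea listed in the introduction above (this is not the RVC implementation, which builds an index over the training-set features; the function name, array shapes, and the `index_rate` blending parameter are assumptions made for this sketch only):

```python
# Illustrative sketch only -- not RVC code. Each input feature frame is
# replaced by (a blend with) its nearest neighbour from the training set,
# which is the "top-1 retrieval" idea described in the introduction.
import numpy as np

def retrieve_top1(input_feats: np.ndarray,
                  train_feats: np.ndarray,
                  index_rate: float = 0.75) -> np.ndarray:
    """Blend each input frame with its nearest training-set frame.

    input_feats: (T, D) content features extracted from the source audio.
    train_feats: (N, D) content features collected from the training set.
    index_rate:  0.0 keeps the input unchanged, 1.0 uses pure retrieval.
    """
    # Squared L2 distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = ((input_feats ** 2).sum(1, keepdims=True)
          + (train_feats ** 2).sum(1)
          - 2.0 * input_feats @ train_feats.T)
    nearest = train_feats[d2.argmin(axis=1)]   # (T, D) top-1 matches
    return index_rate * nearest + (1.0 - index_rate) * input_feats

# Example: 200 frames of 256-dim features against a 10k-frame training set.
mixed = retrieve_top1(np.random.randn(200, 256), np.random.randn(10000, 256))
```

In practice a dedicated nearest-neighbour index, rather than the brute-force distance matrix above, is what keeps this step fast enough for real-time use.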
- -## References -+ [ContentVec](https://github.com/auspicious3000/contentvec/) -+ [VITS](https://github.com/jaywalnut310/vits) -+ [HIFIGAN](https://github.com/jik876/hifi-gan) -+ [Gradio](https://github.com/gradio-app/gradio) -+ [FFmpeg](https://github.com/FFmpeg/FFmpeg) -+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui) -+ [audio-slicer](https://github.com/openvpi/audio-slicer) -## Thanks to all contributors for their hard work - - - - - diff --git a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/build_vocab_spacy.py b/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/build_vocab_spacy.py deleted file mode 100644 index 84da679f79d9f36b288d7312fb4ad9dc04723b0d..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/build_vocab_spacy.py +++ /dev/null @@ -1,152 +0,0 @@ -import json -from tqdm import tqdm -import logging -import pickle -from collections import Counter -import re -import fire - -class Vocabulary(object): - """Simple vocabulary wrapper.""" - def __init__(self): - self.word2idx = {} - self.idx2word = {} - self.idx = 0 - - def add_word(self, word): - if not word in self.word2idx: - self.word2idx[word] = self.idx - self.idx2word[self.idx] = word - self.idx += 1 - - def __call__(self, word): - if not word in self.word2idx: - return self.word2idx[""] - return self.word2idx[word] - - def __len__(self): - return len(self.word2idx) - - -def build_vocab(input_json: str, - output_json: str, - threshold: int, - keep_punctuation: bool, - host_address: str, - character_level: bool = False, - retokenize: bool = True, - zh: bool = True ): - """Build vocabulary from a json file with a given threshold to drop all counts < threshold - - Args: - input_json(string): Preprocessed json file. Structure like this: - { - 'audios': [ - { - 'audio_id': 'xxx', - 'captions': [ - { - 'caption': 'xxx', - 'cap_id': 'xxx' - } - ] - }, - ... - ] - } - threshold (int): Threshold to drop all words with counts < threshold - keep_punctuation (bool): Includes or excludes punctuation. 
- - Returns: - vocab (Vocab): Object with the processed vocabulary -""" - data = json.load(open(input_json, "r"))["audios"] - counter = Counter() - if retokenize: - pretokenized = False - else: - pretokenized = "tokens" in data[0]["captions"][0] - - if zh: - from nltk.parse.corenlp import CoreNLPParser - from zhon.hanzi import punctuation - if not pretokenized: - parser = CoreNLPParser(host_address) - for audio_idx in tqdm(range(len(data)), leave=False, ascii=True): - for cap_idx in range(len(data[audio_idx]["captions"])): - if pretokenized: - tokens = data[audio_idx]["captions"][cap_idx]["tokens"].split() - else: - caption = data[audio_idx]["captions"][cap_idx]["caption"] - # Remove all punctuations - if not keep_punctuation: - caption = re.sub("[{}]".format(punctuation), "", caption) - if character_level: - tokens = list(caption) - else: - tokens = list(parser.tokenize(caption)) - data[audio_idx]["captions"][cap_idx]["tokens"] = " ".join(tokens) - counter.update(tokens) - else: - if pretokenized: - for audio_idx in tqdm(range(len(data)), leave=False, ascii=True): - for cap_idx in range(len(data[audio_idx]["captions"])): - tokens = data[audio_idx]["captions"][cap_idx]["tokens"].split() - counter.update(tokens) - else: - import spacy - tokenizer = spacy.load("en_core_web_sm", disable=["parser", "ner"]) - for audio_idx in tqdm(range(len(data)), leave=False, ascii=True): - captions = data[audio_idx]["captions"] - for cap_idx in range(len(captions)): - caption = captions[cap_idx]["caption"] - doc = tokenizer(caption) - tokens = " ".join([str(token).lower() for token in doc]) - data[audio_idx]["captions"][cap_idx]["tokens"] = tokens - counter.update(tokens.split(" ")) - - if not pretokenized: - if output_json is None: - json.dump({ "audios": data }, open(input_json, "w"), - indent=4, ensure_ascii=not zh) - else: - json.dump({ "audios": data }, open(output_json, "w"), - indent=4, ensure_ascii=not zh) - - words = [word for word, cnt in counter.items() if cnt >= threshold] - - # Create a vocab wrapper and add some special tokens. - vocab = Vocabulary() - vocab.add_word("") - vocab.add_word("") - vocab.add_word("") - vocab.add_word("") - - # Add the words to the vocabulary. 
- for word in words: - vocab.add_word(word) - return vocab - -def process(input_json: str, - output_file: str, - output_json: str = None, - threshold: int = 1, - keep_punctuation: bool = False, - character_level: bool = False, - retokenize: bool = False, - host_address: str = "http://localhost:9000", - zh: bool = True): - logfmt = "%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s" - logging.basicConfig(level=logging.INFO, format=logfmt) - logging.info("Build Vocab") - vocabulary = build_vocab( - input_json=input_json, output_json=output_json, threshold=threshold, - keep_punctuation=keep_punctuation, host_address=host_address, - character_level=character_level, retokenize=retokenize, zh=zh) - pickle.dump(vocabulary, open(output_file, "wb")) - logging.info("Total vocabulary size: {}".format(len(vocabulary))) - logging.info("Saved vocab to '{}'".format(output_file)) - - -if __name__ == '__main__': - fire.Fire(process) diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py deleted file mode 100644 index d5833cd4cff1c9290f9703f3470b1fc63bd752e3..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py +++ /dev/null @@ -1,2861 +0,0 @@ -default_scope = 'mmpose' -default_hooks = dict( - timer=dict(type='IterTimerHook'), - logger=dict(type='LoggerHook', interval=50), - param_scheduler=dict(type='ParamSchedulerHook'), - checkpoint=dict( - type='CheckpointHook', interval=10, save_best='PCK', rule='greater'), - sampler_seed=dict(type='DistSamplerSeedHook'), - visualization=dict(type='PoseVisualizationHook', enable=False)) -custom_hooks = [dict(type='SyncBuffersHook')] -env_cfg = dict( - cudnn_benchmark=False, - mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), - dist_cfg=dict(backend='nccl')) -vis_backends = [dict(type='LocalVisBackend')] -visualizer = dict( - type='PoseLocalVisualizer', - vis_backends=[dict(type='LocalVisBackend'), - dict(type='WandbVisBackend')], - name='visualizer') -log_processor = dict( - type='LogProcessor', window_size=50, by_epoch=True, num_digits=6) -log_level = 'INFO' -load_from = None -resume = False -backend_args = dict(backend='local') -train_cfg = dict(by_epoch=True, max_epochs=120, val_interval=10) -val_cfg = dict() -test_cfg = dict() -colors = dict( - sss=[255, 128, 0], - lss=[255, 0, 128], - sso=[128, 0, 255], - lso=[0, 128, 255], - vest=[0, 128, 128], - sling=[0, 0, 128], - shorts=[128, 128, 128], - trousers=[128, 0, 128], - skirt=[64, 128, 128], - ssd=[64, 64, 128], - lsd=[128, 64, 0], - vd=[128, 64, 255], - sd=[128, 64, 0]) -dataset_info = dict( - dataset_name='deepfashion2', - paper_info=dict( - author= - 'Yuying Ge and Ruimao Zhang and Lingyun Wu and Xiaogang Wang and Xiaoou Tang and Ping Luo', - title= - 'DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images', - container= - 'Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)', - year='2019', - homepage='https://github.com/switchablenorms/DeepFashion2'), - keypoint_info=dict({ - 0: - 
dict(name='sss_kpt1', id=0, color=[255, 128, 0], type='', swap=''), - 1: - dict( - name='sss_kpt2', - id=1, - color=[255, 128, 0], - type='', - swap='sss_kpt6'), - 2: - dict( - name='sss_kpt3', - id=2, - color=[255, 128, 0], - type='', - swap='sss_kpt5'), - 3: - dict(name='sss_kpt4', id=3, color=[255, 128, 0], type='', swap=''), - 4: - dict( - name='sss_kpt5', - id=4, - color=[255, 128, 0], - type='', - swap='sss_kpt3'), - 5: - dict( - name='sss_kpt6', - id=5, - color=[255, 128, 0], - type='', - swap='sss_kpt2'), - 6: - dict( - name='sss_kpt7', - id=6, - color=[255, 128, 0], - type='', - swap='sss_kpt25'), - 7: - dict( - name='sss_kpt8', - id=7, - color=[255, 128, 0], - type='', - swap='sss_kpt24'), - 8: - dict( - name='sss_kpt9', - id=8, - color=[255, 128, 0], - type='', - swap='sss_kpt23'), - 9: - dict( - name='sss_kpt10', - id=9, - color=[255, 128, 0], - type='', - swap='sss_kpt22'), - 10: - dict( - name='sss_kpt11', - id=10, - color=[255, 128, 0], - type='', - swap='sss_kpt21'), - 11: - dict( - name='sss_kpt12', - id=11, - color=[255, 128, 0], - type='', - swap='sss_kpt20'), - 12: - dict( - name='sss_kpt13', - id=12, - color=[255, 128, 0], - type='', - swap='sss_kpt19'), - 13: - dict( - name='sss_kpt14', - id=13, - color=[255, 128, 0], - type='', - swap='sss_kpt18'), - 14: - dict( - name='sss_kpt15', - id=14, - color=[255, 128, 0], - type='', - swap='sss_kpt17'), - 15: - dict(name='sss_kpt16', id=15, color=[255, 128, 0], type='', swap=''), - 16: - dict( - name='sss_kpt17', - id=16, - color=[255, 128, 0], - type='', - swap='sss_kpt15'), - 17: - dict( - name='sss_kpt18', - id=17, - color=[255, 128, 0], - type='', - swap='sss_kpt14'), - 18: - dict( - name='sss_kpt19', - id=18, - color=[255, 128, 0], - type='', - swap='sss_kpt13'), - 19: - dict( - name='sss_kpt20', - id=19, - color=[255, 128, 0], - type='', - swap='sss_kpt12'), - 20: - dict( - name='sss_kpt21', - id=20, - color=[255, 128, 0], - type='', - swap='sss_kpt11'), - 21: - dict( - name='sss_kpt22', - id=21, - color=[255, 128, 0], - type='', - swap='sss_kpt10'), - 22: - dict( - name='sss_kpt23', - id=22, - color=[255, 128, 0], - type='', - swap='sss_kpt9'), - 23: - dict( - name='sss_kpt24', - id=23, - color=[255, 128, 0], - type='', - swap='sss_kpt8'), - 24: - dict( - name='sss_kpt25', - id=24, - color=[255, 128, 0], - type='', - swap='sss_kpt7'), - 25: - dict(name='lss_kpt1', id=25, color=[255, 0, 128], type='', swap=''), - 26: - dict( - name='lss_kpt2', - id=26, - color=[255, 0, 128], - type='', - swap='lss_kpt6'), - 27: - dict( - name='lss_kpt3', - id=27, - color=[255, 0, 128], - type='', - swap='lss_kpt5'), - 28: - dict(name='lss_kpt4', id=28, color=[255, 0, 128], type='', swap=''), - 29: - dict( - name='lss_kpt5', - id=29, - color=[255, 0, 128], - type='', - swap='lss_kpt3'), - 30: - dict( - name='lss_kpt6', - id=30, - color=[255, 0, 128], - type='', - swap='lss_kpt2'), - 31: - dict( - name='lss_kpt7', - id=31, - color=[255, 0, 128], - type='', - swap='lss_kpt33'), - 32: - dict( - name='lss_kpt8', - id=32, - color=[255, 0, 128], - type='', - swap='lss_kpt32'), - 33: - dict( - name='lss_kpt9', - id=33, - color=[255, 0, 128], - type='', - swap='lss_kpt31'), - 34: - dict( - name='lss_kpt10', - id=34, - color=[255, 0, 128], - type='', - swap='lss_kpt30'), - 35: - dict( - name='lss_kpt11', - id=35, - color=[255, 0, 128], - type='', - swap='lss_kpt29'), - 36: - dict( - name='lss_kpt12', - id=36, - color=[255, 0, 128], - type='', - swap='lss_kpt28'), - 37: - dict( - name='lss_kpt13', - id=37, - color=[255, 0, 128], - type='', - 
swap='lss_kpt27'), - 38: - dict( - name='lss_kpt14', - id=38, - color=[255, 0, 128], - type='', - swap='lss_kpt26'), - 39: - dict( - name='lss_kpt15', - id=39, - color=[255, 0, 128], - type='', - swap='lss_kpt25'), - 40: - dict( - name='lss_kpt16', - id=40, - color=[255, 0, 128], - type='', - swap='lss_kpt24'), - 41: - dict( - name='lss_kpt17', - id=41, - color=[255, 0, 128], - type='', - swap='lss_kpt23'), - 42: - dict( - name='lss_kpt18', - id=42, - color=[255, 0, 128], - type='', - swap='lss_kpt22'), - 43: - dict( - name='lss_kpt19', - id=43, - color=[255, 0, 128], - type='', - swap='lss_kpt21'), - 44: - dict(name='lss_kpt20', id=44, color=[255, 0, 128], type='', swap=''), - 45: - dict( - name='lss_kpt21', - id=45, - color=[255, 0, 128], - type='', - swap='lss_kpt19'), - 46: - dict( - name='lss_kpt22', - id=46, - color=[255, 0, 128], - type='', - swap='lss_kpt18'), - 47: - dict( - name='lss_kpt23', - id=47, - color=[255, 0, 128], - type='', - swap='lss_kpt17'), - 48: - dict( - name='lss_kpt24', - id=48, - color=[255, 0, 128], - type='', - swap='lss_kpt16'), - 49: - dict( - name='lss_kpt25', - id=49, - color=[255, 0, 128], - type='', - swap='lss_kpt15'), - 50: - dict( - name='lss_kpt26', - id=50, - color=[255, 0, 128], - type='', - swap='lss_kpt14'), - 51: - dict( - name='lss_kpt27', - id=51, - color=[255, 0, 128], - type='', - swap='lss_kpt13'), - 52: - dict( - name='lss_kpt28', - id=52, - color=[255, 0, 128], - type='', - swap='lss_kpt12'), - 53: - dict( - name='lss_kpt29', - id=53, - color=[255, 0, 128], - type='', - swap='lss_kpt11'), - 54: - dict( - name='lss_kpt30', - id=54, - color=[255, 0, 128], - type='', - swap='lss_kpt10'), - 55: - dict( - name='lss_kpt31', - id=55, - color=[255, 0, 128], - type='', - swap='lss_kpt9'), - 56: - dict( - name='lss_kpt32', - id=56, - color=[255, 0, 128], - type='', - swap='lss_kpt8'), - 57: - dict( - name='lss_kpt33', - id=57, - color=[255, 0, 128], - type='', - swap='lss_kpt7'), - 58: - dict(name='sso_kpt1', id=58, color=[128, 0, 255], type='', swap=''), - 59: - dict( - name='sso_kpt2', - id=59, - color=[128, 0, 255], - type='', - swap='sso_kpt26'), - 60: - dict( - name='sso_kpt3', - id=60, - color=[128, 0, 255], - type='', - swap='sso_kpt5'), - 61: - dict( - name='sso_kpt4', - id=61, - color=[128, 0, 255], - type='', - swap='sso_kpt6'), - 62: - dict( - name='sso_kpt5', - id=62, - color=[128, 0, 255], - type='', - swap='sso_kpt3'), - 63: - dict( - name='sso_kpt6', - id=63, - color=[128, 0, 255], - type='', - swap='sso_kpt4'), - 64: - dict( - name='sso_kpt7', - id=64, - color=[128, 0, 255], - type='', - swap='sso_kpt25'), - 65: - dict( - name='sso_kpt8', - id=65, - color=[128, 0, 255], - type='', - swap='sso_kpt24'), - 66: - dict( - name='sso_kpt9', - id=66, - color=[128, 0, 255], - type='', - swap='sso_kpt23'), - 67: - dict( - name='sso_kpt10', - id=67, - color=[128, 0, 255], - type='', - swap='sso_kpt22'), - 68: - dict( - name='sso_kpt11', - id=68, - color=[128, 0, 255], - type='', - swap='sso_kpt21'), - 69: - dict( - name='sso_kpt12', - id=69, - color=[128, 0, 255], - type='', - swap='sso_kpt20'), - 70: - dict( - name='sso_kpt13', - id=70, - color=[128, 0, 255], - type='', - swap='sso_kpt19'), - 71: - dict( - name='sso_kpt14', - id=71, - color=[128, 0, 255], - type='', - swap='sso_kpt18'), - 72: - dict( - name='sso_kpt15', - id=72, - color=[128, 0, 255], - type='', - swap='sso_kpt17'), - 73: - dict( - name='sso_kpt16', - id=73, - color=[128, 0, 255], - type='', - swap='sso_kpt29'), - 74: - dict( - name='sso_kpt17', - id=74, - color=[128, 0, 
255], - type='', - swap='sso_kpt15'), - 75: - dict( - name='sso_kpt18', - id=75, - color=[128, 0, 255], - type='', - swap='sso_kpt14'), - 76: - dict( - name='sso_kpt19', - id=76, - color=[128, 0, 255], - type='', - swap='sso_kpt13'), - 77: - dict( - name='sso_kpt20', - id=77, - color=[128, 0, 255], - type='', - swap='sso_kpt12'), - 78: - dict( - name='sso_kpt21', - id=78, - color=[128, 0, 255], - type='', - swap='sso_kpt11'), - 79: - dict( - name='sso_kpt22', - id=79, - color=[128, 0, 255], - type='', - swap='sso_kpt10'), - 80: - dict( - name='sso_kpt23', - id=80, - color=[128, 0, 255], - type='', - swap='sso_kpt9'), - 81: - dict( - name='sso_kpt24', - id=81, - color=[128, 0, 255], - type='', - swap='sso_kpt8'), - 82: - dict( - name='sso_kpt25', - id=82, - color=[128, 0, 255], - type='', - swap='sso_kpt7'), - 83: - dict( - name='sso_kpt26', - id=83, - color=[128, 0, 255], - type='', - swap='sso_kpt2'), - 84: - dict( - name='sso_kpt27', - id=84, - color=[128, 0, 255], - type='', - swap='sso_kpt30'), - 85: - dict( - name='sso_kpt28', - id=85, - color=[128, 0, 255], - type='', - swap='sso_kpt31'), - 86: - dict( - name='sso_kpt29', - id=86, - color=[128, 0, 255], - type='', - swap='sso_kpt16'), - 87: - dict( - name='sso_kpt30', - id=87, - color=[128, 0, 255], - type='', - swap='sso_kpt27'), - 88: - dict( - name='sso_kpt31', - id=88, - color=[128, 0, 255], - type='', - swap='sso_kpt28'), - 89: - dict(name='lso_kpt1', id=89, color=[0, 128, 255], type='', swap=''), - 90: - dict( - name='lso_kpt2', - id=90, - color=[0, 128, 255], - type='', - swap='lso_kpt6'), - 91: - dict( - name='lso_kpt3', - id=91, - color=[0, 128, 255], - type='', - swap='lso_kpt5'), - 92: - dict( - name='lso_kpt4', - id=92, - color=[0, 128, 255], - type='', - swap='lso_kpt34'), - 93: - dict( - name='lso_kpt5', - id=93, - color=[0, 128, 255], - type='', - swap='lso_kpt3'), - 94: - dict( - name='lso_kpt6', - id=94, - color=[0, 128, 255], - type='', - swap='lso_kpt2'), - 95: - dict( - name='lso_kpt7', - id=95, - color=[0, 128, 255], - type='', - swap='lso_kpt33'), - 96: - dict( - name='lso_kpt8', - id=96, - color=[0, 128, 255], - type='', - swap='lso_kpt32'), - 97: - dict( - name='lso_kpt9', - id=97, - color=[0, 128, 255], - type='', - swap='lso_kpt31'), - 98: - dict( - name='lso_kpt10', - id=98, - color=[0, 128, 255], - type='', - swap='lso_kpt30'), - 99: - dict( - name='lso_kpt11', - id=99, - color=[0, 128, 255], - type='', - swap='lso_kpt29'), - 100: - dict( - name='lso_kpt12', - id=100, - color=[0, 128, 255], - type='', - swap='lso_kpt28'), - 101: - dict( - name='lso_kpt13', - id=101, - color=[0, 128, 255], - type='', - swap='lso_kpt27'), - 102: - dict( - name='lso_kpt14', - id=102, - color=[0, 128, 255], - type='', - swap='lso_kpt26'), - 103: - dict( - name='lso_kpt15', - id=103, - color=[0, 128, 255], - type='', - swap='lso_kpt25'), - 104: - dict( - name='lso_kpt16', - id=104, - color=[0, 128, 255], - type='', - swap='lso_kpt24'), - 105: - dict( - name='lso_kpt17', - id=105, - color=[0, 128, 255], - type='', - swap='lso_kpt23'), - 106: - dict( - name='lso_kpt18', - id=106, - color=[0, 128, 255], - type='', - swap='lso_kpt22'), - 107: - dict( - name='lso_kpt19', - id=107, - color=[0, 128, 255], - type='', - swap='lso_kpt21'), - 108: - dict( - name='lso_kpt20', - id=108, - color=[0, 128, 255], - type='', - swap='lso_kpt37'), - 109: - dict( - name='lso_kpt21', - id=109, - color=[0, 128, 255], - type='', - swap='lso_kpt19'), - 110: - dict( - name='lso_kpt22', - id=110, - color=[0, 128, 255], - type='', - swap='lso_kpt18'), - 
111: - dict( - name='lso_kpt23', - id=111, - color=[0, 128, 255], - type='', - swap='lso_kpt17'), - 112: - dict( - name='lso_kpt24', - id=112, - color=[0, 128, 255], - type='', - swap='lso_kpt16'), - 113: - dict( - name='lso_kpt25', - id=113, - color=[0, 128, 255], - type='', - swap='lso_kpt15'), - 114: - dict( - name='lso_kpt26', - id=114, - color=[0, 128, 255], - type='', - swap='lso_kpt14'), - 115: - dict( - name='lso_kpt27', - id=115, - color=[0, 128, 255], - type='', - swap='lso_kpt13'), - 116: - dict( - name='lso_kpt28', - id=116, - color=[0, 128, 255], - type='', - swap='lso_kpt12'), - 117: - dict( - name='lso_kpt29', - id=117, - color=[0, 128, 255], - type='', - swap='lso_kpt11'), - 118: - dict( - name='lso_kpt30', - id=118, - color=[0, 128, 255], - type='', - swap='lso_kpt10'), - 119: - dict( - name='lso_kpt31', - id=119, - color=[0, 128, 255], - type='', - swap='lso_kpt9'), - 120: - dict( - name='lso_kpt32', - id=120, - color=[0, 128, 255], - type='', - swap='lso_kpt8'), - 121: - dict( - name='lso_kpt33', - id=121, - color=[0, 128, 255], - type='', - swap='lso_kpt7'), - 122: - dict( - name='lso_kpt34', - id=122, - color=[0, 128, 255], - type='', - swap='lso_kpt4'), - 123: - dict( - name='lso_kpt35', - id=123, - color=[0, 128, 255], - type='', - swap='lso_kpt38'), - 124: - dict( - name='lso_kpt36', - id=124, - color=[0, 128, 255], - type='', - swap='lso_kpt39'), - 125: - dict( - name='lso_kpt37', - id=125, - color=[0, 128, 255], - type='', - swap='lso_kpt20'), - 126: - dict( - name='lso_kpt38', - id=126, - color=[0, 128, 255], - type='', - swap='lso_kpt35'), - 127: - dict( - name='lso_kpt39', - id=127, - color=[0, 128, 255], - type='', - swap='lso_kpt36'), - 128: - dict(name='vest_kpt1', id=128, color=[0, 128, 128], type='', swap=''), - 129: - dict( - name='vest_kpt2', - id=129, - color=[0, 128, 128], - type='', - swap='vest_kpt6'), - 130: - dict( - name='vest_kpt3', - id=130, - color=[0, 128, 128], - type='', - swap='vest_kpt5'), - 131: - dict(name='vest_kpt4', id=131, color=[0, 128, 128], type='', swap=''), - 132: - dict( - name='vest_kpt5', - id=132, - color=[0, 128, 128], - type='', - swap='vest_kpt3'), - 133: - dict( - name='vest_kpt6', - id=133, - color=[0, 128, 128], - type='', - swap='vest_kpt2'), - 134: - dict( - name='vest_kpt7', - id=134, - color=[0, 128, 128], - type='', - swap='vest_kpt15'), - 135: - dict( - name='vest_kpt8', - id=135, - color=[0, 128, 128], - type='', - swap='vest_kpt14'), - 136: - dict( - name='vest_kpt9', - id=136, - color=[0, 128, 128], - type='', - swap='vest_kpt13'), - 137: - dict( - name='vest_kpt10', - id=137, - color=[0, 128, 128], - type='', - swap='vest_kpt12'), - 138: - dict(name='vest_kpt11', id=138, color=[0, 128, 128], type='', swap=''), - 139: - dict( - name='vest_kpt12', - id=139, - color=[0, 128, 128], - type='', - swap='vest_kpt10'), - 140: - dict(name='vest_kpt13', id=140, color=[0, 128, 128], type='', swap=''), - 141: - dict( - name='vest_kpt14', - id=141, - color=[0, 128, 128], - type='', - swap='vest_kpt8'), - 142: - dict( - name='vest_kpt15', - id=142, - color=[0, 128, 128], - type='', - swap='vest_kpt7'), - 143: - dict(name='sling_kpt1', id=143, color=[0, 0, 128], type='', swap=''), - 144: - dict( - name='sling_kpt2', - id=144, - color=[0, 0, 128], - type='', - swap='sling_kpt6'), - 145: - dict( - name='sling_kpt3', - id=145, - color=[0, 0, 128], - type='', - swap='sling_kpt5'), - 146: - dict(name='sling_kpt4', id=146, color=[0, 0, 128], type='', swap=''), - 147: - dict( - name='sling_kpt5', - id=147, - color=[0, 0, 128], - 
type='', - swap='sling_kpt3'), - 148: - dict( - name='sling_kpt6', - id=148, - color=[0, 0, 128], - type='', - swap='sling_kpt2'), - 149: - dict( - name='sling_kpt7', - id=149, - color=[0, 0, 128], - type='', - swap='sling_kpt15'), - 150: - dict( - name='sling_kpt8', - id=150, - color=[0, 0, 128], - type='', - swap='sling_kpt14'), - 151: - dict( - name='sling_kpt9', - id=151, - color=[0, 0, 128], - type='', - swap='sling_kpt13'), - 152: - dict( - name='sling_kpt10', - id=152, - color=[0, 0, 128], - type='', - swap='sling_kpt12'), - 153: - dict(name='sling_kpt11', id=153, color=[0, 0, 128], type='', swap=''), - 154: - dict( - name='sling_kpt12', - id=154, - color=[0, 0, 128], - type='', - swap='sling_kpt10'), - 155: - dict( - name='sling_kpt13', - id=155, - color=[0, 0, 128], - type='', - swap='sling_kpt9'), - 156: - dict( - name='sling_kpt14', - id=156, - color=[0, 0, 128], - type='', - swap='sling_kpt8'), - 157: - dict( - name='sling_kpt15', - id=157, - color=[0, 0, 128], - type='', - swap='sling_kpt7'), - 158: - dict( - name='shorts_kpt1', - id=158, - color=[128, 128, 128], - type='', - swap='shorts_kpt3'), - 159: - dict( - name='shorts_kpt2', - id=159, - color=[128, 128, 128], - type='', - swap=''), - 160: - dict( - name='shorts_kpt3', - id=160, - color=[128, 128, 128], - type='', - swap='shorts_kpt1'), - 161: - dict( - name='shorts_kpt4', - id=161, - color=[128, 128, 128], - type='', - swap='shorts_kpt10'), - 162: - dict( - name='shorts_kpt5', - id=162, - color=[128, 128, 128], - type='', - swap='shorts_kpt9'), - 163: - dict( - name='shorts_kpt6', - id=163, - color=[128, 128, 128], - type='', - swap='shorts_kpt8'), - 164: - dict( - name='shorts_kpt7', - id=164, - color=[128, 128, 128], - type='', - swap=''), - 165: - dict( - name='shorts_kpt8', - id=165, - color=[128, 128, 128], - type='', - swap='shorts_kpt6'), - 166: - dict( - name='shorts_kpt9', - id=166, - color=[128, 128, 128], - type='', - swap='shorts_kpt5'), - 167: - dict( - name='shorts_kpt10', - id=167, - color=[128, 128, 128], - type='', - swap='shorts_kpt4'), - 168: - dict( - name='trousers_kpt1', - id=168, - color=[128, 0, 128], - type='', - swap='trousers_kpt3'), - 169: - dict( - name='trousers_kpt2', - id=169, - color=[128, 0, 128], - type='', - swap=''), - 170: - dict( - name='trousers_kpt3', - id=170, - color=[128, 0, 128], - type='', - swap='trousers_kpt1'), - 171: - dict( - name='trousers_kpt4', - id=171, - color=[128, 0, 128], - type='', - swap='trousers_kpt14'), - 172: - dict( - name='trousers_kpt5', - id=172, - color=[128, 0, 128], - type='', - swap='trousers_kpt13'), - 173: - dict( - name='trousers_kpt6', - id=173, - color=[128, 0, 128], - type='', - swap='trousers_kpt12'), - 174: - dict( - name='trousers_kpt7', - id=174, - color=[128, 0, 128], - type='', - swap='trousers_kpt11'), - 175: - dict( - name='trousers_kpt8', - id=175, - color=[128, 0, 128], - type='', - swap='trousers_kpt10'), - 176: - dict( - name='trousers_kpt9', - id=176, - color=[128, 0, 128], - type='', - swap=''), - 177: - dict( - name='trousers_kpt10', - id=177, - color=[128, 0, 128], - type='', - swap='trousers_kpt8'), - 178: - dict( - name='trousers_kpt11', - id=178, - color=[128, 0, 128], - type='', - swap='trousers_kpt7'), - 179: - dict( - name='trousers_kpt12', - id=179, - color=[128, 0, 128], - type='', - swap='trousers_kpt6'), - 180: - dict( - name='trousers_kpt13', - id=180, - color=[128, 0, 128], - type='', - swap='trousers_kpt5'), - 181: - dict( - name='trousers_kpt14', - id=181, - color=[128, 0, 128], - type='', - 
swap='trousers_kpt4'), - 182: - dict( - name='skirt_kpt1', - id=182, - color=[64, 128, 128], - type='', - swap='skirt_kpt3'), - 183: - dict( - name='skirt_kpt2', id=183, color=[64, 128, 128], type='', swap=''), - 184: - dict( - name='skirt_kpt3', - id=184, - color=[64, 128, 128], - type='', - swap='skirt_kpt1'), - 185: - dict( - name='skirt_kpt4', - id=185, - color=[64, 128, 128], - type='', - swap='skirt_kpt8'), - 186: - dict( - name='skirt_kpt5', - id=186, - color=[64, 128, 128], - type='', - swap='skirt_kpt7'), - 187: - dict( - name='skirt_kpt6', id=187, color=[64, 128, 128], type='', swap=''), - 188: - dict( - name='skirt_kpt7', - id=188, - color=[64, 128, 128], - type='', - swap='skirt_kpt5'), - 189: - dict( - name='skirt_kpt8', - id=189, - color=[64, 128, 128], - type='', - swap='skirt_kpt4'), - 190: - dict(name='ssd_kpt1', id=190, color=[64, 64, 128], type='', swap=''), - 191: - dict( - name='ssd_kpt2', - id=191, - color=[64, 64, 128], - type='', - swap='ssd_kpt6'), - 192: - dict( - name='ssd_kpt3', - id=192, - color=[64, 64, 128], - type='', - swap='ssd_kpt5'), - 193: - dict(name='ssd_kpt4', id=193, color=[64, 64, 128], type='', swap=''), - 194: - dict( - name='ssd_kpt5', - id=194, - color=[64, 64, 128], - type='', - swap='ssd_kpt3'), - 195: - dict( - name='ssd_kpt6', - id=195, - color=[64, 64, 128], - type='', - swap='ssd_kpt2'), - 196: - dict( - name='ssd_kpt7', - id=196, - color=[64, 64, 128], - type='', - swap='ssd_kpt29'), - 197: - dict( - name='ssd_kpt8', - id=197, - color=[64, 64, 128], - type='', - swap='ssd_kpt28'), - 198: - dict( - name='ssd_kpt9', - id=198, - color=[64, 64, 128], - type='', - swap='ssd_kpt27'), - 199: - dict( - name='ssd_kpt10', - id=199, - color=[64, 64, 128], - type='', - swap='ssd_kpt26'), - 200: - dict( - name='ssd_kpt11', - id=200, - color=[64, 64, 128], - type='', - swap='ssd_kpt25'), - 201: - dict( - name='ssd_kpt12', - id=201, - color=[64, 64, 128], - type='', - swap='ssd_kpt24'), - 202: - dict( - name='ssd_kpt13', - id=202, - color=[64, 64, 128], - type='', - swap='ssd_kpt23'), - 203: - dict( - name='ssd_kpt14', - id=203, - color=[64, 64, 128], - type='', - swap='ssd_kpt22'), - 204: - dict( - name='ssd_kpt15', - id=204, - color=[64, 64, 128], - type='', - swap='ssd_kpt21'), - 205: - dict( - name='ssd_kpt16', - id=205, - color=[64, 64, 128], - type='', - swap='ssd_kpt20'), - 206: - dict( - name='ssd_kpt17', - id=206, - color=[64, 64, 128], - type='', - swap='ssd_kpt19'), - 207: - dict(name='ssd_kpt18', id=207, color=[64, 64, 128], type='', swap=''), - 208: - dict( - name='ssd_kpt19', - id=208, - color=[64, 64, 128], - type='', - swap='ssd_kpt17'), - 209: - dict( - name='ssd_kpt20', - id=209, - color=[64, 64, 128], - type='', - swap='ssd_kpt16'), - 210: - dict( - name='ssd_kpt21', - id=210, - color=[64, 64, 128], - type='', - swap='ssd_kpt15'), - 211: - dict( - name='ssd_kpt22', - id=211, - color=[64, 64, 128], - type='', - swap='ssd_kpt14'), - 212: - dict( - name='ssd_kpt23', - id=212, - color=[64, 64, 128], - type='', - swap='ssd_kpt13'), - 213: - dict( - name='ssd_kpt24', - id=213, - color=[64, 64, 128], - type='', - swap='ssd_kpt12'), - 214: - dict( - name='ssd_kpt25', - id=214, - color=[64, 64, 128], - type='', - swap='ssd_kpt11'), - 215: - dict( - name='ssd_kpt26', - id=215, - color=[64, 64, 128], - type='', - swap='ssd_kpt10'), - 216: - dict( - name='ssd_kpt27', - id=216, - color=[64, 64, 128], - type='', - swap='ssd_kpt9'), - 217: - dict( - name='ssd_kpt28', - id=217, - color=[64, 64, 128], - type='', - swap='ssd_kpt8'), - 218: - dict( - 
name='ssd_kpt29', - id=218, - color=[64, 64, 128], - type='', - swap='ssd_kpt7'), - 219: - dict(name='lsd_kpt1', id=219, color=[128, 64, 0], type='', swap=''), - 220: - dict( - name='lsd_kpt2', - id=220, - color=[128, 64, 0], - type='', - swap='lsd_kpt6'), - 221: - dict( - name='lsd_kpt3', - id=221, - color=[128, 64, 0], - type='', - swap='lsd_kpt5'), - 222: - dict(name='lsd_kpt4', id=222, color=[128, 64, 0], type='', swap=''), - 223: - dict( - name='lsd_kpt5', - id=223, - color=[128, 64, 0], - type='', - swap='lsd_kpt3'), - 224: - dict( - name='lsd_kpt6', - id=224, - color=[128, 64, 0], - type='', - swap='lsd_kpt2'), - 225: - dict( - name='lsd_kpt7', - id=225, - color=[128, 64, 0], - type='', - swap='lsd_kpt37'), - 226: - dict( - name='lsd_kpt8', - id=226, - color=[128, 64, 0], - type='', - swap='lsd_kpt36'), - 227: - dict( - name='lsd_kpt9', - id=227, - color=[128, 64, 0], - type='', - swap='lsd_kpt35'), - 228: - dict( - name='lsd_kpt10', - id=228, - color=[128, 64, 0], - type='', - swap='lsd_kpt34'), - 229: - dict( - name='lsd_kpt11', - id=229, - color=[128, 64, 0], - type='', - swap='lsd_kpt33'), - 230: - dict( - name='lsd_kpt12', - id=230, - color=[128, 64, 0], - type='', - swap='lsd_kpt32'), - 231: - dict( - name='lsd_kpt13', - id=231, - color=[128, 64, 0], - type='', - swap='lsd_kpt31'), - 232: - dict( - name='lsd_kpt14', - id=232, - color=[128, 64, 0], - type='', - swap='lsd_kpt30'), - 233: - dict( - name='lsd_kpt15', - id=233, - color=[128, 64, 0], - type='', - swap='lsd_kpt29'), - 234: - dict( - name='lsd_kpt16', - id=234, - color=[128, 64, 0], - type='', - swap='lsd_kpt28'), - 235: - dict( - name='lsd_kpt17', - id=235, - color=[128, 64, 0], - type='', - swap='lsd_kpt27'), - 236: - dict( - name='lsd_kpt18', - id=236, - color=[128, 64, 0], - type='', - swap='lsd_kpt26'), - 237: - dict( - name='lsd_kpt19', - id=237, - color=[128, 64, 0], - type='', - swap='lsd_kpt25'), - 238: - dict( - name='lsd_kpt20', - id=238, - color=[128, 64, 0], - type='', - swap='lsd_kpt24'), - 239: - dict( - name='lsd_kpt21', - id=239, - color=[128, 64, 0], - type='', - swap='lsd_kpt23'), - 240: - dict(name='lsd_kpt22', id=240, color=[128, 64, 0], type='', swap=''), - 241: - dict( - name='lsd_kpt23', - id=241, - color=[128, 64, 0], - type='', - swap='lsd_kpt21'), - 242: - dict( - name='lsd_kpt24', - id=242, - color=[128, 64, 0], - type='', - swap='lsd_kpt20'), - 243: - dict( - name='lsd_kpt25', - id=243, - color=[128, 64, 0], - type='', - swap='lsd_kpt19'), - 244: - dict( - name='lsd_kpt26', - id=244, - color=[128, 64, 0], - type='', - swap='lsd_kpt18'), - 245: - dict( - name='lsd_kpt27', - id=245, - color=[128, 64, 0], - type='', - swap='lsd_kpt17'), - 246: - dict( - name='lsd_kpt28', - id=246, - color=[128, 64, 0], - type='', - swap='lsd_kpt16'), - 247: - dict( - name='lsd_kpt29', - id=247, - color=[128, 64, 0], - type='', - swap='lsd_kpt15'), - 248: - dict( - name='lsd_kpt30', - id=248, - color=[128, 64, 0], - type='', - swap='lsd_kpt14'), - 249: - dict( - name='lsd_kpt31', - id=249, - color=[128, 64, 0], - type='', - swap='lsd_kpt13'), - 250: - dict( - name='lsd_kpt32', - id=250, - color=[128, 64, 0], - type='', - swap='lsd_kpt12'), - 251: - dict( - name='lsd_kpt33', - id=251, - color=[128, 64, 0], - type='', - swap='lsd_kpt11'), - 252: - dict( - name='lsd_kpt34', - id=252, - color=[128, 64, 0], - type='', - swap='lsd_kpt10'), - 253: - dict( - name='lsd_kpt35', - id=253, - color=[128, 64, 0], - type='', - swap='lsd_kpt9'), - 254: - dict( - name='lsd_kpt36', - id=254, - color=[128, 64, 0], - type='', - 
swap='lsd_kpt8'), - 255: - dict( - name='lsd_kpt37', - id=255, - color=[128, 64, 0], - type='', - swap='lsd_kpt7'), - 256: - dict(name='vd_kpt1', id=256, color=[128, 64, 255], type='', swap=''), - 257: - dict( - name='vd_kpt2', - id=257, - color=[128, 64, 255], - type='', - swap='vd_kpt6'), - 258: - dict( - name='vd_kpt3', - id=258, - color=[128, 64, 255], - type='', - swap='vd_kpt5'), - 259: - dict(name='vd_kpt4', id=259, color=[128, 64, 255], type='', swap=''), - 260: - dict( - name='vd_kpt5', - id=260, - color=[128, 64, 255], - type='', - swap='vd_kpt3'), - 261: - dict( - name='vd_kpt6', - id=261, - color=[128, 64, 255], - type='', - swap='vd_kpt2'), - 262: - dict( - name='vd_kpt7', - id=262, - color=[128, 64, 255], - type='', - swap='vd_kpt19'), - 263: - dict( - name='vd_kpt8', - id=263, - color=[128, 64, 255], - type='', - swap='vd_kpt18'), - 264: - dict( - name='vd_kpt9', - id=264, - color=[128, 64, 255], - type='', - swap='vd_kpt17'), - 265: - dict( - name='vd_kpt10', - id=265, - color=[128, 64, 255], - type='', - swap='vd_kpt16'), - 266: - dict( - name='vd_kpt11', - id=266, - color=[128, 64, 255], - type='', - swap='vd_kpt15'), - 267: - dict( - name='vd_kpt12', - id=267, - color=[128, 64, 255], - type='', - swap='vd_kpt14'), - 268: - dict(name='vd_kpt13', id=268, color=[128, 64, 255], type='', swap=''), - 269: - dict( - name='vd_kpt14', - id=269, - color=[128, 64, 255], - type='', - swap='vd_kpt12'), - 270: - dict( - name='vd_kpt15', - id=270, - color=[128, 64, 255], - type='', - swap='vd_kpt11'), - 271: - dict( - name='vd_kpt16', - id=271, - color=[128, 64, 255], - type='', - swap='vd_kpt10'), - 272: - dict( - name='vd_kpt17', - id=272, - color=[128, 64, 255], - type='', - swap='vd_kpt9'), - 273: - dict( - name='vd_kpt18', - id=273, - color=[128, 64, 255], - type='', - swap='vd_kpt8'), - 274: - dict( - name='vd_kpt19', - id=274, - color=[128, 64, 255], - type='', - swap='vd_kpt7'), - 275: - dict(name='sd_kpt1', id=275, color=[128, 64, 0], type='', swap=''), - 276: - dict( - name='sd_kpt2', - id=276, - color=[128, 64, 0], - type='', - swap='sd_kpt6'), - 277: - dict( - name='sd_kpt3', - id=277, - color=[128, 64, 0], - type='', - swap='sd_kpt5'), - 278: - dict(name='sd_kpt4', id=278, color=[128, 64, 0], type='', swap=''), - 279: - dict( - name='sd_kpt5', - id=279, - color=[128, 64, 0], - type='', - swap='sd_kpt3'), - 280: - dict( - name='sd_kpt6', - id=280, - color=[128, 64, 0], - type='', - swap='sd_kpt2'), - 281: - dict( - name='sd_kpt7', - id=281, - color=[128, 64, 0], - type='', - swap='sd_kpt19'), - 282: - dict( - name='sd_kpt8', - id=282, - color=[128, 64, 0], - type='', - swap='sd_kpt18'), - 283: - dict( - name='sd_kpt9', - id=283, - color=[128, 64, 0], - type='', - swap='sd_kpt17'), - 284: - dict( - name='sd_kpt10', - id=284, - color=[128, 64, 0], - type='', - swap='sd_kpt16'), - 285: - dict( - name='sd_kpt11', - id=285, - color=[128, 64, 0], - type='', - swap='sd_kpt15'), - 286: - dict( - name='sd_kpt12', - id=286, - color=[128, 64, 0], - type='', - swap='sd_kpt14'), - 287: - dict(name='sd_kpt13', id=287, color=[128, 64, 0], type='', swap=''), - 288: - dict( - name='sd_kpt14', - id=288, - color=[128, 64, 0], - type='', - swap='sd_kpt12'), - 289: - dict( - name='sd_kpt15', - id=289, - color=[128, 64, 0], - type='', - swap='sd_kpt11'), - 290: - dict( - name='sd_kpt16', - id=290, - color=[128, 64, 0], - type='', - swap='sd_kpt10'), - 291: - dict( - name='sd_kpt17', - id=291, - color=[128, 64, 0], - type='', - swap='sd_kpt9'), - 292: - dict( - name='sd_kpt18', - id=292, - 
color=[128, 64, 0], - type='', - swap='sd_kpt8'), - 293: - dict( - name='sd_kpt19', - id=293, - color=[128, 64, 0], - type='', - swap='sd_kpt7') - }), - skeleton_info=dict({ - 0: - dict(link=('sss_kpt1', 'sss_kpt2'), id=0, color=[255, 128, 0]), - 1: - dict(link=('sss_kpt2', 'sss_kpt7'), id=1, color=[255, 128, 0]), - 2: - dict(link=('sss_kpt7', 'sss_kpt8'), id=2, color=[255, 128, 0]), - 3: - dict(link=('sss_kpt8', 'sss_kpt9'), id=3, color=[255, 128, 0]), - 4: - dict(link=('sss_kpt9', 'sss_kpt10'), id=4, color=[255, 128, 0]), - 5: - dict(link=('sss_kpt10', 'sss_kpt11'), id=5, color=[255, 128, 0]), - 6: - dict(link=('sss_kpt11', 'sss_kpt12'), id=6, color=[255, 128, 0]), - 7: - dict(link=('sss_kpt12', 'sss_kpt13'), id=7, color=[255, 128, 0]), - 8: - dict(link=('sss_kpt13', 'sss_kpt14'), id=8, color=[255, 128, 0]), - 9: - dict(link=('sss_kpt14', 'sss_kpt15'), id=9, color=[255, 128, 0]), - 10: - dict(link=('sss_kpt15', 'sss_kpt16'), id=10, color=[255, 128, 0]), - 11: - dict(link=('sss_kpt16', 'sss_kpt17'), id=11, color=[255, 128, 0]), - 12: - dict(link=('sss_kpt17', 'sss_kpt18'), id=12, color=[255, 128, 0]), - 13: - dict(link=('sss_kpt18', 'sss_kpt19'), id=13, color=[255, 128, 0]), - 14: - dict(link=('sss_kpt19', 'sss_kpt20'), id=14, color=[255, 128, 0]), - 15: - dict(link=('sss_kpt20', 'sss_kpt21'), id=15, color=[255, 128, 0]), - 16: - dict(link=('sss_kpt21', 'sss_kpt22'), id=16, color=[255, 128, 0]), - 17: - dict(link=('sss_kpt22', 'sss_kpt23'), id=17, color=[255, 128, 0]), - 18: - dict(link=('sss_kpt23', 'sss_kpt24'), id=18, color=[255, 128, 0]), - 19: - dict(link=('sss_kpt24', 'sss_kpt25'), id=19, color=[255, 128, 0]), - 20: - dict(link=('sss_kpt25', 'sss_kpt6'), id=20, color=[255, 128, 0]), - 21: - dict(link=('sss_kpt6', 'sss_kpt1'), id=21, color=[255, 128, 0]), - 22: - dict(link=('sss_kpt2', 'sss_kpt3'), id=22, color=[255, 128, 0]), - 23: - dict(link=('sss_kpt3', 'sss_kpt4'), id=23, color=[255, 128, 0]), - 24: - dict(link=('sss_kpt4', 'sss_kpt5'), id=24, color=[255, 128, 0]), - 25: - dict(link=('sss_kpt5', 'sss_kpt6'), id=25, color=[255, 128, 0]), - 26: - dict(link=('lss_kpt1', 'lss_kpt2'), id=26, color=[255, 0, 128]), - 27: - dict(link=('lss_kpt2', 'lss_kpt7'), id=27, color=[255, 0, 128]), - 28: - dict(link=('lss_kpt7', 'lss_kpt8'), id=28, color=[255, 0, 128]), - 29: - dict(link=('lss_kpt8', 'lss_kpt9'), id=29, color=[255, 0, 128]), - 30: - dict(link=('lss_kpt9', 'lss_kpt10'), id=30, color=[255, 0, 128]), - 31: - dict(link=('lss_kpt10', 'lss_kpt11'), id=31, color=[255, 0, 128]), - 32: - dict(link=('lss_kpt11', 'lss_kpt12'), id=32, color=[255, 0, 128]), - 33: - dict(link=('lss_kpt12', 'lss_kpt13'), id=33, color=[255, 0, 128]), - 34: - dict(link=('lss_kpt13', 'lss_kpt14'), id=34, color=[255, 0, 128]), - 35: - dict(link=('lss_kpt14', 'lss_kpt15'), id=35, color=[255, 0, 128]), - 36: - dict(link=('lss_kpt15', 'lss_kpt16'), id=36, color=[255, 0, 128]), - 37: - dict(link=('lss_kpt16', 'lss_kpt17'), id=37, color=[255, 0, 128]), - 38: - dict(link=('lss_kpt17', 'lss_kpt18'), id=38, color=[255, 0, 128]), - 39: - dict(link=('lss_kpt18', 'lss_kpt19'), id=39, color=[255, 0, 128]), - 40: - dict(link=('lss_kpt19', 'lss_kpt20'), id=40, color=[255, 0, 128]), - 41: - dict(link=('lss_kpt20', 'lss_kpt21'), id=41, color=[255, 0, 128]), - 42: - dict(link=('lss_kpt21', 'lss_kpt22'), id=42, color=[255, 0, 128]), - 43: - dict(link=('lss_kpt22', 'lss_kpt23'), id=43, color=[255, 0, 128]), - 44: - dict(link=('lss_kpt23', 'lss_kpt24'), id=44, color=[255, 0, 128]), - 45: - dict(link=('lss_kpt24', 'lss_kpt25'), id=45, 
color=[255, 0, 128]), - 46: - dict(link=('lss_kpt25', 'lss_kpt26'), id=46, color=[255, 0, 128]), - 47: - dict(link=('lss_kpt26', 'lss_kpt27'), id=47, color=[255, 0, 128]), - 48: - dict(link=('lss_kpt27', 'lss_kpt28'), id=48, color=[255, 0, 128]), - 49: - dict(link=('lss_kpt28', 'lss_kpt29'), id=49, color=[255, 0, 128]), - 50: - dict(link=('lss_kpt29', 'lss_kpt30'), id=50, color=[255, 0, 128]), - 51: - dict(link=('lss_kpt30', 'lss_kpt31'), id=51, color=[255, 0, 128]), - 52: - dict(link=('lss_kpt31', 'lss_kpt32'), id=52, color=[255, 0, 128]), - 53: - dict(link=('lss_kpt32', 'lss_kpt33'), id=53, color=[255, 0, 128]), - 54: - dict(link=('lss_kpt33', 'lss_kpt6'), id=54, color=[255, 0, 128]), - 55: - dict(link=('lss_kpt6', 'lss_kpt5'), id=55, color=[255, 0, 128]), - 56: - dict(link=('lss_kpt5', 'lss_kpt4'), id=56, color=[255, 0, 128]), - 57: - dict(link=('lss_kpt4', 'lss_kpt3'), id=57, color=[255, 0, 128]), - 58: - dict(link=('lss_kpt3', 'lss_kpt2'), id=58, color=[255, 0, 128]), - 59: - dict(link=('lss_kpt6', 'lss_kpt1'), id=59, color=[255, 0, 128]), - 60: - dict(link=('sso_kpt1', 'sso_kpt4'), id=60, color=[128, 0, 255]), - 61: - dict(link=('sso_kpt4', 'sso_kpt7'), id=61, color=[128, 0, 255]), - 62: - dict(link=('sso_kpt7', 'sso_kpt8'), id=62, color=[128, 0, 255]), - 63: - dict(link=('sso_kpt8', 'sso_kpt9'), id=63, color=[128, 0, 255]), - 64: - dict(link=('sso_kpt9', 'sso_kpt10'), id=64, color=[128, 0, 255]), - 65: - dict(link=('sso_kpt10', 'sso_kpt11'), id=65, color=[128, 0, 255]), - 66: - dict(link=('sso_kpt11', 'sso_kpt12'), id=66, color=[128, 0, 255]), - 67: - dict(link=('sso_kpt12', 'sso_kpt13'), id=67, color=[128, 0, 255]), - 68: - dict(link=('sso_kpt13', 'sso_kpt14'), id=68, color=[128, 0, 255]), - 69: - dict(link=('sso_kpt14', 'sso_kpt15'), id=69, color=[128, 0, 255]), - 70: - dict(link=('sso_kpt15', 'sso_kpt16'), id=70, color=[128, 0, 255]), - 71: - dict(link=('sso_kpt16', 'sso_kpt31'), id=71, color=[128, 0, 255]), - 72: - dict(link=('sso_kpt31', 'sso_kpt30'), id=72, color=[128, 0, 255]), - 73: - dict(link=('sso_kpt30', 'sso_kpt2'), id=73, color=[128, 0, 255]), - 74: - dict(link=('sso_kpt2', 'sso_kpt3'), id=74, color=[128, 0, 255]), - 75: - dict(link=('sso_kpt3', 'sso_kpt4'), id=75, color=[128, 0, 255]), - 76: - dict(link=('sso_kpt1', 'sso_kpt6'), id=76, color=[128, 0, 255]), - 77: - dict(link=('sso_kpt6', 'sso_kpt25'), id=77, color=[128, 0, 255]), - 78: - dict(link=('sso_kpt25', 'sso_kpt24'), id=78, color=[128, 0, 255]), - 79: - dict(link=('sso_kpt24', 'sso_kpt23'), id=79, color=[128, 0, 255]), - 80: - dict(link=('sso_kpt23', 'sso_kpt22'), id=80, color=[128, 0, 255]), - 81: - dict(link=('sso_kpt22', 'sso_kpt21'), id=81, color=[128, 0, 255]), - 82: - dict(link=('sso_kpt21', 'sso_kpt20'), id=82, color=[128, 0, 255]), - 83: - dict(link=('sso_kpt20', 'sso_kpt19'), id=83, color=[128, 0, 255]), - 84: - dict(link=('sso_kpt19', 'sso_kpt18'), id=84, color=[128, 0, 255]), - 85: - dict(link=('sso_kpt18', 'sso_kpt17'), id=85, color=[128, 0, 255]), - 86: - dict(link=('sso_kpt17', 'sso_kpt29'), id=86, color=[128, 0, 255]), - 87: - dict(link=('sso_kpt29', 'sso_kpt28'), id=87, color=[128, 0, 255]), - 88: - dict(link=('sso_kpt28', 'sso_kpt27'), id=88, color=[128, 0, 255]), - 89: - dict(link=('sso_kpt27', 'sso_kpt26'), id=89, color=[128, 0, 255]), - 90: - dict(link=('sso_kpt26', 'sso_kpt5'), id=90, color=[128, 0, 255]), - 91: - dict(link=('sso_kpt5', 'sso_kpt6'), id=91, color=[128, 0, 255]), - 92: - dict(link=('lso_kpt1', 'lso_kpt2'), id=92, color=[0, 128, 255]), - 93: - dict(link=('lso_kpt2', 
'lso_kpt7'), id=93, color=[0, 128, 255]), - 94: - dict(link=('lso_kpt7', 'lso_kpt8'), id=94, color=[0, 128, 255]), - 95: - dict(link=('lso_kpt8', 'lso_kpt9'), id=95, color=[0, 128, 255]), - 96: - dict(link=('lso_kpt9', 'lso_kpt10'), id=96, color=[0, 128, 255]), - 97: - dict(link=('lso_kpt10', 'lso_kpt11'), id=97, color=[0, 128, 255]), - 98: - dict(link=('lso_kpt11', 'lso_kpt12'), id=98, color=[0, 128, 255]), - 99: - dict(link=('lso_kpt12', 'lso_kpt13'), id=99, color=[0, 128, 255]), - 100: - dict(link=('lso_kpt13', 'lso_kpt14'), id=100, color=[0, 128, 255]), - 101: - dict(link=('lso_kpt14', 'lso_kpt15'), id=101, color=[0, 128, 255]), - 102: - dict(link=('lso_kpt15', 'lso_kpt16'), id=102, color=[0, 128, 255]), - 103: - dict(link=('lso_kpt16', 'lso_kpt17'), id=103, color=[0, 128, 255]), - 104: - dict(link=('lso_kpt17', 'lso_kpt18'), id=104, color=[0, 128, 255]), - 105: - dict(link=('lso_kpt18', 'lso_kpt19'), id=105, color=[0, 128, 255]), - 106: - dict(link=('lso_kpt19', 'lso_kpt20'), id=106, color=[0, 128, 255]), - 107: - dict(link=('lso_kpt20', 'lso_kpt39'), id=107, color=[0, 128, 255]), - 108: - dict(link=('lso_kpt39', 'lso_kpt38'), id=108, color=[0, 128, 255]), - 109: - dict(link=('lso_kpt38', 'lso_kpt4'), id=109, color=[0, 128, 255]), - 110: - dict(link=('lso_kpt4', 'lso_kpt3'), id=110, color=[0, 128, 255]), - 111: - dict(link=('lso_kpt3', 'lso_kpt2'), id=111, color=[0, 128, 255]), - 112: - dict(link=('lso_kpt1', 'lso_kpt6'), id=112, color=[0, 128, 255]), - 113: - dict(link=('lso_kpt6', 'lso_kpt33'), id=113, color=[0, 128, 255]), - 114: - dict(link=('lso_kpt33', 'lso_kpt32'), id=114, color=[0, 128, 255]), - 115: - dict(link=('lso_kpt32', 'lso_kpt31'), id=115, color=[0, 128, 255]), - 116: - dict(link=('lso_kpt31', 'lso_kpt30'), id=116, color=[0, 128, 255]), - 117: - dict(link=('lso_kpt30', 'lso_kpt29'), id=117, color=[0, 128, 255]), - 118: - dict(link=('lso_kpt29', 'lso_kpt28'), id=118, color=[0, 128, 255]), - 119: - dict(link=('lso_kpt28', 'lso_kpt27'), id=119, color=[0, 128, 255]), - 120: - dict(link=('lso_kpt27', 'lso_kpt26'), id=120, color=[0, 128, 255]), - 121: - dict(link=('lso_kpt26', 'lso_kpt25'), id=121, color=[0, 128, 255]), - 122: - dict(link=('lso_kpt25', 'lso_kpt24'), id=122, color=[0, 128, 255]), - 123: - dict(link=('lso_kpt24', 'lso_kpt23'), id=123, color=[0, 128, 255]), - 124: - dict(link=('lso_kpt23', 'lso_kpt22'), id=124, color=[0, 128, 255]), - 125: - dict(link=('lso_kpt22', 'lso_kpt21'), id=125, color=[0, 128, 255]), - 126: - dict(link=('lso_kpt21', 'lso_kpt37'), id=126, color=[0, 128, 255]), - 127: - dict(link=('lso_kpt37', 'lso_kpt36'), id=127, color=[0, 128, 255]), - 128: - dict(link=('lso_kpt36', 'lso_kpt35'), id=128, color=[0, 128, 255]), - 129: - dict(link=('lso_kpt35', 'lso_kpt34'), id=129, color=[0, 128, 255]), - 130: - dict(link=('lso_kpt34', 'lso_kpt5'), id=130, color=[0, 128, 255]), - 131: - dict(link=('lso_kpt5', 'lso_kpt6'), id=131, color=[0, 128, 255]), - 132: - dict(link=('vest_kpt1', 'vest_kpt2'), id=132, color=[0, 128, 128]), - 133: - dict(link=('vest_kpt2', 'vest_kpt7'), id=133, color=[0, 128, 128]), - 134: - dict(link=('vest_kpt7', 'vest_kpt8'), id=134, color=[0, 128, 128]), - 135: - dict(link=('vest_kpt8', 'vest_kpt9'), id=135, color=[0, 128, 128]), - 136: - dict(link=('vest_kpt9', 'vest_kpt10'), id=136, color=[0, 128, 128]), - 137: - dict(link=('vest_kpt10', 'vest_kpt11'), id=137, color=[0, 128, 128]), - 138: - dict(link=('vest_kpt11', 'vest_kpt12'), id=138, color=[0, 128, 128]), - 139: - dict(link=('vest_kpt12', 'vest_kpt13'), id=139, color=[0, 
128, 128]), - 140: - dict(link=('vest_kpt13', 'vest_kpt14'), id=140, color=[0, 128, 128]), - 141: - dict(link=('vest_kpt14', 'vest_kpt15'), id=141, color=[0, 128, 128]), - 142: - dict(link=('vest_kpt15', 'vest_kpt6'), id=142, color=[0, 128, 128]), - 143: - dict(link=('vest_kpt6', 'vest_kpt1'), id=143, color=[0, 128, 128]), - 144: - dict(link=('vest_kpt2', 'vest_kpt3'), id=144, color=[0, 128, 128]), - 145: - dict(link=('vest_kpt3', 'vest_kpt4'), id=145, color=[0, 128, 128]), - 146: - dict(link=('vest_kpt4', 'vest_kpt5'), id=146, color=[0, 128, 128]), - 147: - dict(link=('vest_kpt5', 'vest_kpt6'), id=147, color=[0, 128, 128]), - 148: - dict(link=('sling_kpt1', 'sling_kpt2'), id=148, color=[0, 0, 128]), - 149: - dict(link=('sling_kpt2', 'sling_kpt8'), id=149, color=[0, 0, 128]), - 150: - dict(link=('sling_kpt8', 'sling_kpt9'), id=150, color=[0, 0, 128]), - 151: - dict(link=('sling_kpt9', 'sling_kpt10'), id=151, color=[0, 0, 128]), - 152: - dict(link=('sling_kpt10', 'sling_kpt11'), id=152, color=[0, 0, 128]), - 153: - dict(link=('sling_kpt11', 'sling_kpt12'), id=153, color=[0, 0, 128]), - 154: - dict(link=('sling_kpt12', 'sling_kpt13'), id=154, color=[0, 0, 128]), - 155: - dict(link=('sling_kpt13', 'sling_kpt14'), id=155, color=[0, 0, 128]), - 156: - dict(link=('sling_kpt14', 'sling_kpt6'), id=156, color=[0, 0, 128]), - 157: - dict(link=('sling_kpt2', 'sling_kpt7'), id=157, color=[0, 0, 128]), - 158: - dict(link=('sling_kpt6', 'sling_kpt15'), id=158, color=[0, 0, 128]), - 159: - dict(link=('sling_kpt2', 'sling_kpt3'), id=159, color=[0, 0, 128]), - 160: - dict(link=('sling_kpt3', 'sling_kpt4'), id=160, color=[0, 0, 128]), - 161: - dict(link=('sling_kpt4', 'sling_kpt5'), id=161, color=[0, 0, 128]), - 162: - dict(link=('sling_kpt5', 'sling_kpt6'), id=162, color=[0, 0, 128]), - 163: - dict(link=('sling_kpt1', 'sling_kpt6'), id=163, color=[0, 0, 128]), - 164: - dict( - link=('shorts_kpt1', 'shorts_kpt4'), id=164, color=[128, 128, - 128]), - 165: - dict( - link=('shorts_kpt4', 'shorts_kpt5'), id=165, color=[128, 128, - 128]), - 166: - dict( - link=('shorts_kpt5', 'shorts_kpt6'), id=166, color=[128, 128, - 128]), - 167: - dict( - link=('shorts_kpt6', 'shorts_kpt7'), id=167, color=[128, 128, - 128]), - 168: - dict( - link=('shorts_kpt7', 'shorts_kpt8'), id=168, color=[128, 128, - 128]), - 169: - dict( - link=('shorts_kpt8', 'shorts_kpt9'), id=169, color=[128, 128, - 128]), - 170: - dict( - link=('shorts_kpt9', 'shorts_kpt10'), - id=170, - color=[128, 128, 128]), - 171: - dict( - link=('shorts_kpt10', 'shorts_kpt3'), - id=171, - color=[128, 128, 128]), - 172: - dict( - link=('shorts_kpt3', 'shorts_kpt2'), id=172, color=[128, 128, - 128]), - 173: - dict( - link=('shorts_kpt2', 'shorts_kpt1'), id=173, color=[128, 128, - 128]), - 174: - dict( - link=('trousers_kpt1', 'trousers_kpt4'), - id=174, - color=[128, 0, 128]), - 175: - dict( - link=('trousers_kpt4', 'trousers_kpt5'), - id=175, - color=[128, 0, 128]), - 176: - dict( - link=('trousers_kpt5', 'trousers_kpt6'), - id=176, - color=[128, 0, 128]), - 177: - dict( - link=('trousers_kpt6', 'trousers_kpt7'), - id=177, - color=[128, 0, 128]), - 178: - dict( - link=('trousers_kpt7', 'trousers_kpt8'), - id=178, - color=[128, 0, 128]), - 179: - dict( - link=('trousers_kpt8', 'trousers_kpt9'), - id=179, - color=[128, 0, 128]), - 180: - dict( - link=('trousers_kpt9', 'trousers_kpt10'), - id=180, - color=[128, 0, 128]), - 181: - dict( - link=('trousers_kpt10', 'trousers_kpt11'), - id=181, - color=[128, 0, 128]), - 182: - dict( - link=('trousers_kpt11', 
'trousers_kpt12'), - id=182, - color=[128, 0, 128]), - 183: - dict( - link=('trousers_kpt12', 'trousers_kpt13'), - id=183, - color=[128, 0, 128]), - 184: - dict( - link=('trousers_kpt13', 'trousers_kpt14'), - id=184, - color=[128, 0, 128]), - 185: - dict( - link=('trousers_kpt14', 'trousers_kpt3'), - id=185, - color=[128, 0, 128]), - 186: - dict( - link=('trousers_kpt3', 'trousers_kpt2'), - id=186, - color=[128, 0, 128]), - 187: - dict( - link=('trousers_kpt2', 'trousers_kpt1'), - id=187, - color=[128, 0, 128]), - 188: - dict(link=('skirt_kpt1', 'skirt_kpt4'), id=188, color=[64, 128, 128]), - 189: - dict(link=('skirt_kpt4', 'skirt_kpt5'), id=189, color=[64, 128, 128]), - 190: - dict(link=('skirt_kpt5', 'skirt_kpt6'), id=190, color=[64, 128, 128]), - 191: - dict(link=('skirt_kpt6', 'skirt_kpt7'), id=191, color=[64, 128, 128]), - 192: - dict(link=('skirt_kpt7', 'skirt_kpt8'), id=192, color=[64, 128, 128]), - 193: - dict(link=('skirt_kpt8', 'skirt_kpt3'), id=193, color=[64, 128, 128]), - 194: - dict(link=('skirt_kpt3', 'skirt_kpt2'), id=194, color=[64, 128, 128]), - 195: - dict(link=('skirt_kpt2', 'skirt_kpt1'), id=195, color=[64, 128, 128]), - 196: - dict(link=('ssd_kpt1', 'ssd_kpt2'), id=196, color=[64, 64, 128]), - 197: - dict(link=('ssd_kpt2', 'ssd_kpt7'), id=197, color=[64, 64, 128]), - 198: - dict(link=('ssd_kpt7', 'ssd_kpt8'), id=198, color=[64, 64, 128]), - 199: - dict(link=('ssd_kpt8', 'ssd_kpt9'), id=199, color=[64, 64, 128]), - 200: - dict(link=('ssd_kpt9', 'ssd_kpt10'), id=200, color=[64, 64, 128]), - 201: - dict(link=('ssd_kpt10', 'ssd_kpt11'), id=201, color=[64, 64, 128]), - 202: - dict(link=('ssd_kpt11', 'ssd_kpt12'), id=202, color=[64, 64, 128]), - 203: - dict(link=('ssd_kpt12', 'ssd_kpt13'), id=203, color=[64, 64, 128]), - 204: - dict(link=('ssd_kpt13', 'ssd_kpt14'), id=204, color=[64, 64, 128]), - 205: - dict(link=('ssd_kpt14', 'ssd_kpt15'), id=205, color=[64, 64, 128]), - 206: - dict(link=('ssd_kpt15', 'ssd_kpt16'), id=206, color=[64, 64, 128]), - 207: - dict(link=('ssd_kpt16', 'ssd_kpt17'), id=207, color=[64, 64, 128]), - 208: - dict(link=('ssd_kpt17', 'ssd_kpt18'), id=208, color=[64, 64, 128]), - 209: - dict(link=('ssd_kpt18', 'ssd_kpt19'), id=209, color=[64, 64, 128]), - 210: - dict(link=('ssd_kpt19', 'ssd_kpt20'), id=210, color=[64, 64, 128]), - 211: - dict(link=('ssd_kpt20', 'ssd_kpt21'), id=211, color=[64, 64, 128]), - 212: - dict(link=('ssd_kpt21', 'ssd_kpt22'), id=212, color=[64, 64, 128]), - 213: - dict(link=('ssd_kpt22', 'ssd_kpt23'), id=213, color=[64, 64, 128]), - 214: - dict(link=('ssd_kpt23', 'ssd_kpt24'), id=214, color=[64, 64, 128]), - 215: - dict(link=('ssd_kpt24', 'ssd_kpt25'), id=215, color=[64, 64, 128]), - 216: - dict(link=('ssd_kpt25', 'ssd_kpt26'), id=216, color=[64, 64, 128]), - 217: - dict(link=('ssd_kpt26', 'ssd_kpt27'), id=217, color=[64, 64, 128]), - 218: - dict(link=('ssd_kpt27', 'ssd_kpt28'), id=218, color=[64, 64, 128]), - 219: - dict(link=('ssd_kpt28', 'ssd_kpt29'), id=219, color=[64, 64, 128]), - 220: - dict(link=('ssd_kpt29', 'ssd_kpt6'), id=220, color=[64, 64, 128]), - 221: - dict(link=('ssd_kpt6', 'ssd_kpt5'), id=221, color=[64, 64, 128]), - 222: - dict(link=('ssd_kpt5', 'ssd_kpt4'), id=222, color=[64, 64, 128]), - 223: - dict(link=('ssd_kpt4', 'ssd_kpt3'), id=223, color=[64, 64, 128]), - 224: - dict(link=('ssd_kpt3', 'ssd_kpt2'), id=224, color=[64, 64, 128]), - 225: - dict(link=('ssd_kpt6', 'ssd_kpt1'), id=225, color=[64, 64, 128]), - 226: - dict(link=('lsd_kpt1', 'lsd_kpt2'), id=226, color=[128, 64, 0]), - 227: - dict(link=('lsd_kpt2', 
'lsd_kpt7'), id=228, color=[128, 64, 0]), - 228: - dict(link=('lsd_kpt7', 'lsd_kpt8'), id=228, color=[128, 64, 0]), - 229: - dict(link=('lsd_kpt8', 'lsd_kpt9'), id=229, color=[128, 64, 0]), - 230: - dict(link=('lsd_kpt9', 'lsd_kpt10'), id=230, color=[128, 64, 0]), - 231: - dict(link=('lsd_kpt10', 'lsd_kpt11'), id=231, color=[128, 64, 0]), - 232: - dict(link=('lsd_kpt11', 'lsd_kpt12'), id=232, color=[128, 64, 0]), - 233: - dict(link=('lsd_kpt12', 'lsd_kpt13'), id=233, color=[128, 64, 0]), - 234: - dict(link=('lsd_kpt13', 'lsd_kpt14'), id=234, color=[128, 64, 0]), - 235: - dict(link=('lsd_kpt14', 'lsd_kpt15'), id=235, color=[128, 64, 0]), - 236: - dict(link=('lsd_kpt15', 'lsd_kpt16'), id=236, color=[128, 64, 0]), - 237: - dict(link=('lsd_kpt16', 'lsd_kpt17'), id=237, color=[128, 64, 0]), - 238: - dict(link=('lsd_kpt17', 'lsd_kpt18'), id=238, color=[128, 64, 0]), - 239: - dict(link=('lsd_kpt18', 'lsd_kpt19'), id=239, color=[128, 64, 0]), - 240: - dict(link=('lsd_kpt19', 'lsd_kpt20'), id=240, color=[128, 64, 0]), - 241: - dict(link=('lsd_kpt20', 'lsd_kpt21'), id=241, color=[128, 64, 0]), - 242: - dict(link=('lsd_kpt21', 'lsd_kpt22'), id=242, color=[128, 64, 0]), - 243: - dict(link=('lsd_kpt22', 'lsd_kpt23'), id=243, color=[128, 64, 0]), - 244: - dict(link=('lsd_kpt23', 'lsd_kpt24'), id=244, color=[128, 64, 0]), - 245: - dict(link=('lsd_kpt24', 'lsd_kpt25'), id=245, color=[128, 64, 0]), - 246: - dict(link=('lsd_kpt25', 'lsd_kpt26'), id=246, color=[128, 64, 0]), - 247: - dict(link=('lsd_kpt26', 'lsd_kpt27'), id=247, color=[128, 64, 0]), - 248: - dict(link=('lsd_kpt27', 'lsd_kpt28'), id=248, color=[128, 64, 0]), - 249: - dict(link=('lsd_kpt28', 'lsd_kpt29'), id=249, color=[128, 64, 0]), - 250: - dict(link=('lsd_kpt29', 'lsd_kpt30'), id=250, color=[128, 64, 0]), - 251: - dict(link=('lsd_kpt30', 'lsd_kpt31'), id=251, color=[128, 64, 0]), - 252: - dict(link=('lsd_kpt31', 'lsd_kpt32'), id=252, color=[128, 64, 0]), - 253: - dict(link=('lsd_kpt32', 'lsd_kpt33'), id=253, color=[128, 64, 0]), - 254: - dict(link=('lsd_kpt33', 'lsd_kpt34'), id=254, color=[128, 64, 0]), - 255: - dict(link=('lsd_kpt34', 'lsd_kpt35'), id=255, color=[128, 64, 0]), - 256: - dict(link=('lsd_kpt35', 'lsd_kpt36'), id=256, color=[128, 64, 0]), - 257: - dict(link=('lsd_kpt36', 'lsd_kpt37'), id=257, color=[128, 64, 0]), - 258: - dict(link=('lsd_kpt37', 'lsd_kpt6'), id=258, color=[128, 64, 0]), - 259: - dict(link=('lsd_kpt6', 'lsd_kpt5'), id=259, color=[128, 64, 0]), - 260: - dict(link=('lsd_kpt5', 'lsd_kpt4'), id=260, color=[128, 64, 0]), - 261: - dict(link=('lsd_kpt4', 'lsd_kpt3'), id=261, color=[128, 64, 0]), - 262: - dict(link=('lsd_kpt3', 'lsd_kpt2'), id=262, color=[128, 64, 0]), - 263: - dict(link=('lsd_kpt6', 'lsd_kpt1'), id=263, color=[128, 64, 0]), - 264: - dict(link=('vd_kpt1', 'vd_kpt2'), id=264, color=[128, 64, 255]), - 265: - dict(link=('vd_kpt2', 'vd_kpt7'), id=265, color=[128, 64, 255]), - 266: - dict(link=('vd_kpt7', 'vd_kpt8'), id=266, color=[128, 64, 255]), - 267: - dict(link=('vd_kpt8', 'vd_kpt9'), id=267, color=[128, 64, 255]), - 268: - dict(link=('vd_kpt9', 'vd_kpt10'), id=268, color=[128, 64, 255]), - 269: - dict(link=('vd_kpt10', 'vd_kpt11'), id=269, color=[128, 64, 255]), - 270: - dict(link=('vd_kpt11', 'vd_kpt12'), id=270, color=[128, 64, 255]), - 271: - dict(link=('vd_kpt12', 'vd_kpt13'), id=271, color=[128, 64, 255]), - 272: - dict(link=('vd_kpt13', 'vd_kpt14'), id=272, color=[128, 64, 255]), - 273: - dict(link=('vd_kpt14', 'vd_kpt15'), id=273, color=[128, 64, 255]), - 274: - dict(link=('vd_kpt15', 
'vd_kpt16'), id=274, color=[128, 64, 255]), - 275: - dict(link=('vd_kpt16', 'vd_kpt17'), id=275, color=[128, 64, 255]), - 276: - dict(link=('vd_kpt17', 'vd_kpt18'), id=276, color=[128, 64, 255]), - 277: - dict(link=('vd_kpt18', 'vd_kpt19'), id=277, color=[128, 64, 255]), - 278: - dict(link=('vd_kpt19', 'vd_kpt6'), id=278, color=[128, 64, 255]), - 279: - dict(link=('vd_kpt6', 'vd_kpt5'), id=279, color=[128, 64, 255]), - 280: - dict(link=('vd_kpt5', 'vd_kpt4'), id=280, color=[128, 64, 255]), - 281: - dict(link=('vd_kpt4', 'vd_kpt3'), id=281, color=[128, 64, 255]), - 282: - dict(link=('vd_kpt3', 'vd_kpt2'), id=282, color=[128, 64, 255]), - 283: - dict(link=('vd_kpt6', 'vd_kpt1'), id=283, color=[128, 64, 255]), - 284: - dict(link=('sd_kpt1', 'sd_kpt2'), id=284, color=[128, 64, 0]), - 285: - dict(link=('sd_kpt2', 'sd_kpt8'), id=285, color=[128, 64, 0]), - 286: - dict(link=('sd_kpt8', 'sd_kpt9'), id=286, color=[128, 64, 0]), - 287: - dict(link=('sd_kpt9', 'sd_kpt10'), id=287, color=[128, 64, 0]), - 288: - dict(link=('sd_kpt10', 'sd_kpt11'), id=288, color=[128, 64, 0]), - 289: - dict(link=('sd_kpt11', 'sd_kpt12'), id=289, color=[128, 64, 0]), - 290: - dict(link=('sd_kpt12', 'sd_kpt13'), id=290, color=[128, 64, 0]), - 291: - dict(link=('sd_kpt13', 'sd_kpt14'), id=291, color=[128, 64, 0]), - 292: - dict(link=('sd_kpt14', 'sd_kpt15'), id=292, color=[128, 64, 0]), - 293: - dict(link=('sd_kpt15', 'sd_kpt16'), id=293, color=[128, 64, 0]), - 294: - dict(link=('sd_kpt16', 'sd_kpt17'), id=294, color=[128, 64, 0]), - 295: - dict(link=('sd_kpt17', 'sd_kpt18'), id=295, color=[128, 64, 0]), - 296: - dict(link=('sd_kpt18', 'sd_kpt6'), id=296, color=[128, 64, 0]), - 297: - dict(link=('sd_kpt6', 'sd_kpt5'), id=297, color=[128, 64, 0]), - 298: - dict(link=('sd_kpt5', 'sd_kpt4'), id=298, color=[128, 64, 0]), - 299: - dict(link=('sd_kpt4', 'sd_kpt3'), id=299, color=[128, 64, 0]), - 300: - dict(link=('sd_kpt3', 'sd_kpt2'), id=300, color=[128, 64, 0]), - 301: - dict(link=('sd_kpt2', 'sd_kpt7'), id=301, color=[128, 64, 0]), - 302: - dict(link=('sd_kpt6', 'sd_kpt19'), id=302, color=[128, 64, 0]), - 303: - dict(link=('sd_kpt6', 'sd_kpt1'), id=303, color=[128, 64, 0]) - }), - joint_weights=[ - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 
1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, - 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 - ], - sigmas=[]) -param_scheduler = [ - dict( - type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False), - dict( - type='MultiStepLR', - begin=0, - end=120, - milestones=[80, 100], - gamma=0.1, - by_epoch=True) -] -optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005)) -auto_scale_lr = dict(base_batch_size=512) -dataset_type = 'DeepFashion2Dataset' -data_mode = 'topdown' -data_root = 'data/deepfashion2/' -codec = dict( - type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2) -train_pipeline = [ - dict(type='LoadImage'), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - dict( - type='RandomBBoxTransform', - shift_prob=0, - rotate_factor=60, - scale_factor=(0.75, 1.25)), - dict(type='TopdownAffine', input_size=(192, 256)), - dict( - type='GenerateTarget', - encoder=dict( - type='MSRAHeatmap', - input_size=(192, 256), - heatmap_size=(48, 64), - sigma=2)), - dict(type='PackPoseInputs') -] -val_pipeline = [ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') -] -train_dataloader = dict( - batch_size=64, - num_workers=6, - persistent_workers=True, - sampler=dict(type='DefaultSampler', shuffle=True), - dataset=dict( - type='DeepFashion2Dataset', - data_root='data/deepfashion2/', - data_mode='topdown', - ann_file='train/deepfashion2_skirt.json', - data_prefix=dict(img='train/image/'), - pipeline=[ - dict(type='LoadImage'), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - dict( - type='RandomBBoxTransform', - shift_prob=0, - rotate_factor=60, - scale_factor=(0.75, 1.25)), - dict(type='TopdownAffine', input_size=(192, 256)), - dict( - type='GenerateTarget', - encoder=dict( - type='MSRAHeatmap', - input_size=(192, 256), - heatmap_size=(48, 64), - sigma=2)), - dict(type='PackPoseInputs') - ])) -val_dataloader = dict( - batch_size=32, - num_workers=6, - persistent_workers=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=dict( - type='DeepFashion2Dataset', - data_root='data/deepfashion2/', - data_mode='topdown', - ann_file='validation/deepfashion2_skirt.json', - data_prefix=dict(img='validation/image/'), - test_mode=True, - pipeline=[ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') - ])) -test_dataloader = dict( - batch_size=32, - num_workers=6, - persistent_workers=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=dict( - type='DeepFashion2Dataset', - data_root='data/deepfashion2/', - data_mode='topdown', - ann_file='validation/deepfashion2_skirt.json', - data_prefix=dict(img='validation/image/'), - test_mode=True, - pipeline=[ - dict(type='LoadImage', backend_args=dict(backend='local')), - dict(type='GetBBoxCenterScale'), - dict(type='TopdownAffine', input_size=(192, 256)), - dict(type='PackPoseInputs') - ])) -channel_cfg = dict( - num_output_channels=294, - dataset_joints=294, - dataset_channel=[[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, - 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, - 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 
51, 52, 53, 54, 55, - 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, - 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, - 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, - 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, - 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, - 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, - 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, - 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, - 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, - 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, - 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, - 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, - 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, - 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, - 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, - 290, 291, 292, 293 - ]], - inference_channel=[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, - 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, - 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, - 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, - 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, - 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, - 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, - 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, - 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, - 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, - 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, - 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, - 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, - 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, - 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, - 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, - 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, - 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, - 290, 291, 292, 293 - ]) -model = dict( - type='TopdownPoseEstimator', - data_preprocessor=dict( - type='PoseDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True), - backbone=dict( - type='ResNet', - depth=50, - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), - head=dict( - type='HeatmapHead', - in_channels=2048, - out_channels=294, - loss=dict(type='KeypointMSELoss', use_target_weight=True), - decoder=dict( - type='MSRAHeatmap', - input_size=(192, 256), - heatmap_size=(48, 64), - sigma=2)), - test_cfg=dict(flip_test=True, flip_mode='heatmap', shift_heatmap=True)) -val_evaluator = [ - dict(type='PCKAccuracy', thr=0.2), - dict(type='AUC'), - dict(type='EPE') -] -test_evaluator = [ - dict(type='PCKAccuracy', thr=0.2), - dict(type='AUC'), - dict(type='EPE') -] -launcher = 'pytorch' -work_dir = './work_dirs/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192' diff 
--git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/.ipynb_checkpoints/hr_4xb16_1024e_4channel-checkpoint.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/.ipynb_checkpoints/hr_4xb16_1024e_4channel-checkpoint.py deleted file mode 100644 index 372a6632ae325da40530356fa2dc51479986359d..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/.ipynb_checkpoints/hr_4xb16_1024e_4channel-checkpoint.py +++ /dev/null @@ -1,113 +0,0 @@ -_base_ = [ # 此配置文件将继承所有 `_base_` 中的配置 - '../configs/_base_/schedules/custom_schedule.py', # 训练策略配置 - '../configs/_base_/default_runtime.py' # 默认运行设置 -] - -default_hooks = dict( - # print log every 50 iterations. - logger=dict(type='LoggerHook', interval=50), - # save checkpoint per 8 epochs. - checkpoint=dict(save_best='auto', interval=16) -) - -visualizer = dict( - vis_backends=[dict(type='LocalVisBackend'), - dict(type='WandbVisBackend')]) - -dataset_type = 'CustomDataset' - -# config of pipline -train_pipeline = [ - dict(type='LoadImageFromFile', imdecode_backend='pillow', color_type='unchanged'), # 读取图像 - dict(type='RandomResizedCrop', scale=224), # 随机放缩裁剪 - dict(type='RandomFlip', prob=0.5, direction='horizontal'), # 随机水平翻转 - dict(type='PackInputs'), # 准备图像以及标签 -] - -test_pipeline = [ - dict(type='LoadImageFromFile', imdecode_backend='pillow', color_type='unchanged'), # 读取图像 - dict(type='ResizeEdge', scale=256, edge='short'), # 缩放短边尺寸至 256px - dict(type='CenterCrop', crop_size=224), # 中心裁剪 - dict(type='PackInputs'), # 准备图像以及标签 -] - -# config of dataloader -train_dataloader = dict( - batch_size=16, # 每张 GPU 的 batchsize - num_workers=5, # 每个 GPU 的线程数 - dataset=dict( # 训练数据集 - type=dataset_type, - data_root='../2_preprocess_data_3000', - with_label=True, - ann_file='', - data_prefix='train', - pipeline=train_pipeline), - sampler=dict(type='DefaultSampler', shuffle=True), # 默认采样器 - persistent_workers=True, # 是否保持进程,可以缩短每个 epoch 的准备时间 -) - -# 构造验证集 dataloader -val_dataloader = dict( - batch_size=16, - num_workers=5, - dataset=dict( - type=dataset_type, - data_root='../2_preprocess_data_3000', - with_label=True, - ann_file='', - data_prefix='val', - pipeline=test_pipeline), - sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, -) - -# set evaluator of validation dataset. 
Here uses top1 and top3 accuracy -val_evaluator = dict(type='Accuracy', topk=(1, 3)) - -test_dataloader = val_dataloader -test_evaluator = val_evaluator - -model = dict( - type='ImageClassifier', # 主模型类型(对于图像分类任务,使用 `ImageClassifier`) - backbone=dict( - type='HRNet', # 主干网络类型 - arch='w32', # 主干网络架构 - in_channels=4, - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(32, 64)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(32, 64, 128)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(32, 64, 128, 256))), - ), - neck=dict(type='GlobalAveragePooling'), # 颈网络类型 - head=dict( - type='LinearClsHead', # 分类颈网络类型 - # 除了 `type` 之外的所有字段都来自 `LinearClsHead` 类的 __init__ 方法 - # 可查阅 https://mmpretrain.readthedocs.io/zh_CN/latest/api/generated/mmpretrain.models.heads.LinearClsHead.html - num_classes=7, # 分类类别数 - in_channels=256, - loss=dict(type='CrossEntropyLoss', loss_weight=1.0), # 损失函数配置信息 - topk=(1, 3), # 评估指标,Top-k 准确率 - )) - - diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/datasets/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/datasets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-lbs_in1k.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-lbs_in1k.py deleted file mode 100644 index 1c1aa5a2c4eee10c10159175224d9b77ea57e57b..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-lbs_in1k.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/resnet50_label_smooth.py', - '../_base_/datasets/imagenet_bs32.py', - '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py' -] diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/train.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/train.py deleted file mode 100644 index a1c233ce494365e0bb3a31c671aa015b3ecc8043..0000000000000000000000000000000000000000 --- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/train.py +++ /dev/null @@ -1,49 +0,0 @@ -import argparse -import random - -import numpy as np -import torch - -from trainer import CBHGTrainer, Seq2SeqTrainer, GPTTrainer - -SEED = 1234 -random.seed(SEED) -np.random.seed(SEED) -torch.manual_seed(SEED) -torch.cuda.manual_seed(SEED) -torch.backends.cudnn.deterministic = True -torch.backends.cudnn.benchmark = False - - -def train_parser(): - parser = argparse.ArgumentParser() - parser.add_argument("--model_kind", dest="model_kind", type=str, required=True) - parser.add_argument( - "--model_desc", dest="model_desc", type=str, required=False, default="" - ) - parser.add_argument("--config", dest="config", type=str, required=True) - parser.add_argument( - "--reset_dir", - dest="clear_dir", - action="store_true", - help="deletes everything under this config's folder.", - ) - return parser - - -parser = train_parser() -args = parser.parse_args() - - -if args.model_kind 
in ["seq2seq"]: - trainer = Seq2SeqTrainer(args.config, args.model_kind, args.model_desc) -elif args.model_kind in ["tacotron_based"]: - trainer = Seq2SeqTrainer(args.config, args.model_kind, args.model_desc) -elif args.model_kind in ["baseline", "cbhg"]: - trainer = CBHGTrainer(args.config, args.model_kind, args.model_desc) -elif args.model_kind in ["gpt"]: - trainer = GPTTrainer(args.config, args.model_kind, args.model_desc) -else: - raise ValueError("The model kind is not supported") - -trainer.run() diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversations/+page.server.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversations/+page.server.ts deleted file mode 100644 index d94b030da72c4b269f5385580b99b8509efbdf8f..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversations/+page.server.ts +++ /dev/null @@ -1,10 +0,0 @@ -import { base } from "$app/paths"; -import { authCondition } from "$lib/server/auth"; -import { collections } from "$lib/server/database"; -import { redirect } from "@sveltejs/kit"; - -export const actions = { - delete: async function ({ locals }) { - throw redirect(303, `${base}/`); - }, -}; diff --git a/spaces/Adapter/CoAdapter/ldm/inference_base.py b/spaces/Adapter/CoAdapter/ldm/inference_base.py deleted file mode 100644 index b57e4580ea90515e450afd6e8e267401e75b86e9..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/inference_base.py +++ /dev/null @@ -1,292 +0,0 @@ -import argparse -import torch -from omegaconf import OmegaConf - -from ldm.models.diffusion.ddim import DDIMSampler -from ldm.models.diffusion.plms import PLMSSampler -from ldm.modules.encoders.adapter import Adapter, StyleAdapter, Adapter_light -from ldm.modules.extra_condition.api import ExtraCondition -from ldm.util import fix_cond_shapes, load_model_from_config, read_state_dict - -DEFAULT_NEGATIVE_PROMPT = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \ - 'fewer digits, cropped, worst quality, low quality' - - -def get_base_argument_parser() -> argparse.ArgumentParser: - """get the base argument parser for inference scripts""" - parser = argparse.ArgumentParser() - parser.add_argument( - '--outdir', - type=str, - help='dir to write results to', - default=None, - ) - - parser.add_argument( - '--prompt', - type=str, - nargs='?', - default=None, - help='positive prompt', - ) - - parser.add_argument( - '--neg_prompt', - type=str, - default=DEFAULT_NEGATIVE_PROMPT, - help='negative prompt', - ) - - parser.add_argument( - '--cond_path', - type=str, - default=None, - help='condition image path', - ) - - parser.add_argument( - '--cond_inp_type', - type=str, - default='image', - help='the type of the input condition image, take depth T2I as example, the input can be raw image, ' - 'which depth will be calculated, or the input can be a directly a depth map image', - ) - - parser.add_argument( - '--sampler', - type=str, - default='ddim', - choices=['ddim', 'plms'], - help='sampling algorithm, currently, only ddim and plms are supported, more are on the way', - ) - - parser.add_argument( - '--steps', - type=int, - default=50, - help='number of sampling steps', - ) - - parser.add_argument( - '--sd_ckpt', - type=str, - default='models/sd-v1-4.ckpt', - help='path to checkpoint of stable diffusion model, both .ckpt and .safetensor are supported', - ) - - parser.add_argument( - '--vae_ckpt', - type=str, - default=None, - help='vae checkpoint, anime SD models usually have seperate vae ckpt that 
need to be loaded', - ) - - parser.add_argument( - '--adapter_ckpt', - type=str, - default=None, - help='path to checkpoint of adapter', - ) - - parser.add_argument( - '--config', - type=str, - default='configs/stable-diffusion/sd-v1-inference.yaml', - help='path to config which constructs SD model', - ) - - parser.add_argument( - '--max_resolution', - type=float, - default=512 * 512, - help='max image height * width, only for computer with limited vram', - ) - - parser.add_argument( - '--resize_short_edge', - type=int, - default=None, - help='resize short edge of the input image, if this arg is set, max_resolution will not be used', - ) - - parser.add_argument( - '--C', - type=int, - default=4, - help='latent channels', - ) - - parser.add_argument( - '--f', - type=int, - default=8, - help='downsampling factor', - ) - - parser.add_argument( - '--scale', - type=float, - default=7.5, - help='unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))', - ) - - parser.add_argument( - '--cond_tau', - type=float, - default=1.0, - help='timestamp parameter that determines until which step the adapter is applied, ' - 'similar as Prompt-to-Prompt tau', - ) - - parser.add_argument( - '--style_cond_tau', - type=float, - default=1.0, - help='timestamp parameter that determines until which step the adapter is applied, ' - 'similar as Prompt-to-Prompt tau', - ) - - parser.add_argument( - '--cond_weight', - type=float, - default=1.0, - help='the adapter features are multiplied by the cond_weight. The larger the cond_weight, the more aligned ' - 'the generated image and condition will be, but the generated quality may be reduced', - ) - - parser.add_argument( - '--seed', - type=int, - default=42, - ) - - parser.add_argument( - '--n_samples', - type=int, - default=4, - help='# of samples to generate', - ) - - return parser - - -def get_sd_models(opt): - """ - build stable diffusion model, sampler - """ - # SD - config = OmegaConf.load(f"{opt.config}") - model = load_model_from_config(config, opt.sd_ckpt, opt.vae_ckpt) - sd_model = model.to(opt.device) - - # sampler - if opt.sampler == 'plms': - sampler = PLMSSampler(model) - elif opt.sampler == 'ddim': - sampler = DDIMSampler(model) - else: - raise NotImplementedError - - return sd_model, sampler - - -def get_t2i_adapter_models(opt): - config = OmegaConf.load(f"{opt.config}") - model = load_model_from_config(config, opt.sd_ckpt, opt.vae_ckpt) - adapter_ckpt_path = getattr(opt, f'{opt.which_cond}_adapter_ckpt', None) - if adapter_ckpt_path is None: - adapter_ckpt_path = getattr(opt, 'adapter_ckpt') - adapter_ckpt = read_state_dict(adapter_ckpt_path) - new_state_dict = {} - for k, v in adapter_ckpt.items(): - if not k.startswith('adapter.'): - new_state_dict[f'adapter.{k}'] = v - else: - new_state_dict[k] = v - m, u = model.load_state_dict(new_state_dict, strict=False) - if len(u) > 0: - print(f"unexpected keys in loading adapter ckpt {adapter_ckpt_path}:") - print(u) - - model = model.to(opt.device) - - # sampler - if opt.sampler == 'plms': - sampler = PLMSSampler(model) - elif opt.sampler == 'ddim': - sampler = DDIMSampler(model) - else: - raise NotImplementedError - - return model, sampler - - -def get_cond_ch(cond_type: ExtraCondition): - if cond_type == ExtraCondition.sketch or cond_type == ExtraCondition.canny: - return 1 - return 3 - - -def get_adapters(opt, cond_type: ExtraCondition): - adapter = {} - cond_weight = getattr(opt, f'{cond_type.name}_weight', None) - if cond_weight is None: - cond_weight = getattr(opt, 
'cond_weight') - adapter['cond_weight'] = cond_weight - - if cond_type == ExtraCondition.style: - adapter['model'] = StyleAdapter(width=1024, context_dim=768, num_head=8, n_layes=3, num_token=8).to(opt.device) - elif cond_type == ExtraCondition.color: - adapter['model'] = Adapter_light( - cin=64 * get_cond_ch(cond_type), - channels=[320, 640, 1280, 1280], - nums_rb=4).to(opt.device) - else: - adapter['model'] = Adapter( - cin=64 * get_cond_ch(cond_type), - channels=[320, 640, 1280, 1280][:4], - nums_rb=2, - ksize=1, - sk=True, - use_conv=False).to(opt.device) - ckpt_path = getattr(opt, f'{cond_type.name}_adapter_ckpt', None) - if ckpt_path is None: - ckpt_path = getattr(opt, 'adapter_ckpt') - adapter['model'].load_state_dict(torch.load(ckpt_path)) - - return adapter - - -def diffusion_inference(opt, model, sampler, adapter_features, append_to_context=None): - # get text embedding - c = model.get_learned_conditioning([opt.prompt]) - if opt.scale != 1.0: - uc = model.get_learned_conditioning([opt.neg_prompt]) - else: - uc = None - c, uc = fix_cond_shapes(model, c, uc) - - if not hasattr(opt, 'H'): - opt.H = 512 - opt.W = 512 - shape = [opt.C, opt.H // opt.f, opt.W // opt.f] - - samples_latents, _ = sampler.sample( - S=opt.steps, - conditioning=c, - batch_size=1, - shape=shape, - verbose=False, - unconditional_guidance_scale=opt.scale, - unconditional_conditioning=uc, - x_T=None, - features_adapter=adapter_features, - append_to_context=append_to_context, - cond_tau=opt.cond_tau, - style_cond_tau=opt.style_cond_tau, - ) - - x_samples = model.decode_first_stage(samples_latents) - x_samples = torch.clamp((x_samples + 1.0) / 2.0, min=0.0, max=1.0) - - return x_samples diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/utils_image.py b/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/utils_image.py deleted file mode 100644 index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/utils_image.py +++ /dev/null @@ -1,916 +0,0 @@ -import os -import math -import random -import numpy as np -import torch -import cv2 -from torchvision.utils import make_grid -from datetime import datetime -#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py - - -os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - - -''' -# -------------------------------------------- -# Kai Zhang (github: https://github.com/cszn) -# 03/Mar/2019 -# -------------------------------------------- -# https://github.com/twhui/SRGAN-pyTorch -# https://github.com/xinntao/BasicSR -# -------------------------------------------- -''' - - -IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif'] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def get_timestamp(): - return datetime.now().strftime('%y%m%d-%H%M%S') - - -def imshow(x, title=None, cbar=False, figsize=None): - plt.figure(figsize=figsize) - plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray') - if title: - plt.title(title) - if cbar: - plt.colorbar() - plt.show() - - -def surf(Z, cmap='rainbow', figsize=None): - plt.figure(figsize=figsize) - ax3 = plt.axes(projection='3d') - - w, h = Z.shape[:2] - xx = np.arange(0,w,1) - yy = np.arange(0,h,1) - X, Y = np.meshgrid(xx, yy) - ax3.plot_surface(X,Y,Z,cmap=cmap) - #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap) - plt.show() - - -''' -# 
-------------------------------------------- -# get image pathes -# -------------------------------------------- -''' - - -def get_image_paths(dataroot): - paths = None # return None if dataroot is None - if dataroot is not None: - paths = sorted(_get_paths_from_images(dataroot)) - return paths - - -def _get_paths_from_images(path): - assert os.path.isdir(path), '{:s} is not a valid directory'.format(path) - images = [] - for dirpath, _, fnames in sorted(os.walk(path)): - for fname in sorted(fnames): - if is_image_file(fname): - img_path = os.path.join(dirpath, fname) - images.append(img_path) - assert images, '{:s} has no valid image file'.format(path) - return images - - -''' -# -------------------------------------------- -# split large images into small images -# -------------------------------------------- -''' - - -def patches_from_image(img, p_size=512, p_overlap=64, p_max=800): - w, h = img.shape[:2] - patches = [] - if w > p_max and h > p_max: - w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int)) - h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int)) - w1.append(w-p_size) - h1.append(h-p_size) -# print(w1) -# print(h1) - for i in w1: - for j in h1: - patches.append(img[i:i+p_size, j:j+p_size,:]) - else: - patches.append(img) - - return patches - - -def imssave(imgs, img_path): - """ - imgs: list, N images of size WxHxC - """ - img_name, ext = os.path.splitext(os.path.basename(img_path)) - - for i, img in enumerate(imgs): - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png') - cv2.imwrite(new_path, img) - - -def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000): - """ - split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size), - and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max) - will be splitted. - Args: - original_dataroot: - taget_dataroot: - p_size: size of small images - p_overlap: patch size in training is a good choice - p_max: images with smaller size than (p_max)x(p_max) keep unchanged. - """ - paths = get_image_paths(original_dataroot) - for img_path in paths: - # img_name, ext = os.path.splitext(os.path.basename(img_path)) - img = imread_uint(img_path, n_channels=n_channels) - patches = patches_from_image(img, p_size, p_overlap, p_max) - imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path))) - #if original_dataroot == taget_dataroot: - #del img_path - -''' -# -------------------------------------------- -# makedir -# -------------------------------------------- -''' - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) - - -def mkdirs(paths): - if isinstance(paths, str): - mkdir(paths) - else: - for path in paths: - mkdir(path) - - -def mkdir_and_rename(path): - if os.path.exists(path): - new_name = path + '_archived_' + get_timestamp() - print('Path already exists. 
Rename it to [{:s}]'.format(new_name)) - os.rename(path, new_name) - os.makedirs(path) - - -''' -# -------------------------------------------- -# read image from path -# opencv is fast, but read BGR numpy image -# -------------------------------------------- -''' - - -# -------------------------------------------- -# get uint8 image of size HxWxn_channles (RGB) -# -------------------------------------------- -def imread_uint(path, n_channels=3): - # input: path - # output: HxWx3(RGB or GGG), or HxWx1 (G) - if n_channels == 1: - img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE - img = np.expand_dims(img, axis=2) # HxWx1 - elif n_channels == 3: - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG - else: - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB - return img - - -# -------------------------------------------- -# matlab's imwrite -# -------------------------------------------- -def imsave(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - -def imwrite(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - - - -# -------------------------------------------- -# get single image of size HxWxn_channles (BGR) -# -------------------------------------------- -def read_img(path): - # read image by cv2 - # return: Numpy float32, HWC, BGR, [0,1] - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE - img = img.astype(np.float32) / 255. - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - # some images have 4 channels - if img.shape[2] > 3: - img = img[:, :, :3] - return img - - -''' -# -------------------------------------------- -# image format conversion -# -------------------------------------------- -# numpy(single) <---> numpy(unit) -# numpy(single) <---> tensor -# numpy(unit) <---> tensor -# -------------------------------------------- -''' - - -# -------------------------------------------- -# numpy(single) [0, 1] <---> numpy(unit) -# -------------------------------------------- - - -def uint2single(img): - - return np.float32(img/255.) - - -def single2uint(img): - - return np.uint8((img.clip(0, 1)*255.).round()) - - -def uint162single(img): - - return np.float32(img/65535.) - - -def single2uint16(img): - - return np.uint16((img.clip(0, 1)*65535.).round()) - - -# -------------------------------------------- -# numpy(unit) (HxWxC or HxW) <---> tensor -# -------------------------------------------- - - -# convert uint to 4-dimensional torch tensor -def uint2tensor4(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0) - - -# convert uint to 3-dimensional torch tensor -def uint2tensor3(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.) 
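As a quick illustration of the conversion helpers defined above, the sketch below round-trips a dummy uint8 HxWxC image through uint2single/single2uint and into a 1xCxHxW tensor with uint2tensor4. This is a minimal usage sketch, not part of the original module: the 64x48x3 test shape is an arbitrary assumption, and it is meant to be run with these helper functions already imported.

# usage sketch for the numpy/tensor conversion helpers (assumed to be imported
# from this module); the image size below is an arbitrary example
import numpy as np

img_uint = np.random.randint(0, 256, size=(64, 48, 3), dtype=np.uint8)  # HxWxC, RGB

img_single = uint2single(img_uint)   # float32 image in [0, 1]
img_back = single2uint(img_single)   # back to uint8; values survive the round trip
assert np.array_equal(img_uint, img_back)

img_tensor = uint2tensor4(img_uint)  # 1xCxHxW float tensor in [0, 1]
assert img_tensor.shape == (1, 3, 64, 48)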
- - -# convert 2/3/4-dimensional torch tensor to uint -def tensor2uint(img): - img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - return np.uint8((img*255.0).round()) - - -# -------------------------------------------- -# numpy(single) (HxWxC) <---> tensor -# -------------------------------------------- - - -# convert single (HxWxC) to 3-dimensional torch tensor -def single2tensor3(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float() - - -# convert single (HxWxC) to 4-dimensional torch tensor -def single2tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0) - - -# convert torch tensor to single -def tensor2single(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - - return img - -# convert torch tensor to single -def tensor2single3(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - elif img.ndim == 2: - img = np.expand_dims(img, axis=2) - return img - - -def single2tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0) - - -def single32tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0) - - -def single42tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float() - - -# from skimage.io import imread, imsave -def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): - ''' - Converts a torch Tensor into an image Numpy array of BGR channel order - Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order - Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default) - ''' - tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp - tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1] - n_dim = tensor.dim() - if n_dim == 4: - n_img = len(tensor) - img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 3: - img_np = tensor.numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 2: - img_np = tensor.numpy() - else: - raise TypeError( - 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim)) - if out_type == np.uint8: - img_np = (img_np * 255.0).round() - # Important. Unlike matlab, numpy.unit8() WILL NOT round by default. - return img_np.astype(out_type) - - -''' -# -------------------------------------------- -# Augmentation, flipe and/or rotate -# -------------------------------------------- -# The following two are enough. 
-# (1) augmet_img: numpy image of WxHxC or WxH -# (2) augment_img_tensor4: tensor image 1xCxWxH -# -------------------------------------------- -''' - - -def augment_img(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return np.flipud(np.rot90(img)) - elif mode == 2: - return np.flipud(img) - elif mode == 3: - return np.rot90(img, k=3) - elif mode == 4: - return np.flipud(np.rot90(img, k=2)) - elif mode == 5: - return np.rot90(img) - elif mode == 6: - return np.rot90(img, k=2) - elif mode == 7: - return np.flipud(np.rot90(img, k=3)) - - -def augment_img_tensor4(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return img.rot90(1, [2, 3]).flip([2]) - elif mode == 2: - return img.flip([2]) - elif mode == 3: - return img.rot90(3, [2, 3]) - elif mode == 4: - return img.rot90(2, [2, 3]).flip([2]) - elif mode == 5: - return img.rot90(1, [2, 3]) - elif mode == 6: - return img.rot90(2, [2, 3]) - elif mode == 7: - return img.rot90(3, [2, 3]).flip([2]) - - -def augment_img_tensor(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - img_size = img.size() - img_np = img.data.cpu().numpy() - if len(img_size) == 3: - img_np = np.transpose(img_np, (1, 2, 0)) - elif len(img_size) == 4: - img_np = np.transpose(img_np, (2, 3, 1, 0)) - img_np = augment_img(img_np, mode=mode) - img_tensor = torch.from_numpy(np.ascontiguousarray(img_np)) - if len(img_size) == 3: - img_tensor = img_tensor.permute(2, 0, 1) - elif len(img_size) == 4: - img_tensor = img_tensor.permute(3, 2, 0, 1) - - return img_tensor.type_as(img) - - -def augment_img_np3(img, mode=0): - if mode == 0: - return img - elif mode == 1: - return img.transpose(1, 0, 2) - elif mode == 2: - return img[::-1, :, :] - elif mode == 3: - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 4: - return img[:, ::-1, :] - elif mode == 5: - img = img[:, ::-1, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 6: - img = img[:, ::-1, :] - img = img[::-1, :, :] - return img - elif mode == 7: - img = img[:, ::-1, :] - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - - -def augment_imgs(img_list, hflip=True, rot=True): - # horizontal flip OR rotate - hflip = hflip and random.random() < 0.5 - vflip = rot and random.random() < 0.5 - rot90 = rot and random.random() < 0.5 - - def _augment(img): - if hflip: - img = img[:, ::-1, :] - if vflip: - img = img[::-1, :, :] - if rot90: - img = img.transpose(1, 0, 2) - return img - - return [_augment(img) for img in img_list] - - -''' -# -------------------------------------------- -# modcrop and shave -# -------------------------------------------- -''' - - -def modcrop(img_in, scale): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - if img.ndim == 2: - H, W = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r] - elif img.ndim == 3: - H, W, C = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r, :] - else: - raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim)) - return img - - -def shave(img_in, border=0): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - h, w = img.shape[:2] - img = img[border:h-border, border:w-border] - return img - - -''' -# -------------------------------------------- -# image processing process on numpy image -# channel_convert(in_c, tar_type, img_list): -# rgb2ycbcr(img, only_y=True): -# bgr2ycbcr(img, only_y=True): -# 
ycbcr2rgb(img): -# -------------------------------------------- -''' - - -def rgb2ycbcr(img, only_y=True): - '''same as matlab rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def ycbcr2rgb(img): - '''same as matlab ycbcr2rgb - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def bgr2ycbcr(img, only_y=True): - '''bgr version of rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def channel_convert(in_c, tar_type, img_list): - # conversion among BGR, gray and y - if in_c == 3 and tar_type == 'gray': # BGR to gray - gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list] - return [np.expand_dims(img, axis=2) for img in gray_list] - elif in_c == 3 and tar_type == 'y': # BGR to y - y_list = [bgr2ycbcr(img, only_y=True) for img in img_list] - return [np.expand_dims(img, axis=2) for img in y_list] - elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR - return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list] - else: - return img_list - - -''' -# -------------------------------------------- -# metric, PSNR and SSIM -# -------------------------------------------- -''' - - -# -------------------------------------------- -# PSNR -# -------------------------------------------- -def calculate_psnr(img1, img2, border=0): - # img1 and img2 have range [0, 255] - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20 * math.log10(255.0 / math.sqrt(mse)) - - -# -------------------------------------------- -# SSIM -# -------------------------------------------- -def calculate_ssim(img1, img2, border=0): - '''calculate SSIM - the same outputs as MATLAB's - img1, img2: [0, 255] - ''' - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = 
img2[border:h-border, border:w-border] - - if img1.ndim == 2: - return ssim(img1, img2) - elif img1.ndim == 3: - if img1.shape[2] == 3: - ssims = [] - for i in range(3): - ssims.append(ssim(img1[:,:,i], img2[:,:,i])) - return np.array(ssims).mean() - elif img1.shape[2] == 1: - return ssim(np.squeeze(img1), np.squeeze(img2)) - else: - raise ValueError('Wrong input image dimensions.') - - -def ssim(img1, img2): - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * - (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -''' -# -------------------------------------------- -# matlab's bicubic imresize (numpy and torch) [0, 1] -# -------------------------------------------- -''' - - -# matlab 'imresize' function, now only support 'bicubic' -def cubic(x): - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \ - (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing): - if (scale < 1) and (antialiasing): - # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5+scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - P = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view( - 1, P).expand(out_length, P) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices - # apply cubic kernel - if (scale < 1) and (antialiasing): - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, P) - - # If a column in weights is all zero, get rid of it. only consider the first and last column. 
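    # Because each output pixel draws on a contiguous window of input pixels and the
    # cubic kernel has contiguous support, all-zero weight columns can only occur at
    # the edges of the window, which is why only the first and last columns are
    # inspected before narrowing.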
- weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, P - 2) - weights = weights.narrow(1, 1, P - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, P - 2) - weights = weights.narrow(1, 0, P - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -# -------------------------------------------- -# imresize for tensor image [0, 1] -# -------------------------------------------- -def imresize(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: pytorch tensor, CHW or HW [0,1] - # output: CHW or HW [0,1] w/o round - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(0) - in_C, in_H, in_W = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W) - img_aug.narrow(1, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:, :sym_len_Hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_He:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_C, out_H, in_W) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We) - out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_Ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_We:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_C, out_H, out_W) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - return out_2 - - -# -------------------------------------------- -# imresize for numpy image [0, 1] -# -------------------------------------------- -def 
imresize_np(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: Numpy, HWC or HW [0,1] - # output: HWC or HW [0,1] w/o round - img = torch.from_numpy(img) - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(2) - - in_H, in_W, in_C = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C) - img_aug.narrow(0, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:sym_len_Hs, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[-sym_len_He:, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(out_H, in_W, in_C) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C) - out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :sym_len_Ws, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, -sym_len_We:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(out_H, out_W, in_C) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - - return out_2.numpy() - - -if __name__ == '__main__': - print('---') -# img = imread_uint('test.bmp', 3) -# img = uint2single(img) -# img_bicubic = imresize_np(img, 1/4) \ No newline at end of file diff --git a/spaces/Aditya9790/yolo7-object-tracking/utils/add_nms.py b/spaces/Aditya9790/yolo7-object-tracking/utils/add_nms.py deleted file mode 100644 index 0a1f7976a2051d07bb028f9fd68eb52f45234f43..0000000000000000000000000000000000000000 --- a/spaces/Aditya9790/yolo7-object-tracking/utils/add_nms.py +++ /dev/null @@ -1,155 +0,0 @@ -import numpy as np -import onnx -from onnx import shape_inference -try: - import onnx_graphsurgeon as gs -except Exception as e: - print('Import onnx_graphsurgeon failure: %s' % e) - -import logging - -LOGGER = logging.getLogger(__name__) - -class RegisterNMS(object): - def __init__( - self, - 
onnx_model_path: str, - precision: str = "fp32", - ): - - self.graph = gs.import_onnx(onnx.load(onnx_model_path)) - assert self.graph - LOGGER.info("ONNX graph created successfully") - # Fold constants via ONNX-GS that PyTorch2ONNX may have missed - self.graph.fold_constants() - self.precision = precision - self.batch_size = 1 - def infer(self): - """ - Sanitize the graph by cleaning any unconnected nodes, do a topological resort, - and fold constant inputs values. When possible, run shape inference on the - ONNX graph to determine tensor shapes. - """ - for _ in range(3): - count_before = len(self.graph.nodes) - - self.graph.cleanup().toposort() - try: - for node in self.graph.nodes: - for o in node.outputs: - o.shape = None - model = gs.export_onnx(self.graph) - model = shape_inference.infer_shapes(model) - self.graph = gs.import_onnx(model) - except Exception as e: - LOGGER.info(f"Shape inference could not be performed at this time:\n{e}") - try: - self.graph.fold_constants(fold_shapes=True) - except TypeError as e: - LOGGER.error( - "This version of ONNX GraphSurgeon does not support folding shapes, " - f"please upgrade your onnx_graphsurgeon module. Error:\n{e}" - ) - raise - - count_after = len(self.graph.nodes) - if count_before == count_after: - # No new folding occurred in this iteration, so we can stop for now. - break - - def save(self, output_path): - """ - Save the ONNX model to the given location. - Args: - output_path: Path pointing to the location where to write - out the updated ONNX model. - """ - self.graph.cleanup().toposort() - model = gs.export_onnx(self.graph) - onnx.save(model, output_path) - LOGGER.info(f"Saved ONNX model to {output_path}") - - def register_nms( - self, - *, - score_thresh: float = 0.25, - nms_thresh: float = 0.45, - detections_per_img: int = 100, - ): - """ - Register the ``EfficientNMS_TRT`` plugin node. - NMS expects these shapes for its input tensors: - - box_net: [batch_size, number_boxes, 4] - - class_net: [batch_size, number_boxes, number_labels] - Args: - score_thresh (float): The scalar threshold for score (low scoring boxes are removed). - nms_thresh (float): The scalar threshold for IOU (new boxes that have high IOU - overlap with previously selected boxes are removed). - detections_per_img (int): Number of best detections to keep after NMS. - """ - - self.infer() - # Find the concat node at the end of the network - op_inputs = self.graph.outputs - op = "EfficientNMS_TRT" - attrs = { - "plugin_version": "1", - "background_class": -1, # no background class - "max_output_boxes": detections_per_img, - "score_threshold": score_thresh, - "iou_threshold": nms_thresh, - "score_activation": False, - "box_coding": 0, - } - - if self.precision == "fp32": - dtype_output = np.float32 - elif self.precision == "fp16": - dtype_output = np.float16 - else: - raise NotImplementedError(f"Currently not supports precision: {self.precision}") - - # NMS Outputs - output_num_detections = gs.Variable( - name="num_dets", - dtype=np.int32, - shape=[self.batch_size, 1], - ) # A scalar indicating the number of valid detections per batch image. 
- output_boxes = gs.Variable( - name="det_boxes", - dtype=dtype_output, - shape=[self.batch_size, detections_per_img, 4], - ) - output_scores = gs.Variable( - name="det_scores", - dtype=dtype_output, - shape=[self.batch_size, detections_per_img], - ) - output_labels = gs.Variable( - name="det_classes", - dtype=np.int32, - shape=[self.batch_size, detections_per_img], - ) - - op_outputs = [output_num_detections, output_boxes, output_scores, output_labels] - - # Create the NMS Plugin node with the selected inputs. The outputs of the node will also - # become the final outputs of the graph. - self.graph.layer(op=op, name="batched_nms", inputs=op_inputs, outputs=op_outputs, attrs=attrs) - LOGGER.info(f"Created NMS plugin '{op}' with attributes: {attrs}") - - self.graph.outputs = op_outputs - - self.infer() - - def save(self, output_path): - """ - Save the ONNX model to the given location. - Args: - output_path: Path pointing to the location where to write - out the updated ONNX model. - """ - self.graph.cleanup().toposort() - model = gs.export_onnx(self.graph) - onnx.save(model, output_path) - LOGGER.info(f"Saved ONNX model to {output_path}") diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Ball.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Ball.d.ts deleted file mode 100644 index 2e1d6076c75d1adc8a639b5c7410d8905f84a046..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Ball.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import Base from '../base/Base'; -export default class Ball extends Base { } \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/Methods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/Methods.js deleted file mode 100644 index 3c21efa258bc47863843c86131ec6b2c2dca8ee3..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/Methods.js +++ /dev/null @@ -1,25 +0,0 @@ -import GetChildrenWidth from './GetChildrenWidth.js'; -import GetChildrenHeight from './GetChildrenHeight.js'; -import GetExpandedChildWidth from './GetExpandedChildWidth.js'; -import GetExpandedChildHeight from './GetExpandedChildHeight.js'; -import GetChildrenSizers from './GetChildrenSizers.js'; -import LayoutChildren from './LayoutChildren.js'; -import AddChildMethods from './AddChildMethods.js'; -import RemoveChildMethods from './RemoveChildMethods.js'; - -var methods = { - getChildrenWidth: GetChildrenWidth, - getChildrenHeight: GetChildrenHeight, - getExpandedChildWidth: GetExpandedChildWidth, - getExpandedChildHeight: GetExpandedChildHeight, - getChildrenSizers: GetChildrenSizers, - layoutChildren: LayoutChildren, -}; - -Object.assign( - methods, - AddChildMethods, - RemoveChildMethods -); - -export default methods; \ No newline at end of file diff --git a/spaces/AlexWang/lama/saicinpainting/training/modules/depthwise_sep_conv.py b/spaces/AlexWang/lama/saicinpainting/training/modules/depthwise_sep_conv.py deleted file mode 100644 index 83dd15c3df1d9f40baf0091a373fa224532c9ddd..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/training/modules/depthwise_sep_conv.py +++ /dev/null @@ -1,17 +0,0 @@ -import torch -import torch.nn as nn - -class DepthWiseSeperableConv(nn.Module): - def __init__(self, in_dim, out_dim, *args, **kwargs): - super().__init__() - 
if 'groups' in kwargs: - # ignoring groups for Depthwise Sep Conv - del kwargs['groups'] - - self.depthwise = nn.Conv2d(in_dim, in_dim, *args, groups=in_dim, **kwargs) - self.pointwise = nn.Conv2d(in_dim, out_dim, kernel_size=1) - - def forward(self, x): - out = self.depthwise(x) - out = self.pointwise(out) - return out \ No newline at end of file diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py deleted file mode 100644 index 258b618cd338322365dfa25bec468a0a3f70ccd1..0000000000000000000000000000000000000000 --- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py +++ /dev/null @@ -1,36 +0,0 @@ -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import IPython.display as ipd -import torch -import commons -import utils -import ONNXVITS_infer -from text import text_to_sequence - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json") - -net_g = ONNXVITS_infer.SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) -_ = net_g.eval() - -_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g) - -text1 = get_text("おはようございます。", hps) -stn_tst = text1 -with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - sid = torch.LongTensor([0]) - audio = net_g.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy() -print(audio) \ No newline at end of file diff --git a/spaces/AlignmentResearch/tuned-lens/Dockerfile b/spaces/AlignmentResearch/tuned-lens/Dockerfile deleted file mode 100644 index b37462737be5068087d02d7f3e81b162491b5acd..0000000000000000000000000000000000000000 --- a/spaces/AlignmentResearch/tuned-lens/Dockerfile +++ /dev/null @@ -1,25 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 user - -# Switch to the "user" user -USER user - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . 
$HOME/app - -CMD ["python", "app.py"] diff --git a/spaces/Ameaou/academic-chatgpt3.1/check_proxy.py b/spaces/Ameaou/academic-chatgpt3.1/check_proxy.py deleted file mode 100644 index 28711a8c140bfcdb0683efd924032e6ccc0f0df8..0000000000000000000000000000000000000000 --- a/spaces/Ameaou/academic-chatgpt3.1/check_proxy.py +++ /dev/null @@ -1,149 +0,0 @@ - -def check_proxy(proxies): - import requests - proxies_https = proxies['https'] if proxies is not None else '无' - try: - response = requests.get("https://ipapi.co/json/", - proxies=proxies, timeout=4) - data = response.json() - print(f'查询代理的地理位置,返回的结果是{data}') - if 'country_name' in data: - country = data['country_name'] - result = f"代理配置 {proxies_https}, 代理所在地:{country}" - elif 'error' in data: - result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限" - print(result) - return result - except: - result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效" - print(result) - return result - - -def backup_and_download(current_version, remote_version): - """ - 一键更新协议:备份和下载 - """ - from toolbox import get_conf - import shutil - import os - import requests - import zipfile - os.makedirs(f'./history', exist_ok=True) - backup_dir = f'./history/backup-{current_version}/' - new_version_dir = f'./history/new-version-{remote_version}/' - if os.path.exists(new_version_dir): - return new_version_dir - os.makedirs(new_version_dir) - shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history']) - proxies, = get_conf('proxies') - r = requests.get( - 'https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True) - zip_file_path = backup_dir+'/master.zip' - with open(zip_file_path, 'wb+') as f: - f.write(r.content) - dst_path = new_version_dir - with zipfile.ZipFile(zip_file_path, "r") as zip_ref: - for zip_info in zip_ref.infolist(): - dst_file_path = os.path.join(dst_path, zip_info.filename) - if os.path.exists(dst_file_path): - os.remove(dst_file_path) - zip_ref.extract(zip_info, dst_path) - return new_version_dir - - -def patch_and_restart(path): - """ - 一键更新协议:覆盖和重启 - """ - import distutils - import shutil - import os - import sys - import time - from colorful import print亮黄, print亮绿, print亮红 - # if not using config_private, move origin config.py as config_private.py - if not os.path.exists('config_private.py'): - print亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,', - '另外您可以随时在history子文件夹下找回旧版的程序。') - shutil.copyfile('config.py', 'config_private.py') - distutils.dir_util.copy_tree(path+'/chatgpt_academic-master', './') - import subprocess - print亮绿('代码已经更新,即将更新pip包依赖……') - for i in reversed(range(5)): time.sleep(1); print(i) - try: - subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt']) - except: - print亮红('pip包依赖安装出现问题,需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。') - print亮绿('更新完成,您可以随时在history子文件夹下找回旧版的程序,5s之后重启') - print亮红('假如重启失败,您可能需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。') - print(' ------------------------------ -----------------------------------') - for i in reversed(range(8)): time.sleep(1); print(i) - os.execl(sys.executable, sys.executable, *sys.argv) - - -def get_current_version(): - import json - try: - with open('./version', 'r', encoding='utf8') as f: - current_version = json.loads(f.read())['version'] - except: - current_version = "" - return current_version - - -def auto_update(): - """ - 一键更新协议:查询版本和用户意见 - """ - try: - from toolbox import get_conf - 
import requests - import time - import json - proxies, = get_conf('proxies') - response = requests.get( - "https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5) - remote_json_data = json.loads(response.text) - remote_version = remote_json_data['version'] - if remote_json_data["show_feature"]: - new_feature = "新功能:" + remote_json_data["new_feature"] - else: - new_feature = "" - with open('./version', 'r', encoding='utf8') as f: - current_version = f.read() - current_version = json.loads(current_version)['version'] - if (remote_version - current_version) >= 0.01: - from colorful import print亮黄 - print亮黄( - f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}') - print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n') - user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?') - if user_instruction in ['Y', 'y']: - path = backup_and_download(current_version, remote_version) - try: - patch_and_restart(path) - except: - print('更新失败。') - else: - print('自动更新程序:已禁用') - return - else: - return - except: - print('自动更新程序:已禁用') - -def warm_up_modules(): - print('正在执行一些模块的预热...') - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - enc.encode("模块预热", disallowed_special=()) - enc = model_info["gpt-4"]['tokenizer'] - enc.encode("模块预热", disallowed_special=()) - -if __name__ == '__main__': - import os - os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染 - from toolbox import get_conf - proxies, = get_conf('proxies') - check_proxy(proxies) diff --git a/spaces/Andy1621/uniformer_image_detection/README.md b/spaces/Andy1621/uniformer_image_detection/README.md deleted file mode 100644 index b64b353e4e012c612228f0027483770d4a8ecf72..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Uniformer_image_detection -emoji: 🌍 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.0.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_fpn_1x_coco.py deleted file mode 100644 index 26f95a3402f9fd2d54c5919484e2f4958beb8a34..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_fpn_1x_coco.py +++ /dev/null @@ -1,18 +0,0 @@ -_base_ = [ - '../_base_/models/rpn_r50_fpn.py', '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_label=False), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes']), -] -data = dict(train=dict(pipeline=train_pipeline)) -evaluation = dict(interval=1, metric='proposal_fast') diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py deleted file 
mode 100644 index a441013a4c1adc39fc064dbac23caaac9efdc4a6..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/psanet_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py' -] diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes.py deleted file mode 100644 index 284be6d09af1806b99bee5b85286b55ce02e8cbd..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './pspnet_r50-d8_769x769_80k_cityscapes.py' -model = dict( - pretrained='torchvision://resnet18', - backbone=dict(type='ResNet', depth=18), - decode_head=dict( - in_channels=512, - channels=128, - ), - auxiliary_head=dict(in_channels=256, channels=64)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-stream.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-stream.py deleted file mode 100644 index 966ca6f62d97e25557037602bb2202b7197f7066..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-stream.py +++ /dev/null @@ -1,86 +0,0 @@ -import asyncio -import json -import sys - -try: - import websockets -except ImportError: - print("Websockets package not found. Make sure it's installed.") - -# For local streaming, the websockets are hosted without ssl - ws:// -HOST = 'localhost:5005' -URI = f'ws://{HOST}/api/v1/stream' - -# For reverse-proxied streaming, the remote will likely host with ssl - wss:// -# URI = 'wss://your-uri-here.trycloudflare.com/api/v1/stream' - - -async def run(context): - # Note: the selected defaults change from time to time. - request = { - 'prompt': context, - 'max_new_tokens': 250, - 'auto_max_new_tokens': False, - 'max_tokens_second': 0, - - # Generation params. If 'preset' is set to different than 'None', the values - # in presets/preset-name.yaml are used instead of the individual numbers. 
- 'preset': 'None', - 'do_sample': True, - 'temperature': 0.7, - 'top_p': 0.1, - 'typical_p': 1, - 'epsilon_cutoff': 0, # In units of 1e-4 - 'eta_cutoff': 0, # In units of 1e-4 - 'tfs': 1, - 'top_a': 0, - 'repetition_penalty': 1.18, - 'repetition_penalty_range': 0, - 'top_k': 40, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0, - 'length_penalty': 1, - 'early_stopping': False, - 'mirostat_mode': 0, - 'mirostat_tau': 5, - 'mirostat_eta': 0.1, - 'grammar_string': '', - 'guidance_scale': 1, - 'negative_prompt': '', - - 'seed': -1, - 'add_bos_token': True, - 'truncation_length': 2048, - 'ban_eos_token': False, - 'custom_token_bans': '', - 'skip_special_tokens': True, - 'stopping_strings': [] - } - - async with websockets.connect(URI, ping_interval=None) as websocket: - await websocket.send(json.dumps(request)) - - yield context # Remove this if you just want to see the reply - - while True: - incoming_data = await websocket.recv() - incoming_data = json.loads(incoming_data) - - match incoming_data['event']: - case 'text_stream': - yield incoming_data['text'] - case 'stream_end': - return - - -async def print_response_stream(prompt): - async for response in run(prompt): - print(response, end='') - sys.stdout.flush() # If we don't flush, we won't see tokens in realtime. - - -if __name__ == '__main__': - prompt = "In order to make homemade bread, follow these steps:\n1)" - asyncio.run(print_response_stream(prompt)) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/session.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/session.py deleted file mode 100644 index 6c40ade1595df0ed4d2963b819211491d55b0aa5..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/session.py +++ /dev/null @@ -1,517 +0,0 @@ -"""PipSession and supporting code, containing all pip-specific -network request configuration and behavior. -""" - -import email.utils -import io -import ipaddress -import json -import logging -import mimetypes -import os -import platform -import shutil -import subprocess -import sys -import urllib.parse -import warnings -from typing import ( - TYPE_CHECKING, - Any, - Dict, - Generator, - List, - Mapping, - Optional, - Sequence, - Tuple, - Union, -) - -from pip._vendor import requests, urllib3 -from pip._vendor.cachecontrol import CacheControlAdapter as _BaseCacheControlAdapter -from pip._vendor.requests.adapters import DEFAULT_POOLBLOCK, BaseAdapter -from pip._vendor.requests.adapters import HTTPAdapter as _BaseHTTPAdapter -from pip._vendor.requests.models import PreparedRequest, Response -from pip._vendor.requests.structures import CaseInsensitiveDict -from pip._vendor.urllib3.connectionpool import ConnectionPool -from pip._vendor.urllib3.exceptions import InsecureRequestWarning - -from pip import __version__ -from pip._internal.metadata import get_default_environment -from pip._internal.models.link import Link -from pip._internal.network.auth import MultiDomainBasicAuth -from pip._internal.network.cache import SafeFileCache - -# Import ssl from compat so the initial import occurs in only one place. 
-from pip._internal.utils.compat import has_tls -from pip._internal.utils.glibc import libc_ver -from pip._internal.utils.misc import build_url_from_netloc, parse_netloc -from pip._internal.utils.urls import url_to_path - -if TYPE_CHECKING: - from ssl import SSLContext - - from pip._vendor.urllib3.poolmanager import PoolManager - - -logger = logging.getLogger(__name__) - -SecureOrigin = Tuple[str, str, Optional[Union[int, str]]] - - -# Ignore warning raised when using --trusted-host. -warnings.filterwarnings("ignore", category=InsecureRequestWarning) - - -SECURE_ORIGINS: List[SecureOrigin] = [ - # protocol, hostname, port - # Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC) - ("https", "*", "*"), - ("*", "localhost", "*"), - ("*", "127.0.0.0/8", "*"), - ("*", "::1/128", "*"), - ("file", "*", None), - # ssh is always secure. - ("ssh", "*", "*"), -] - - -# These are environment variables present when running under various -# CI systems. For each variable, some CI systems that use the variable -# are indicated. The collection was chosen so that for each of a number -# of popular systems, at least one of the environment variables is used. -# This list is used to provide some indication of and lower bound for -# CI traffic to PyPI. Thus, it is okay if the list is not comprehensive. -# For more background, see: https://github.com/pypa/pip/issues/5499 -CI_ENVIRONMENT_VARIABLES = ( - # Azure Pipelines - "BUILD_BUILDID", - # Jenkins - "BUILD_ID", - # AppVeyor, CircleCI, Codeship, Gitlab CI, Shippable, Travis CI - "CI", - # Explicit environment variable. - "PIP_IS_CI", -) - - -def looks_like_ci() -> bool: - """ - Return whether it looks like pip is running under CI. - """ - # We don't use the method of checking for a tty (e.g. using isatty()) - # because some CI systems mimic a tty (e.g. Travis CI). Thus that - # method doesn't provide definitive information in either direction. - return any(name in os.environ for name in CI_ENVIRONMENT_VARIABLES) - - -def user_agent() -> str: - """ - Return a string representing the user agent. 
- """ - data: Dict[str, Any] = { - "installer": {"name": "pip", "version": __version__}, - "python": platform.python_version(), - "implementation": { - "name": platform.python_implementation(), - }, - } - - if data["implementation"]["name"] == "CPython": - data["implementation"]["version"] = platform.python_version() - elif data["implementation"]["name"] == "PyPy": - pypy_version_info = sys.pypy_version_info # type: ignore - if pypy_version_info.releaselevel == "final": - pypy_version_info = pypy_version_info[:3] - data["implementation"]["version"] = ".".join( - [str(x) for x in pypy_version_info] - ) - elif data["implementation"]["name"] == "Jython": - # Complete Guess - data["implementation"]["version"] = platform.python_version() - elif data["implementation"]["name"] == "IronPython": - # Complete Guess - data["implementation"]["version"] = platform.python_version() - - if sys.platform.startswith("linux"): - from pip._vendor import distro - - linux_distribution = distro.name(), distro.version(), distro.codename() - distro_infos: Dict[str, Any] = dict( - filter( - lambda x: x[1], - zip(["name", "version", "id"], linux_distribution), - ) - ) - libc = dict( - filter( - lambda x: x[1], - zip(["lib", "version"], libc_ver()), - ) - ) - if libc: - distro_infos["libc"] = libc - if distro_infos: - data["distro"] = distro_infos - - if sys.platform.startswith("darwin") and platform.mac_ver()[0]: - data["distro"] = {"name": "macOS", "version": platform.mac_ver()[0]} - - if platform.system(): - data.setdefault("system", {})["name"] = platform.system() - - if platform.release(): - data.setdefault("system", {})["release"] = platform.release() - - if platform.machine(): - data["cpu"] = platform.machine() - - if has_tls(): - import _ssl as ssl - - data["openssl_version"] = ssl.OPENSSL_VERSION - - setuptools_dist = get_default_environment().get_distribution("setuptools") - if setuptools_dist is not None: - data["setuptools_version"] = str(setuptools_dist.version) - - if shutil.which("rustc") is not None: - # If for any reason `rustc --version` fails, silently ignore it - try: - rustc_output = subprocess.check_output( - ["rustc", "--version"], stderr=subprocess.STDOUT, timeout=0.5 - ) - except Exception: - pass - else: - if rustc_output.startswith(b"rustc "): - # The format of `rustc --version` is: - # `b'rustc 1.52.1 (9bc8c42bb 2021-05-09)\n'` - # We extract just the middle (1.52.1) part - data["rustc_version"] = rustc_output.split(b" ")[1].decode() - - # Use None rather than False so as not to give the impression that - # pip knows it is not being run under CI. Rather, it is a null or - # inconclusive result. Also, we include some value rather than no - # value to make it easier to know that the check has been run. 
- data["ci"] = True if looks_like_ci() else None - - user_data = os.environ.get("PIP_USER_AGENT_USER_DATA") - if user_data is not None: - data["user_data"] = user_data - - return "{data[installer][name]}/{data[installer][version]} {json}".format( - data=data, - json=json.dumps(data, separators=(",", ":"), sort_keys=True), - ) - - -class LocalFSAdapter(BaseAdapter): - def send( - self, - request: PreparedRequest, - stream: bool = False, - timeout: Optional[Union[float, Tuple[float, float]]] = None, - verify: Union[bool, str] = True, - cert: Optional[Union[str, Tuple[str, str]]] = None, - proxies: Optional[Mapping[str, str]] = None, - ) -> Response: - pathname = url_to_path(request.url) - - resp = Response() - resp.status_code = 200 - resp.url = request.url - - try: - stats = os.stat(pathname) - except OSError as exc: - # format the exception raised as a io.BytesIO object, - # to return a better error message: - resp.status_code = 404 - resp.reason = type(exc).__name__ - resp.raw = io.BytesIO(f"{resp.reason}: {exc}".encode("utf8")) - else: - modified = email.utils.formatdate(stats.st_mtime, usegmt=True) - content_type = mimetypes.guess_type(pathname)[0] or "text/plain" - resp.headers = CaseInsensitiveDict( - { - "Content-Type": content_type, - "Content-Length": stats.st_size, - "Last-Modified": modified, - } - ) - - resp.raw = open(pathname, "rb") - resp.close = resp.raw.close - - return resp - - def close(self) -> None: - pass - - -class _SSLContextAdapterMixin: - """Mixin to add the ``ssl_context`` constructor argument to HTTP adapters. - - The additional argument is forwarded directly to the pool manager. This allows us - to dynamically decide what SSL store to use at runtime, which is used to implement - the optional ``truststore`` backend. - """ - - def __init__( - self, - *, - ssl_context: Optional["SSLContext"] = None, - **kwargs: Any, - ) -> None: - self._ssl_context = ssl_context - super().__init__(**kwargs) - - def init_poolmanager( - self, - connections: int, - maxsize: int, - block: bool = DEFAULT_POOLBLOCK, - **pool_kwargs: Any, - ) -> "PoolManager": - if self._ssl_context is not None: - pool_kwargs.setdefault("ssl_context", self._ssl_context) - return super().init_poolmanager( # type: ignore[misc] - connections=connections, - maxsize=maxsize, - block=block, - **pool_kwargs, - ) - - -class HTTPAdapter(_SSLContextAdapterMixin, _BaseHTTPAdapter): - pass - - -class CacheControlAdapter(_SSLContextAdapterMixin, _BaseCacheControlAdapter): - pass - - -class InsecureHTTPAdapter(HTTPAdapter): - def cert_verify( - self, - conn: ConnectionPool, - url: str, - verify: Union[bool, str], - cert: Optional[Union[str, Tuple[str, str]]], - ) -> None: - super().cert_verify(conn=conn, url=url, verify=False, cert=cert) - - -class InsecureCacheControlAdapter(CacheControlAdapter): - def cert_verify( - self, - conn: ConnectionPool, - url: str, - verify: Union[bool, str], - cert: Optional[Union[str, Tuple[str, str]]], - ) -> None: - super().cert_verify(conn=conn, url=url, verify=False, cert=cert) - - -class PipSession(requests.Session): - timeout: Optional[int] = None - - def __init__( - self, - *args: Any, - retries: int = 0, - cache: Optional[str] = None, - trusted_hosts: Sequence[str] = (), - index_urls: Optional[List[str]] = None, - ssl_context: Optional["SSLContext"] = None, - **kwargs: Any, - ) -> None: - """ - :param trusted_hosts: Domains not to emit warnings for when not using - HTTPS. 
- """ - super().__init__(*args, **kwargs) - - # Namespace the attribute with "pip_" just in case to prevent - # possible conflicts with the base class. - self.pip_trusted_origins: List[Tuple[str, Optional[int]]] = [] - - # Attach our User Agent to the request - self.headers["User-Agent"] = user_agent() - - # Attach our Authentication handler to the session - self.auth = MultiDomainBasicAuth(index_urls=index_urls) - - # Create our urllib3.Retry instance which will allow us to customize - # how we handle retries. - retries = urllib3.Retry( - # Set the total number of retries that a particular request can - # have. - total=retries, - # A 503 error from PyPI typically means that the Fastly -> Origin - # connection got interrupted in some way. A 503 error in general - # is typically considered a transient error so we'll go ahead and - # retry it. - # A 500 may indicate transient error in Amazon S3 - # A 520 or 527 - may indicate transient error in CloudFlare - status_forcelist=[500, 503, 520, 527], - # Add a small amount of back off between failed requests in - # order to prevent hammering the service. - backoff_factor=0.25, - ) # type: ignore - - # Our Insecure HTTPAdapter disables HTTPS validation. It does not - # support caching so we'll use it for all http:// URLs. - # If caching is disabled, we will also use it for - # https:// hosts that we've marked as ignoring - # TLS errors for (trusted-hosts). - insecure_adapter = InsecureHTTPAdapter(max_retries=retries) - - # We want to _only_ cache responses on securely fetched origins or when - # the host is specified as trusted. We do this because - # we can't validate the response of an insecurely/untrusted fetched - # origin, and we don't want someone to be able to poison the cache and - # require manual eviction from the cache to fix it. - if cache: - secure_adapter = CacheControlAdapter( - cache=SafeFileCache(cache), - max_retries=retries, - ssl_context=ssl_context, - ) - self._trusted_host_adapter = InsecureCacheControlAdapter( - cache=SafeFileCache(cache), - max_retries=retries, - ) - else: - secure_adapter = HTTPAdapter(max_retries=retries, ssl_context=ssl_context) - self._trusted_host_adapter = insecure_adapter - - self.mount("https://", secure_adapter) - self.mount("http://", insecure_adapter) - - # Enable file:// urls - self.mount("file://", LocalFSAdapter()) - - for host in trusted_hosts: - self.add_trusted_host(host, suppress_logging=True) - - def update_index_urls(self, new_index_urls: List[str]) -> None: - """ - :param new_index_urls: New index urls to update the authentication - handler with. - """ - self.auth.index_urls = new_index_urls - - def add_trusted_host( - self, host: str, source: Optional[str] = None, suppress_logging: bool = False - ) -> None: - """ - :param host: It is okay to provide a host that has previously been - added. - :param source: An optional source string, for logging where the host - string came from. - """ - if not suppress_logging: - msg = f"adding trusted host: {host!r}" - if source is not None: - msg += f" (from {source})" - logger.info(msg) - - host_port = parse_netloc(host) - if host_port not in self.pip_trusted_origins: - self.pip_trusted_origins.append(host_port) - - self.mount( - build_url_from_netloc(host, scheme="http") + "/", self._trusted_host_adapter - ) - self.mount(build_url_from_netloc(host) + "/", self._trusted_host_adapter) - if not host_port[1]: - self.mount( - build_url_from_netloc(host, scheme="http") + ":", - self._trusted_host_adapter, - ) - # Mount wildcard ports for the same host. 
- self.mount(build_url_from_netloc(host) + ":", self._trusted_host_adapter) - - def iter_secure_origins(self) -> Generator[SecureOrigin, None, None]: - yield from SECURE_ORIGINS - for host, port in self.pip_trusted_origins: - yield ("*", host, "*" if port is None else port) - - def is_secure_origin(self, location: Link) -> bool: - # Determine if this url used a secure transport mechanism - parsed = urllib.parse.urlparse(str(location)) - origin_protocol, origin_host, origin_port = ( - parsed.scheme, - parsed.hostname, - parsed.port, - ) - - # The protocol to use to see if the protocol matches. - # Don't count the repository type as part of the protocol: in - # cases such as "git+ssh", only use "ssh". (I.e., Only verify against - # the last scheme.) - origin_protocol = origin_protocol.rsplit("+", 1)[-1] - - # Determine if our origin is a secure origin by looking through our - # hardcoded list of secure origins, as well as any additional ones - # configured on this PackageFinder instance. - for secure_origin in self.iter_secure_origins(): - secure_protocol, secure_host, secure_port = secure_origin - if origin_protocol != secure_protocol and secure_protocol != "*": - continue - - try: - addr = ipaddress.ip_address(origin_host or "") - network = ipaddress.ip_network(secure_host) - except ValueError: - # We don't have both a valid address or a valid network, so - # we'll check this origin against hostnames. - if ( - origin_host - and origin_host.lower() != secure_host.lower() - and secure_host != "*" - ): - continue - else: - # We have a valid address and network, so see if the address - # is contained within the network. - if addr not in network: - continue - - # Check to see if the port matches. - if ( - origin_port != secure_port - and secure_port != "*" - and secure_port is not None - ): - continue - - # If we've gotten here, then this origin matches the current - # secure origin and we should return True - return True - - # If we've gotten to this point, then the origin isn't secure and we - # will not accept it as a valid location to search. We will however - # log a warning that we are ignoring it. - logger.warning( - "The repository located at %s is not a trusted or secure host and " - "is being ignored. 
If this repository is available via HTTPS we " - "recommend you use HTTPS instead, otherwise you may silence " - "this warning and allow it anyway with '--trusted-host %s'.", - origin_host, - origin_host, - ) - - return False - - def request(self, method: str, url: str, *args: Any, **kwargs: Any) -> Response: - # Allow setting a default timeout on a session - kwargs.setdefault("timeout", self.timeout) - # Allow setting a default proxies on a session - kwargs.setdefault("proxies", self.proxies) - - # Dispatch the actual request - return super().request(method, url, *args, **kwargs) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py deleted file mode 100644 index 2cba4b0708032d62b4c1278f99e5db87ed8d90fe..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py +++ /dev/null @@ -1,39 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -from __future__ import division - -from datetime import datetime -from pip._vendor.cachecontrol.cache import BaseCache - - -class RedisCache(BaseCache): - - def __init__(self, conn): - self.conn = conn - - def get(self, key): - return self.conn.get(key) - - def set(self, key, value, expires=None): - if not expires: - self.conn.set(key, value) - elif isinstance(expires, datetime): - expires = expires - datetime.utcnow() - self.conn.setex(key, int(expires.total_seconds()), value) - else: - self.conn.setex(key, expires, value) - - def delete(self, key): - self.conn.delete(key) - - def clear(self): - """Helper for clearing all the keys in a database. Use with - caution!""" - for key in self.conn.keys(): - self.conn.delete(key) - - def close(self): - """Redis uses connection pooling, no need to close the connection.""" - pass diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/results.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/results.py deleted file mode 100644 index 00c9421d3b0362526b8f90dc01e8db73841e0b61..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/results.py +++ /dev/null @@ -1,760 +0,0 @@ -# results.py -from collections.abc import MutableMapping, Mapping, MutableSequence, Iterator -import pprint -from weakref import ref as wkref -from typing import Tuple, Any - -str_type: Tuple[type, ...] = (str, bytes) -_generator_type = type((_ for _ in ())) - - -class _ParseResultsWithOffset: - __slots__ = ["tup"] - - def __init__(self, p1, p2): - self.tup = (p1, p2) - - def __getitem__(self, i): - return self.tup[i] - - def __getstate__(self): - return self.tup - - def __setstate__(self, *args): - self.tup = args[0] - - -class ParseResults: - """Structured parse results, to provide multiple means of access to - the parsed data: - - - as a list (``len(results)``) - - by list index (``results[0], results[1]``, etc.) 
- - by attribute (``results.`` - see :class:`ParserElement.set_results_name`) - - Example:: - - integer = Word(nums) - date_str = (integer.set_results_name("year") + '/' - + integer.set_results_name("month") + '/' - + integer.set_results_name("day")) - # equivalent form: - # date_str = (integer("year") + '/' - # + integer("month") + '/' - # + integer("day")) - - # parse_string returns a ParseResults object - result = date_str.parse_string("1999/12/31") - - def test(s, fn=repr): - print("{} -> {}".format(s, fn(eval(s)))) - test("list(result)") - test("result[0]") - test("result['month']") - test("result.day") - test("'month' in result") - test("'minutes' in result") - test("result.dump()", str) - - prints:: - - list(result) -> ['1999', '/', '12', '/', '31'] - result[0] -> '1999' - result['month'] -> '12' - result.day -> '31' - 'month' in result -> True - 'minutes' in result -> False - result.dump() -> ['1999', '/', '12', '/', '31'] - - day: '31' - - month: '12' - - year: '1999' - """ - - _null_values: Tuple[Any, ...] = (None, [], "", ()) - - __slots__ = [ - "_name", - "_parent", - "_all_names", - "_modal", - "_toklist", - "_tokdict", - "__weakref__", - ] - - class List(list): - """ - Simple wrapper class to distinguish parsed list results that should be preserved - as actual Python lists, instead of being converted to :class:`ParseResults`: - - LBRACK, RBRACK = map(pp.Suppress, "[]") - element = pp.Forward() - item = ppc.integer - element_list = LBRACK + pp.delimited_list(element) + RBRACK - - # add parse actions to convert from ParseResults to actual Python collection types - def as_python_list(t): - return pp.ParseResults.List(t.as_list()) - element_list.add_parse_action(as_python_list) - - element <<= item | element_list - - element.run_tests(''' - 100 - [2,3,4] - [[2, 1],3,4] - [(2, 1),3,4] - (2,3,4) - ''', post_parse=lambda s, r: (r[0], type(r[0]))) - - prints: - - 100 - (100, ) - - [2,3,4] - ([2, 3, 4], ) - - [[2, 1],3,4] - ([[2, 1], 3, 4], ) - - (Used internally by :class:`Group` when `aslist=True`.) 
- """ - - def __new__(cls, contained=None): - if contained is None: - contained = [] - - if not isinstance(contained, list): - raise TypeError( - "{} may only be constructed with a list," - " not {}".format(cls.__name__, type(contained).__name__) - ) - - return list.__new__(cls) - - def __new__(cls, toklist=None, name=None, **kwargs): - if isinstance(toklist, ParseResults): - return toklist - self = object.__new__(cls) - self._name = None - self._parent = None - self._all_names = set() - - if toklist is None: - self._toklist = [] - elif isinstance(toklist, (list, _generator_type)): - self._toklist = ( - [toklist[:]] - if isinstance(toklist, ParseResults.List) - else list(toklist) - ) - else: - self._toklist = [toklist] - self._tokdict = dict() - return self - - # Performance tuning: we construct a *lot* of these, so keep this - # constructor as small and fast as possible - def __init__( - self, toklist=None, name=None, asList=True, modal=True, isinstance=isinstance - ): - self._modal = modal - if name is not None and name != "": - if isinstance(name, int): - name = str(name) - if not modal: - self._all_names = {name} - self._name = name - if toklist not in self._null_values: - if isinstance(toklist, (str_type, type)): - toklist = [toklist] - if asList: - if isinstance(toklist, ParseResults): - self[name] = _ParseResultsWithOffset( - ParseResults(toklist._toklist), 0 - ) - else: - self[name] = _ParseResultsWithOffset( - ParseResults(toklist[0]), 0 - ) - self[name]._name = name - else: - try: - self[name] = toklist[0] - except (KeyError, TypeError, IndexError): - if toklist is not self: - self[name] = toklist - else: - self._name = name - - def __getitem__(self, i): - if isinstance(i, (int, slice)): - return self._toklist[i] - else: - if i not in self._all_names: - return self._tokdict[i][-1][0] - else: - return ParseResults([v[0] for v in self._tokdict[i]]) - - def __setitem__(self, k, v, isinstance=isinstance): - if isinstance(v, _ParseResultsWithOffset): - self._tokdict[k] = self._tokdict.get(k, list()) + [v] - sub = v[0] - elif isinstance(k, (int, slice)): - self._toklist[k] = v - sub = v - else: - self._tokdict[k] = self._tokdict.get(k, list()) + [ - _ParseResultsWithOffset(v, 0) - ] - sub = v - if isinstance(sub, ParseResults): - sub._parent = wkref(self) - - def __delitem__(self, i): - if isinstance(i, (int, slice)): - mylen = len(self._toklist) - del self._toklist[i] - - # convert int to slice - if isinstance(i, int): - if i < 0: - i += mylen - i = slice(i, i + 1) - # get removed indices - removed = list(range(*i.indices(mylen))) - removed.reverse() - # fixup indices in token dictionary - for name, occurrences in self._tokdict.items(): - for j in removed: - for k, (value, position) in enumerate(occurrences): - occurrences[k] = _ParseResultsWithOffset( - value, position - (position > j) - ) - else: - del self._tokdict[i] - - def __contains__(self, k) -> bool: - return k in self._tokdict - - def __len__(self) -> int: - return len(self._toklist) - - def __bool__(self) -> bool: - return not not (self._toklist or self._tokdict) - - def __iter__(self) -> Iterator: - return iter(self._toklist) - - def __reversed__(self) -> Iterator: - return iter(self._toklist[::-1]) - - def keys(self): - return iter(self._tokdict) - - def values(self): - return (self[k] for k in self.keys()) - - def items(self): - return ((k, self[k]) for k in self.keys()) - - def haskeys(self) -> bool: - """ - Since ``keys()`` returns an iterator, this method is helpful in bypassing - code that looks for the existence of 
any defined results names.""" - return bool(self._tokdict) - - def pop(self, *args, **kwargs): - """ - Removes and returns item at specified index (default= ``last``). - Supports both ``list`` and ``dict`` semantics for ``pop()``. If - passed no argument or an integer argument, it will use ``list`` - semantics and pop tokens from the list of parsed tokens. If passed - a non-integer argument (most likely a string), it will use ``dict`` - semantics and pop the corresponding value from any defined results - names. A second default return value argument is supported, just as in - ``dict.pop()``. - - Example:: - - numlist = Word(nums)[...] - print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321'] - - def remove_first(tokens): - tokens.pop(0) - numlist.add_parse_action(remove_first) - print(numlist.parse_string("0 123 321")) # -> ['123', '321'] - - label = Word(alphas) - patt = label("LABEL") + Word(nums)[1, ...] - print(patt.parse_string("AAB 123 321").dump()) - - # Use pop() in a parse action to remove named result (note that corresponding value is not - # removed from list form of results) - def remove_LABEL(tokens): - tokens.pop("LABEL") - return tokens - patt.add_parse_action(remove_LABEL) - print(patt.parse_string("AAB 123 321").dump()) - - prints:: - - ['AAB', '123', '321'] - - LABEL: 'AAB' - - ['AAB', '123', '321'] - """ - if not args: - args = [-1] - for k, v in kwargs.items(): - if k == "default": - args = (args[0], v) - else: - raise TypeError( - "pop() got an unexpected keyword argument {!r}".format(k) - ) - if isinstance(args[0], int) or len(args) == 1 or args[0] in self: - index = args[0] - ret = self[index] - del self[index] - return ret - else: - defaultvalue = args[1] - return defaultvalue - - def get(self, key, default_value=None): - """ - Returns named result matching the given key, or if there is no - such name, then returns the given ``default_value`` or ``None`` if no - ``default_value`` is specified. - - Similar to ``dict.get()``. - - Example:: - - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parse_string("1999/12/31") - print(result.get("year")) # -> '1999' - print(result.get("hour", "not specified")) # -> 'not specified' - print(result.get("hour")) # -> None - """ - if key in self: - return self[key] - else: - return default_value - - def insert(self, index, ins_string): - """ - Inserts new element at location index in the list of parsed tokens. - - Similar to ``list.insert()``. - - Example:: - - numlist = Word(nums)[...] - print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321'] - - # use a parse action to insert the parse location in the front of the parsed results - def insert_locn(locn, tokens): - tokens.insert(0, locn) - numlist.add_parse_action(insert_locn) - print(numlist.parse_string("0 123 321")) # -> [0, '0', '123', '321'] - """ - self._toklist.insert(index, ins_string) - # fixup indices in token dictionary - for name, occurrences in self._tokdict.items(): - for k, (value, position) in enumerate(occurrences): - occurrences[k] = _ParseResultsWithOffset( - value, position + (position > index) - ) - - def append(self, item): - """ - Add single element to end of ``ParseResults`` list of elements. - - Example:: - - numlist = Word(nums)[...] 
- print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321'] - - # use a parse action to compute the sum of the parsed integers, and add it to the end - def append_sum(tokens): - tokens.append(sum(map(int, tokens))) - numlist.add_parse_action(append_sum) - print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321', 444] - """ - self._toklist.append(item) - - def extend(self, itemseq): - """ - Add sequence of elements to end of ``ParseResults`` list of elements. - - Example:: - - patt = Word(alphas)[1, ...] - - # use a parse action to append the reverse of the matched strings, to make a palindrome - def make_palindrome(tokens): - tokens.extend(reversed([t[::-1] for t in tokens])) - return ''.join(tokens) - patt.add_parse_action(make_palindrome) - print(patt.parse_string("lskdj sdlkjf lksd")) # -> 'lskdjsdlkjflksddsklfjkldsjdksl' - """ - if isinstance(itemseq, ParseResults): - self.__iadd__(itemseq) - else: - self._toklist.extend(itemseq) - - def clear(self): - """ - Clear all elements and results names. - """ - del self._toklist[:] - self._tokdict.clear() - - def __getattr__(self, name): - try: - return self[name] - except KeyError: - if name.startswith("__"): - raise AttributeError(name) - return "" - - def __add__(self, other) -> "ParseResults": - ret = self.copy() - ret += other - return ret - - def __iadd__(self, other) -> "ParseResults": - if other._tokdict: - offset = len(self._toklist) - addoffset = lambda a: offset if a < 0 else a + offset - otheritems = other._tokdict.items() - otherdictitems = [ - (k, _ParseResultsWithOffset(v[0], addoffset(v[1]))) - for k, vlist in otheritems - for v in vlist - ] - for k, v in otherdictitems: - self[k] = v - if isinstance(v[0], ParseResults): - v[0]._parent = wkref(self) - - self._toklist += other._toklist - self._all_names |= other._all_names - return self - - def __radd__(self, other) -> "ParseResults": - if isinstance(other, int) and other == 0: - # useful for merging many ParseResults using sum() builtin - return self.copy() - else: - # this may raise a TypeError - so be it - return other + self - - def __repr__(self) -> str: - return "{}({!r}, {})".format(type(self).__name__, self._toklist, self.as_dict()) - - def __str__(self) -> str: - return ( - "[" - + ", ".join( - [ - str(i) if isinstance(i, ParseResults) else repr(i) - for i in self._toklist - ] - ) - + "]" - ) - - def _asStringList(self, sep=""): - out = [] - for item in self._toklist: - if out and sep: - out.append(sep) - if isinstance(item, ParseResults): - out += item._asStringList() - else: - out.append(str(item)) - return out - - def as_list(self) -> list: - """ - Returns the parse results as a nested list of matching tokens, all converted to strings. - - Example:: - - patt = Word(alphas)[1, ...] - result = patt.parse_string("sldkj lsdkj sldkj") - # even though the result prints in string-like form, it is actually a pyparsing ParseResults - print(type(result), result) # -> ['sldkj', 'lsdkj', 'sldkj'] - - # Use as_list() to create an actual list - result_list = result.as_list() - print(type(result_list), result_list) # -> ['sldkj', 'lsdkj', 'sldkj'] - """ - return [ - res.as_list() if isinstance(res, ParseResults) else res - for res in self._toklist - ] - - def as_dict(self) -> dict: - """ - Returns the named parse results as a nested dictionary. 
- - Example:: - - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parse_string('12/31/1999') - print(type(result), repr(result)) # -> (['12', '/', '31', '/', '1999'], {'day': [('1999', 4)], 'year': [('12', 0)], 'month': [('31', 2)]}) - - result_dict = result.as_dict() - print(type(result_dict), repr(result_dict)) # -> {'day': '1999', 'year': '12', 'month': '31'} - - # even though a ParseResults supports dict-like access, sometime you just need to have a dict - import json - print(json.dumps(result)) # -> Exception: TypeError: ... is not JSON serializable - print(json.dumps(result.as_dict())) # -> {"month": "31", "day": "1999", "year": "12"} - """ - - def to_item(obj): - if isinstance(obj, ParseResults): - return obj.as_dict() if obj.haskeys() else [to_item(v) for v in obj] - else: - return obj - - return dict((k, to_item(v)) for k, v in self.items()) - - def copy(self) -> "ParseResults": - """ - Returns a new copy of a :class:`ParseResults` object. - """ - ret = ParseResults(self._toklist) - ret._tokdict = self._tokdict.copy() - ret._parent = self._parent - ret._all_names |= self._all_names - ret._name = self._name - return ret - - def get_name(self): - r""" - Returns the results name for this token expression. Useful when several - different expressions might match at a particular location. - - Example:: - - integer = Word(nums) - ssn_expr = Regex(r"\d\d\d-\d\d-\d\d\d\d") - house_number_expr = Suppress('#') + Word(nums, alphanums) - user_data = (Group(house_number_expr)("house_number") - | Group(ssn_expr)("ssn") - | Group(integer)("age")) - user_info = user_data[1, ...] - - result = user_info.parse_string("22 111-22-3333 #221B") - for item in result: - print(item.get_name(), ':', item[0]) - - prints:: - - age : 22 - ssn : 111-22-3333 - house_number : 221B - """ - if self._name: - return self._name - elif self._parent: - par = self._parent() - - def find_in_parent(sub): - return next( - ( - k - for k, vlist in par._tokdict.items() - for v, loc in vlist - if sub is v - ), - None, - ) - - return find_in_parent(self) if par else None - elif ( - len(self) == 1 - and len(self._tokdict) == 1 - and next(iter(self._tokdict.values()))[0][1] in (0, -1) - ): - return next(iter(self._tokdict.keys())) - else: - return None - - def dump(self, indent="", full=True, include_list=True, _depth=0) -> str: - """ - Diagnostic method for listing out the contents of - a :class:`ParseResults`. Accepts an optional ``indent`` argument so - that this string can be embedded in a nested display of other data. 
- - Example:: - - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parse_string('1999/12/31') - print(result.dump()) - - prints:: - - ['1999', '/', '12', '/', '31'] - - day: '31' - - month: '12' - - year: '1999' - """ - out = [] - NL = "\n" - out.append(indent + str(self.as_list()) if include_list else "") - - if full: - if self.haskeys(): - items = sorted((str(k), v) for k, v in self.items()) - for k, v in items: - if out: - out.append(NL) - out.append("{}{}- {}: ".format(indent, (" " * _depth), k)) - if isinstance(v, ParseResults): - if v: - out.append( - v.dump( - indent=indent, - full=full, - include_list=include_list, - _depth=_depth + 1, - ) - ) - else: - out.append(str(v)) - else: - out.append(repr(v)) - if any(isinstance(vv, ParseResults) for vv in self): - v = self - for i, vv in enumerate(v): - if isinstance(vv, ParseResults): - out.append( - "\n{}{}[{}]:\n{}{}{}".format( - indent, - (" " * (_depth)), - i, - indent, - (" " * (_depth + 1)), - vv.dump( - indent=indent, - full=full, - include_list=include_list, - _depth=_depth + 1, - ), - ) - ) - else: - out.append( - "\n%s%s[%d]:\n%s%s%s" - % ( - indent, - (" " * (_depth)), - i, - indent, - (" " * (_depth + 1)), - str(vv), - ) - ) - - return "".join(out) - - def pprint(self, *args, **kwargs): - """ - Pretty-printer for parsed results as a list, using the - `pprint `_ module. - Accepts additional positional or keyword args as defined for - `pprint.pprint `_ . - - Example:: - - ident = Word(alphas, alphanums) - num = Word(nums) - func = Forward() - term = ident | num | Group('(' + func + ')') - func <<= ident + Group(Optional(delimited_list(term))) - result = func.parse_string("fna a,b,(fnb c,d,200),100") - result.pprint(width=40) - - prints:: - - ['fna', - ['a', - 'b', - ['(', 'fnb', ['c', 'd', '200'], ')'], - '100']] - """ - pprint.pprint(self.as_list(), *args, **kwargs) - - # add support for pickle protocol - def __getstate__(self): - return ( - self._toklist, - ( - self._tokdict.copy(), - self._parent is not None and self._parent() or None, - self._all_names, - self._name, - ), - ) - - def __setstate__(self, state): - self._toklist, (self._tokdict, par, inAccumNames, self._name) = state - self._all_names = set(inAccumNames) - if par is not None: - self._parent = wkref(par) - else: - self._parent = None - - def __getnewargs__(self): - return self._toklist, self._name - - def __dir__(self): - return dir(type(self)) + list(self.keys()) - - @classmethod - def from_dict(cls, other, name=None) -> "ParseResults": - """ - Helper classmethod to construct a ``ParseResults`` from a ``dict``, preserving the - name-value relations as results names. If an optional ``name`` argument is - given, a nested ``ParseResults`` will be returned. 
- """ - - def is_iterable(obj): - try: - iter(obj) - except Exception: - return False - else: - return not isinstance(obj, str_type) - - ret = cls([]) - for k, v in other.items(): - if isinstance(v, Mapping): - ret += cls.from_dict(v, name=k) - else: - ret += cls([v], name=k, asList=is_iterable(v)) - if name is not None: - ret = cls([ret], name=name) - return ret - - asList = as_list - asDict = as_dict - getName = get_name - - -MutableMapping.register(ParseResults) -MutableSequence.register(ParseResults) diff --git a/spaces/Awesimo/jojogan/e4e/utils/common.py b/spaces/Awesimo/jojogan/e4e/utils/common.py deleted file mode 100644 index b19e18ddcb78b06678fa18e4a76da44fc511b789..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/utils/common.py +++ /dev/null @@ -1,55 +0,0 @@ -from PIL import Image -import matplotlib.pyplot as plt - - -# Log images -def log_input_image(x, opts): - return tensor2im(x) - - -def tensor2im(var): - # var shape: (3, H, W) - var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy() - var = ((var + 1) / 2) - var[var < 0] = 0 - var[var > 1] = 1 - var = var * 255 - return Image.fromarray(var.astype('uint8')) - - -def vis_faces(log_hooks): - display_count = len(log_hooks) - fig = plt.figure(figsize=(8, 4 * display_count)) - gs = fig.add_gridspec(display_count, 3) - for i in range(display_count): - hooks_dict = log_hooks[i] - fig.add_subplot(gs[i, 0]) - if 'diff_input' in hooks_dict: - vis_faces_with_id(hooks_dict, fig, gs, i) - else: - vis_faces_no_id(hooks_dict, fig, gs, i) - plt.tight_layout() - return fig - - -def vis_faces_with_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face']) - plt.title('Input\nOut Sim={:.2f}'.format(float(hooks_dict['diff_input']))) - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target\nIn={:.2f}, Out={:.2f}'.format(float(hooks_dict['diff_views']), - float(hooks_dict['diff_target']))) - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output\n Target Sim={:.2f}'.format(float(hooks_dict['diff_target']))) - - -def vis_faces_no_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face'], cmap="gray") - plt.title('Input') - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target') - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output') diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/layers/test_roi_align_rotated.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/layers/test_roi_align_rotated.py deleted file mode 100644 index 7323d7d5a86816f337571221313c428238c439f4..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/layers/test_roi_align_rotated.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import unittest -import cv2 -import torch -from torch.autograd import Variable, gradcheck - -from detectron2.layers.roi_align import ROIAlign -from detectron2.layers.roi_align_rotated import ROIAlignRotated - -logger = logging.getLogger(__name__) - - -class ROIAlignRotatedTest(unittest.TestCase): - def _box_to_rotated_box(self, box, angle): - return [ - (box[0] + box[2]) / 2.0, - (box[1] + box[3]) / 2.0, - box[2] - box[0], - box[3] - box[1], - angle, - ] - - def _rot90(self, img, num): - num = num % 4 # note: -1 % 4 == 3 - for _ in range(num): - img = img.transpose(0, 1).flip(0) - return img - - def test_forward_output_0_90_180_270(self): - for i in range(4): - # i = 0, 1, 2, 3 corresponding to 0, 90, 180, 270 degrees - img = torch.arange(25, dtype=torch.float32).reshape(5, 5) - """ - 0 1 2 3 4 - 5 6 7 8 9 - 10 11 12 13 14 - 15 16 17 18 19 - 20 21 22 23 24 - """ - box = [1, 1, 3, 3] - rotated_box = self._box_to_rotated_box(box=box, angle=90 * i) - - result = self._simple_roi_align_rotated(img=img, box=rotated_box, resolution=(4, 4)) - - # Here's an explanation for 0 degree case: - # point 0 in the original input lies at [0.5, 0.5] - # (the center of bin [0, 1] x [0, 1]) - # point 1 in the original input lies at [1.5, 0.5], etc. - # since the resolution is (4, 4) that divides [1, 3] x [1, 3] - # into 4 x 4 equal bins, - # the top-left bin is [1, 1.5] x [1, 1.5], and its center - # (1.25, 1.25) lies at the 3/4 position - # between point 0 and point 1, point 5 and point 6, - # point 0 and point 5, point 1 and point 6, so it can be calculated as - # 0.25*(0*0.25+1*0.75)+(5*0.25+6*0.75)*0.75 = 4.5 - result_expected = torch.tensor( - [ - [4.5, 5.0, 5.5, 6.0], - [7.0, 7.5, 8.0, 8.5], - [9.5, 10.0, 10.5, 11.0], - [12.0, 12.5, 13.0, 13.5], - ] - ) - # This is also an upsampled version of [[6, 7], [11, 12]] - - # When the box is rotated by 90 degrees CCW, - # the result would be rotated by 90 degrees CW, thus it's -i here - result_expected = self._rot90(result_expected, -i) - - assert torch.allclose(result, result_expected) - - def test_resize(self): - H, W = 30, 30 - input = torch.rand(H, W) * 100 - box = [10, 10, 20, 20] - rotated_box = self._box_to_rotated_box(box, angle=0) - output = self._simple_roi_align_rotated(img=input, box=rotated_box, resolution=(5, 5)) - - input2x = cv2.resize(input.numpy(), (W // 2, H // 2), interpolation=cv2.INTER_LINEAR) - input2x = torch.from_numpy(input2x) - box2x = [x / 2 for x in box] - rotated_box2x = self._box_to_rotated_box(box2x, angle=0) - output2x = self._simple_roi_align_rotated(img=input2x, box=rotated_box2x, resolution=(5, 5)) - assert torch.allclose(output2x, output) - - def _simple_roi_align_rotated(self, img, box, resolution): - """ - RoiAlignRotated with scale 1.0 and 0 sample ratio. 
- """ - op = ROIAlignRotated(output_size=resolution, spatial_scale=1.0, sampling_ratio=0) - input = img[None, None, :, :] - - rois = [0] + list(box) - rois = torch.tensor(rois, dtype=torch.float32)[None, :] - result_cpu = op.forward(input, rois) - if torch.cuda.is_available(): - result_cuda = op.forward(input.cuda(), rois.cuda()) - assert torch.allclose(result_cpu, result_cuda.cpu()) - return result_cpu[0, 0] - - def test_empty_box(self): - img = torch.rand(5, 5) - out = self._simple_roi_align_rotated(img, [2, 3, 0, 0, 0], (7, 7)) - self.assertTrue((out == 0).all()) - - def test_roi_align_rotated_gradcheck_cpu(self): - dtype = torch.float64 - device = torch.device("cpu") - roi_align_rotated_op = ROIAlignRotated( - output_size=(5, 5), spatial_scale=0.5, sampling_ratio=1 - ).to(dtype=dtype, device=device) - x = torch.rand(1, 1, 10, 10, dtype=dtype, device=device, requires_grad=True) - # roi format is (batch index, x_center, y_center, width, height, angle) - rois = torch.tensor( - [[0, 4.5, 4.5, 9, 9, 0], [0, 2, 7, 4, 4, 0], [0, 7, 7, 4, 4, 0]], - dtype=dtype, - device=device, - ) - - def func(input): - return roi_align_rotated_op(input, rois) - - assert gradcheck(func, (x,)), "gradcheck failed for RoIAlignRotated CPU" - assert gradcheck(func, (x.transpose(2, 3),)), "gradcheck failed for RoIAlignRotated CPU" - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_roi_align_rotated_gradient_cuda(self): - """ - Compute gradients for ROIAlignRotated with multiple bounding boxes on the GPU, - and compare the result with ROIAlign - """ - # torch.manual_seed(123) - dtype = torch.float64 - device = torch.device("cuda") - pool_h, pool_w = (5, 5) - - roi_align = ROIAlign(output_size=(pool_h, pool_w), spatial_scale=1, sampling_ratio=2).to( - device=device - ) - - roi_align_rotated = ROIAlignRotated( - output_size=(pool_h, pool_w), spatial_scale=1, sampling_ratio=2 - ).to(device=device) - - x = torch.rand(1, 1, 10, 10, dtype=dtype, device=device, requires_grad=True) - # x_rotated = x.clone() won't work (will lead to grad_fun=CloneBackward)! - x_rotated = Variable(x.data.clone(), requires_grad=True) - - # roi_rotated format is (batch index, x_center, y_center, width, height, angle) - rois_rotated = torch.tensor( - [[0, 4.5, 4.5, 9, 9, 0], [0, 2, 7, 4, 4, 0], [0, 7, 7, 4, 4, 0]], - dtype=dtype, - device=device, - ) - - y_rotated = roi_align_rotated(x_rotated, rois_rotated) - s_rotated = y_rotated.sum() - s_rotated.backward() - - # roi format is (batch index, x1, y1, x2, y2) - rois = torch.tensor( - [[0, 0, 0, 9, 9], [0, 0, 5, 4, 9], [0, 5, 5, 9, 9]], dtype=dtype, device=device - ) - - y = roi_align(x, rois) - s = y.sum() - s.backward() - - assert torch.allclose( - x.grad, x_rotated.grad - ), "gradients for ROIAlign and ROIAlignRotated mismatch on CUDA" - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escprober.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escprober.py deleted file mode 100644 index fd713830d36cabc6a0fb4ab4e8cf426a84decdc6..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escprober.py +++ /dev/null @@ -1,102 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. 
-# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from typing import Optional, Union - -from .charsetprober import CharSetProber -from .codingstatemachine import CodingStateMachine -from .enums import LanguageFilter, MachineState, ProbingState -from .escsm import ( - HZ_SM_MODEL, - ISO2022CN_SM_MODEL, - ISO2022JP_SM_MODEL, - ISO2022KR_SM_MODEL, -) - - -class EscCharSetProber(CharSetProber): - """ - This CharSetProber uses a "code scheme" approach for detecting encodings, - whereby easily recognizable escape or shift sequences are relied on to - identify these encodings. - """ - - def __init__(self, lang_filter: LanguageFilter = LanguageFilter.NONE) -> None: - super().__init__(lang_filter=lang_filter) - self.coding_sm = [] - if self.lang_filter & LanguageFilter.CHINESE_SIMPLIFIED: - self.coding_sm.append(CodingStateMachine(HZ_SM_MODEL)) - self.coding_sm.append(CodingStateMachine(ISO2022CN_SM_MODEL)) - if self.lang_filter & LanguageFilter.JAPANESE: - self.coding_sm.append(CodingStateMachine(ISO2022JP_SM_MODEL)) - if self.lang_filter & LanguageFilter.KOREAN: - self.coding_sm.append(CodingStateMachine(ISO2022KR_SM_MODEL)) - self.active_sm_count = 0 - self._detected_charset: Optional[str] = None - self._detected_language: Optional[str] = None - self._state = ProbingState.DETECTING - self.reset() - - def reset(self) -> None: - super().reset() - for coding_sm in self.coding_sm: - coding_sm.active = True - coding_sm.reset() - self.active_sm_count = len(self.coding_sm) - self._detected_charset = None - self._detected_language = None - - @property - def charset_name(self) -> Optional[str]: - return self._detected_charset - - @property - def language(self) -> Optional[str]: - return self._detected_language - - def get_confidence(self) -> float: - return 0.99 if self._detected_charset else 0.00 - - def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState: - for c in byte_str: - for coding_sm in self.coding_sm: - if not coding_sm.active: - continue - coding_state = coding_sm.next_state(c) - if coding_state == MachineState.ERROR: - coding_sm.active = False - self.active_sm_count -= 1 - if self.active_sm_count <= 0: - self._state = ProbingState.NOT_ME - return self.state - elif coding_state == MachineState.ITS_ME: - self._state = ProbingState.FOUND_IT - self._detected_charset = coding_sm.get_coding_state_machine() - self._detected_language = coding_sm.language - return self.state - - return self.state diff --git a/spaces/CVPR/LIVE/pydiffvg/render_pytorch.py b/spaces/CVPR/LIVE/pydiffvg/render_pytorch.py deleted file mode 100644 index 
b776ce67a0cdb587c8bd558fe5060a6d96e51e3c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pydiffvg/render_pytorch.py +++ /dev/null @@ -1,870 +0,0 @@ -import torch -import diffvg -import pydiffvg -import time -from enum import IntEnum -import warnings - -print_timing = False - -def set_print_timing(val): - global print_timing - print_timing=val - -class OutputType(IntEnum): - color = 1 - sdf = 2 - -class RenderFunction(torch.autograd.Function): - """ - The PyTorch interface of diffvg. - """ - @staticmethod - def serialize_scene(canvas_width, - canvas_height, - shapes, - shape_groups, - filter = pydiffvg.PixelFilter(type = diffvg.FilterType.box, - radius = torch.tensor(0.5)), - output_type = OutputType.color, - use_prefiltering = False, - eval_positions = torch.tensor([])): - """ - Given a list of shapes, convert them to a linear list of argument, - so that we can use it in PyTorch. - """ - num_shapes = len(shapes) - num_shape_groups = len(shape_groups) - args = [] - args.append(canvas_width) - args.append(canvas_height) - args.append(num_shapes) - args.append(num_shape_groups) - args.append(output_type) - args.append(use_prefiltering) - args.append(eval_positions.to(pydiffvg.get_device())) - for shape in shapes: - use_thickness = False - if isinstance(shape, pydiffvg.Circle): - assert(shape.center.is_contiguous()) - args.append(diffvg.ShapeType.circle) - args.append(shape.radius.cpu()) - args.append(shape.center.cpu()) - elif isinstance(shape, pydiffvg.Ellipse): - assert(shape.radius.is_contiguous()) - assert(shape.center.is_contiguous()) - args.append(diffvg.ShapeType.ellipse) - args.append(shape.radius.cpu()) - args.append(shape.center.cpu()) - elif isinstance(shape, pydiffvg.Path): - assert(shape.num_control_points.is_contiguous()) - assert(shape.points.is_contiguous()) - assert(shape.points.shape[1] == 2) - assert(torch.isfinite(shape.points).all()) - args.append(diffvg.ShapeType.path) - args.append(shape.num_control_points.to(torch.int32).cpu()) - args.append(shape.points.cpu()) - if len(shape.stroke_width.shape) > 0 and shape.stroke_width.shape[0] > 1: - assert(torch.isfinite(shape.stroke_width).all()) - use_thickness = True - args.append(shape.stroke_width.cpu()) - else: - args.append(None) - args.append(shape.is_closed) - args.append(shape.use_distance_approx) - elif isinstance(shape, pydiffvg.Polygon): - assert(shape.points.is_contiguous()) - assert(shape.points.shape[1] == 2) - args.append(diffvg.ShapeType.path) - if shape.is_closed: - args.append(torch.zeros(shape.points.shape[0], dtype = torch.int32)) - else: - args.append(torch.zeros(shape.points.shape[0] - 1, dtype = torch.int32)) - args.append(shape.points.cpu()) - args.append(None) - args.append(shape.is_closed) - args.append(False) # use_distance_approx - elif isinstance(shape, pydiffvg.Rect): - assert(shape.p_min.is_contiguous()) - assert(shape.p_max.is_contiguous()) - args.append(diffvg.ShapeType.rect) - args.append(shape.p_min.cpu()) - args.append(shape.p_max.cpu()) - else: - assert(False) - if use_thickness: - args.append(torch.tensor(0.0)) - else: - args.append(shape.stroke_width.cpu()) - - for shape_group in shape_groups: - assert(shape_group.shape_ids.is_contiguous()) - args.append(shape_group.shape_ids.to(torch.int32).cpu()) - # Fill color - if shape_group.fill_color is None: - args.append(None) - elif isinstance(shape_group.fill_color, torch.Tensor): - assert(shape_group.fill_color.is_contiguous()) - args.append(diffvg.ColorType.constant) - args.append(shape_group.fill_color.cpu()) - elif 
isinstance(shape_group.fill_color, pydiffvg.LinearGradient): - assert(shape_group.fill_color.begin.is_contiguous()) - assert(shape_group.fill_color.end.is_contiguous()) - assert(shape_group.fill_color.offsets.is_contiguous()) - assert(shape_group.fill_color.stop_colors.is_contiguous()) - args.append(diffvg.ColorType.linear_gradient) - args.append(shape_group.fill_color.begin.cpu()) - args.append(shape_group.fill_color.end.cpu()) - args.append(shape_group.fill_color.offsets.cpu()) - args.append(shape_group.fill_color.stop_colors.cpu()) - elif isinstance(shape_group.fill_color, pydiffvg.RadialGradient): - assert(shape_group.fill_color.center.is_contiguous()) - assert(shape_group.fill_color.radius.is_contiguous()) - assert(shape_group.fill_color.offsets.is_contiguous()) - assert(shape_group.fill_color.stop_colors.is_contiguous()) - args.append(diffvg.ColorType.radial_gradient) - args.append(shape_group.fill_color.center.cpu()) - args.append(shape_group.fill_color.radius.cpu()) - args.append(shape_group.fill_color.offsets.cpu()) - args.append(shape_group.fill_color.stop_colors.cpu()) - - if shape_group.fill_color is not None: - # go through the underlying shapes and check if they are all closed - for shape_id in shape_group.shape_ids: - if isinstance(shapes[shape_id], pydiffvg.Path): - if not shapes[shape_id].is_closed: - warnings.warn("Detected non-closed paths with fill color. This might causes unexpected results.", Warning) - - # Stroke color - if shape_group.stroke_color is None: - args.append(None) - elif isinstance(shape_group.stroke_color, torch.Tensor): - assert(shape_group.stroke_color.is_contiguous()) - args.append(diffvg.ColorType.constant) - args.append(shape_group.stroke_color.cpu()) - elif isinstance(shape_group.stroke_color, pydiffvg.LinearGradient): - assert(shape_group.stroke_color.begin.is_contiguous()) - assert(shape_group.stroke_color.end.is_contiguous()) - assert(shape_group.stroke_color.offsets.is_contiguous()) - assert(shape_group.stroke_color.stop_colors.is_contiguous()) - assert(torch.isfinite(shape_group.stroke_color.stop_colors).all()) - args.append(diffvg.ColorType.linear_gradient) - args.append(shape_group.stroke_color.begin.cpu()) - args.append(shape_group.stroke_color.end.cpu()) - args.append(shape_group.stroke_color.offsets.cpu()) - args.append(shape_group.stroke_color.stop_colors.cpu()) - elif isinstance(shape_group.stroke_color, pydiffvg.RadialGradient): - assert(shape_group.stroke_color.center.is_contiguous()) - assert(shape_group.stroke_color.radius.is_contiguous()) - assert(shape_group.stroke_color.offsets.is_contiguous()) - assert(shape_group.stroke_color.stop_colors.is_contiguous()) - assert(torch.isfinite(shape_group.stroke_color.stop_colors).all()) - args.append(diffvg.ColorType.radial_gradient) - args.append(shape_group.stroke_color.center.cpu()) - args.append(shape_group.stroke_color.radius.cpu()) - args.append(shape_group.stroke_color.offsets.cpu()) - args.append(shape_group.stroke_color.stop_colors.cpu()) - args.append(shape_group.use_even_odd_rule) - # Transformation - args.append(shape_group.shape_to_canvas.contiguous().cpu()) - args.append(filter.type) - args.append(filter.radius.cpu()) - return args - - @staticmethod - def forward(ctx, - width, - height, - num_samples_x, - num_samples_y, - seed, - background_image, - *args): - """ - Forward rendering pass. 
- """ - # Unpack arguments - current_index = 0 - canvas_width = args[current_index] - current_index += 1 - canvas_height = args[current_index] - current_index += 1 - num_shapes = args[current_index] - current_index += 1 - num_shape_groups = args[current_index] - current_index += 1 - output_type = args[current_index] - current_index += 1 - use_prefiltering = args[current_index] - current_index += 1 - eval_positions = args[current_index] - current_index += 1 - shapes = [] - shape_groups = [] - shape_contents = [] # Important to avoid GC deleting the shapes - color_contents = [] # Same as above - for shape_id in range(num_shapes): - shape_type = args[current_index] - current_index += 1 - if shape_type == diffvg.ShapeType.circle: - radius = args[current_index] - current_index += 1 - center = args[current_index] - current_index += 1 - shape = diffvg.Circle(radius, diffvg.Vector2f(center[0], center[1])) - elif shape_type == diffvg.ShapeType.ellipse: - radius = args[current_index] - current_index += 1 - center = args[current_index] - current_index += 1 - shape = diffvg.Ellipse(diffvg.Vector2f(radius[0], radius[1]), - diffvg.Vector2f(center[0], center[1])) - elif shape_type == diffvg.ShapeType.path: - num_control_points = args[current_index] - current_index += 1 - points = args[current_index] - current_index += 1 - thickness = args[current_index] - current_index += 1 - is_closed = args[current_index] - current_index += 1 - use_distance_approx = args[current_index] - current_index += 1 - shape = diffvg.Path(diffvg.int_ptr(num_control_points.data_ptr()), - diffvg.float_ptr(points.data_ptr()), - diffvg.float_ptr(thickness.data_ptr() if thickness is not None else 0), - num_control_points.shape[0], - points.shape[0], - is_closed, - use_distance_approx) - elif shape_type == diffvg.ShapeType.rect: - p_min = args[current_index] - current_index += 1 - p_max = args[current_index] - current_index += 1 - shape = diffvg.Rect(diffvg.Vector2f(p_min[0], p_min[1]), - diffvg.Vector2f(p_max[0], p_max[1])) - else: - assert(False) - stroke_width = args[current_index] - current_index += 1 - shapes.append(diffvg.Shape(\ - shape_type, shape.get_ptr(), stroke_width.item())) - shape_contents.append(shape) - - for shape_group_id in range(num_shape_groups): - shape_ids = args[current_index] - current_index += 1 - fill_color_type = args[current_index] - current_index += 1 - if fill_color_type == diffvg.ColorType.constant: - color = args[current_index] - current_index += 1 - fill_color = diffvg.Constant(\ - diffvg.Vector4f(color[0], color[1], color[2], color[3])) - elif fill_color_type == diffvg.ColorType.linear_gradient: - beg = args[current_index] - current_index += 1 - end = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - fill_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]), - diffvg.Vector2f(end[0], end[1]), - offsets.shape[0], - diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - elif fill_color_type == diffvg.ColorType.radial_gradient: - center = args[current_index] - current_index += 1 - radius = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - fill_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]), - diffvg.Vector2f(radius[0], radius[1]), - 
offsets.shape[0], - diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - elif fill_color_type is None: - fill_color = None - else: - assert(False) - stroke_color_type = args[current_index] - current_index += 1 - if stroke_color_type == diffvg.ColorType.constant: - color = args[current_index] - current_index += 1 - stroke_color = diffvg.Constant(\ - diffvg.Vector4f(color[0], color[1], color[2], color[3])) - elif stroke_color_type == diffvg.ColorType.linear_gradient: - beg = args[current_index] - current_index += 1 - end = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - stroke_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]), - diffvg.Vector2f(end[0], end[1]), - offsets.shape[0], - diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - elif stroke_color_type == diffvg.ColorType.radial_gradient: - center = args[current_index] - current_index += 1 - radius = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - stroke_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]), - diffvg.Vector2f(radius[0], radius[1]), - offsets.shape[0], - diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - elif stroke_color_type is None: - stroke_color = None - else: - assert(False) - use_even_odd_rule = args[current_index] - current_index += 1 - shape_to_canvas = args[current_index] - current_index += 1 - - if fill_color is not None: - color_contents.append(fill_color) - if stroke_color is not None: - color_contents.append(stroke_color) - shape_groups.append(diffvg.ShapeGroup(\ - diffvg.int_ptr(shape_ids.data_ptr()), - shape_ids.shape[0], - diffvg.ColorType.constant if fill_color_type is None else fill_color_type, - diffvg.void_ptr(0) if fill_color is None else fill_color.get_ptr(), - diffvg.ColorType.constant if stroke_color_type is None else stroke_color_type, - diffvg.void_ptr(0) if stroke_color is None else stroke_color.get_ptr(), - use_even_odd_rule, - diffvg.float_ptr(shape_to_canvas.data_ptr()))) - - filter_type = args[current_index] - current_index += 1 - filter_radius = args[current_index] - current_index += 1 - filt = diffvg.Filter(filter_type, filter_radius) - - start = time.time() - scene = diffvg.Scene(canvas_width, canvas_height, - shapes, shape_groups, filt, pydiffvg.get_use_gpu(), - pydiffvg.get_device().index if pydiffvg.get_device().index is not None else -1) - time_elapsed = time.time() - start - global print_timing - if print_timing: - print('Scene construction, time: %.5f s' % time_elapsed) - - if output_type == OutputType.color: - assert(eval_positions.shape[0] == 0) - rendered_image = torch.zeros(height, width, 4, device = pydiffvg.get_device()) - else: - assert(output_type == OutputType.sdf) - if eval_positions.shape[0] == 0: - rendered_image = torch.zeros(height, width, 1, device = pydiffvg.get_device()) - else: - rendered_image = torch.zeros(eval_positions.shape[0], 1, device = pydiffvg.get_device()) - - if background_image is not None: - background_image = background_image.to(pydiffvg.get_device()) - if background_image.shape[2] == 3: - background_image = torch.cat((\ - background_image, torch.ones(background_image.shape[0], background_image.shape[1], 1, - device = 
background_image.device)), dim = 2) - background_image = background_image.contiguous() - assert(background_image.shape[0] == rendered_image.shape[0]) - assert(background_image.shape[1] == rendered_image.shape[1]) - assert(background_image.shape[2] == 4) - - start = time.time() - diffvg.render(scene, - diffvg.float_ptr(background_image.data_ptr() if background_image is not None else 0), - diffvg.float_ptr(rendered_image.data_ptr() if output_type == OutputType.color else 0), - diffvg.float_ptr(rendered_image.data_ptr() if output_type == OutputType.sdf else 0), - width, - height, - num_samples_x, - num_samples_y, - seed, - diffvg.float_ptr(0), # d_background_image - diffvg.float_ptr(0), # d_render_image - diffvg.float_ptr(0), # d_render_sdf - diffvg.float_ptr(0), # d_translation - use_prefiltering, - diffvg.float_ptr(eval_positions.data_ptr()), - eval_positions.shape[0]) - assert(torch.isfinite(rendered_image).all()) - time_elapsed = time.time() - start - if print_timing: - print('Forward pass, time: %.5f s' % time_elapsed) - - ctx.scene = scene - ctx.background_image = background_image - ctx.shape_contents = shape_contents - ctx.color_contents = color_contents - ctx.filter = filt - ctx.width = width - ctx.height = height - ctx.num_samples_x = num_samples_x - ctx.num_samples_y = num_samples_y - ctx.seed = seed - ctx.output_type = output_type - ctx.use_prefiltering = use_prefiltering - ctx.eval_positions = eval_positions - return rendered_image - - @staticmethod - def render_grad(grad_img, - width, - height, - num_samples_x, - num_samples_y, - seed, - background_image, - *args): - if not grad_img.is_contiguous(): - grad_img = grad_img.contiguous() - assert(torch.isfinite(grad_img).all()) - - # Unpack arguments - current_index = 0 - canvas_width = args[current_index] - current_index += 1 - canvas_height = args[current_index] - current_index += 1 - num_shapes = args[current_index] - current_index += 1 - num_shape_groups = args[current_index] - current_index += 1 - output_type = args[current_index] - current_index += 1 - use_prefiltering = args[current_index] - current_index += 1 - eval_positions = args[current_index] - current_index += 1 - shapes = [] - shape_groups = [] - shape_contents = [] # Important to avoid GC deleting the shapes - color_contents = [] # Same as above - for shape_id in range(num_shapes): - shape_type = args[current_index] - current_index += 1 - if shape_type == diffvg.ShapeType.circle: - radius = args[current_index] - current_index += 1 - center = args[current_index] - current_index += 1 - shape = diffvg.Circle(radius, diffvg.Vector2f(center[0], center[1])) - elif shape_type == diffvg.ShapeType.ellipse: - radius = args[current_index] - current_index += 1 - center = args[current_index] - current_index += 1 - shape = diffvg.Ellipse(diffvg.Vector2f(radius[0], radius[1]), - diffvg.Vector2f(center[0], center[1])) - elif shape_type == diffvg.ShapeType.path: - num_control_points = args[current_index] - current_index += 1 - points = args[current_index] - current_index += 1 - thickness = args[current_index] - current_index += 1 - is_closed = args[current_index] - current_index += 1 - use_distance_approx = args[current_index] - current_index += 1 - shape = diffvg.Path(diffvg.int_ptr(num_control_points.data_ptr()), - diffvg.float_ptr(points.data_ptr()), - diffvg.float_ptr(thickness.data_ptr() if thickness is not None else 0), - num_control_points.shape[0], - points.shape[0], - is_closed, - use_distance_approx) - elif shape_type == diffvg.ShapeType.rect: - p_min = args[current_index] - 
current_index += 1 - p_max = args[current_index] - current_index += 1 - shape = diffvg.Rect(diffvg.Vector2f(p_min[0], p_min[1]), - diffvg.Vector2f(p_max[0], p_max[1])) - else: - assert(False) - stroke_width = args[current_index] - current_index += 1 - shapes.append(diffvg.Shape(\ - shape_type, shape.get_ptr(), stroke_width.item())) - shape_contents.append(shape) - - for shape_group_id in range(num_shape_groups): - shape_ids = args[current_index] - current_index += 1 - fill_color_type = args[current_index] - current_index += 1 - if fill_color_type == diffvg.ColorType.constant: - color = args[current_index] - current_index += 1 - fill_color = diffvg.Constant(\ - diffvg.Vector4f(color[0], color[1], color[2], color[3])) - elif fill_color_type == diffvg.ColorType.linear_gradient: - beg = args[current_index] - current_index += 1 - end = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - fill_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]), - diffvg.Vector2f(end[0], end[1]), - offsets.shape[0], - diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - elif fill_color_type == diffvg.ColorType.radial_gradient: - center = args[current_index] - current_index += 1 - radius = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - fill_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]), - diffvg.Vector2f(radius[0], radius[1]), - offsets.shape[0], - diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - elif fill_color_type is None: - fill_color = None - else: - assert(False) - stroke_color_type = args[current_index] - current_index += 1 - if stroke_color_type == diffvg.ColorType.constant: - color = args[current_index] - current_index += 1 - stroke_color = diffvg.Constant(\ - diffvg.Vector4f(color[0], color[1], color[2], color[3])) - elif stroke_color_type == diffvg.ColorType.linear_gradient: - beg = args[current_index] - current_index += 1 - end = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - stroke_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]), - diffvg.Vector2f(end[0], end[1]), - offsets.shape[0], - diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - elif stroke_color_type == diffvg.ColorType.radial_gradient: - center = args[current_index] - current_index += 1 - radius = args[current_index] - current_index += 1 - offsets = args[current_index] - current_index += 1 - stop_colors = args[current_index] - current_index += 1 - assert(offsets.shape[0] == stop_colors.shape[0]) - stroke_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]), - diffvg.Vector2f(radius[0], radius[1]), - offsets.shape[0], - diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - elif stroke_color_type is None: - stroke_color = None - else: - assert(False) - use_even_odd_rule = args[current_index] - current_index += 1 - shape_to_canvas = args[current_index] - current_index += 1 - - if fill_color is not None: - color_contents.append(fill_color) - if stroke_color is not None: - 
color_contents.append(stroke_color) - shape_groups.append(diffvg.ShapeGroup(\ - diffvg.int_ptr(shape_ids.data_ptr()), - shape_ids.shape[0], - diffvg.ColorType.constant if fill_color_type is None else fill_color_type, - diffvg.void_ptr(0) if fill_color is None else fill_color.get_ptr(), - diffvg.ColorType.constant if stroke_color_type is None else stroke_color_type, - diffvg.void_ptr(0) if stroke_color is None else stroke_color.get_ptr(), - use_even_odd_rule, - diffvg.float_ptr(shape_to_canvas.data_ptr()))) - - filter_type = args[current_index] - current_index += 1 - filter_radius = args[current_index] - current_index += 1 - filt = diffvg.Filter(filter_type, filter_radius) - - scene = diffvg.Scene(canvas_width, canvas_height, - shapes, shape_groups, filt, pydiffvg.get_use_gpu(), - pydiffvg.get_device().index if pydiffvg.get_device().index is not None else -1) - - if output_type == OutputType.color: - assert(grad_img.shape[2] == 4) - else: - assert(grad_img.shape[2] == 1) - - if background_image is not None: - background_image = background_image.to(pydiffvg.get_device()) - if background_image.shape[2] == 3: - background_image = torch.cat((\ - background_image, torch.ones(background_image.shape[0], background_image.shape[1], 1, - device = background_image.device)), dim = 2) - background_image = background_image.contiguous() - assert(background_image.shape[0] == rendered_image.shape[0]) - assert(background_image.shape[1] == rendered_image.shape[1]) - assert(background_image.shape[2] == 4) - - translation_grad_image = \ - torch.zeros(height, width, 2, device = pydiffvg.get_device()) - start = time.time() - diffvg.render(scene, - diffvg.float_ptr(background_image.data_ptr() if background_image is not None else 0), - diffvg.float_ptr(0), # render_image - diffvg.float_ptr(0), # render_sdf - width, - height, - num_samples_x, - num_samples_y, - seed, - diffvg.float_ptr(0), # d_background_image - diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.color else 0), - diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.sdf else 0), - diffvg.float_ptr(translation_grad_image.data_ptr()), - use_prefiltering, - diffvg.float_ptr(eval_positions.data_ptr()), - eval_positions.shape[0]) - time_elapsed = time.time() - start - if print_timing: - print('Gradient pass, time: %.5f s' % time_elapsed) - assert(torch.isfinite(translation_grad_image).all()) - - return translation_grad_image - - @staticmethod - def backward(ctx, - grad_img): - if not grad_img.is_contiguous(): - grad_img = grad_img.contiguous() - assert(torch.isfinite(grad_img).all()) - - scene = ctx.scene - width = ctx.width - height = ctx.height - num_samples_x = ctx.num_samples_x - num_samples_y = ctx.num_samples_y - seed = ctx.seed - output_type = ctx.output_type - use_prefiltering = ctx.use_prefiltering - eval_positions = ctx.eval_positions - background_image = ctx.background_image - - if background_image is not None: - d_background_image = torch.zeros_like(background_image) - else: - d_background_image = None - - start = time.time() - diffvg.render(scene, - diffvg.float_ptr(background_image.data_ptr() if background_image is not None else 0), - diffvg.float_ptr(0), # render_image - diffvg.float_ptr(0), # render_sdf - width, - height, - num_samples_x, - num_samples_y, - seed, - diffvg.float_ptr(d_background_image.data_ptr() if background_image is not None else 0), - diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.color else 0), - diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.sdf else 0), 
- diffvg.float_ptr(0), # d_translation - use_prefiltering, - diffvg.float_ptr(eval_positions.data_ptr()), - eval_positions.shape[0]) - time_elapsed = time.time() - start - global print_timing - if print_timing: - print('Backward pass, time: %.5f s' % time_elapsed) - - d_args = [] - d_args.append(None) # width - d_args.append(None) # height - d_args.append(None) # num_samples_x - d_args.append(None) # num_samples_y - d_args.append(None) # seed - d_args.append(d_background_image) - d_args.append(None) # canvas_width - d_args.append(None) # canvas_height - d_args.append(None) # num_shapes - d_args.append(None) # num_shape_groups - d_args.append(None) # output_type - d_args.append(None) # use_prefiltering - d_args.append(None) # eval_positions - for shape_id in range(scene.num_shapes): - d_args.append(None) # type - d_shape = scene.get_d_shape(shape_id) - use_thickness = False - if d_shape.type == diffvg.ShapeType.circle: - d_circle = d_shape.as_circle() - radius = torch.tensor(d_circle.radius) - assert(torch.isfinite(radius).all()) - d_args.append(radius) - c = d_circle.center - c = torch.tensor((c.x, c.y)) - assert(torch.isfinite(c).all()) - d_args.append(c) - elif d_shape.type == diffvg.ShapeType.ellipse: - d_ellipse = d_shape.as_ellipse() - r = d_ellipse.radius - r = torch.tensor((d_ellipse.radius.x, d_ellipse.radius.y)) - assert(torch.isfinite(r).all()) - d_args.append(r) - c = d_ellipse.center - c = torch.tensor((c.x, c.y)) - assert(torch.isfinite(c).all()) - d_args.append(c) - elif d_shape.type == diffvg.ShapeType.path: - d_path = d_shape.as_path() - points = torch.zeros((d_path.num_points, 2)) - thickness = None - if d_path.has_thickness(): - use_thickness = True - thickness = torch.zeros(d_path.num_points) - d_path.copy_to(diffvg.float_ptr(points.data_ptr()), diffvg.float_ptr(thickness.data_ptr())) - else: - d_path.copy_to(diffvg.float_ptr(points.data_ptr()), diffvg.float_ptr(0)) - assert(torch.isfinite(points).all()) - if thickness is not None: - assert(torch.isfinite(thickness).all()) - d_args.append(None) # num_control_points - d_args.append(points) - d_args.append(thickness) - d_args.append(None) # is_closed - d_args.append(None) # use_distance_approx - elif d_shape.type == diffvg.ShapeType.rect: - d_rect = d_shape.as_rect() - p_min = torch.tensor((d_rect.p_min.x, d_rect.p_min.y)) - p_max = torch.tensor((d_rect.p_max.x, d_rect.p_max.y)) - assert(torch.isfinite(p_min).all()) - assert(torch.isfinite(p_max).all()) - d_args.append(p_min) - d_args.append(p_max) - else: - assert(False) - if use_thickness: - d_args.append(None) - else: - w = torch.tensor((d_shape.stroke_width)) - assert(torch.isfinite(w).all()) - d_args.append(w) - - for group_id in range(scene.num_shape_groups): - d_shape_group = scene.get_d_shape_group(group_id) - d_args.append(None) # shape_ids - d_args.append(None) # fill_color_type - if d_shape_group.has_fill_color(): - if d_shape_group.fill_color_type == diffvg.ColorType.constant: - d_constant = d_shape_group.fill_color_as_constant() - c = d_constant.color - d_args.append(torch.tensor((c.x, c.y, c.z, c.w))) - elif d_shape_group.fill_color_type == diffvg.ColorType.linear_gradient: - d_linear_gradient = d_shape_group.fill_color_as_linear_gradient() - beg = d_linear_gradient.begin - d_args.append(torch.tensor((beg.x, beg.y))) - end = d_linear_gradient.end - d_args.append(torch.tensor((end.x, end.y))) - offsets = torch.zeros((d_linear_gradient.num_stops)) - stop_colors = torch.zeros((d_linear_gradient.num_stops, 4)) - d_linear_gradient.copy_to(\ - 
diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - assert(torch.isfinite(stop_colors).all()) - d_args.append(offsets) - d_args.append(stop_colors) - elif d_shape_group.fill_color_type == diffvg.ColorType.radial_gradient: - d_radial_gradient = d_shape_group.fill_color_as_radial_gradient() - center = d_radial_gradient.center - d_args.append(torch.tensor((center.x, center.y))) - radius = d_radial_gradient.radius - d_args.append(torch.tensor((radius.x, radius.y))) - offsets = torch.zeros((d_radial_gradient.num_stops)) - stop_colors = torch.zeros((d_radial_gradient.num_stops, 4)) - d_radial_gradient.copy_to(\ - diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - assert(torch.isfinite(stop_colors).all()) - d_args.append(offsets) - d_args.append(stop_colors) - else: - assert(False) - d_args.append(None) # stroke_color_type - if d_shape_group.has_stroke_color(): - if d_shape_group.stroke_color_type == diffvg.ColorType.constant: - d_constant = d_shape_group.stroke_color_as_constant() - c = d_constant.color - d_args.append(torch.tensor((c.x, c.y, c.z, c.w))) - elif d_shape_group.stroke_color_type == diffvg.ColorType.linear_gradient: - d_linear_gradient = d_shape_group.stroke_color_as_linear_gradient() - beg = d_linear_gradient.begin - d_args.append(torch.tensor((beg.x, beg.y))) - end = d_linear_gradient.end - d_args.append(torch.tensor((end.x, end.y))) - offsets = torch.zeros((d_linear_gradient.num_stops)) - stop_colors = torch.zeros((d_linear_gradient.num_stops, 4)) - d_linear_gradient.copy_to(\ - diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - assert(torch.isfinite(stop_colors).all()) - d_args.append(offsets) - d_args.append(stop_colors) - elif d_shape_group.fill_color_type == diffvg.ColorType.radial_gradient: - d_radial_gradient = d_shape_group.stroke_color_as_radial_gradient() - center = d_radial_gradient.center - d_args.append(torch.tensor((center.x, center.y))) - radius = d_radial_gradient.radius - d_args.append(torch.tensor((radius.x, radius.y))) - offsets = torch.zeros((d_radial_gradient.num_stops)) - stop_colors = torch.zeros((d_radial_gradient.num_stops, 4)) - d_radial_gradient.copy_to(\ - diffvg.float_ptr(offsets.data_ptr()), - diffvg.float_ptr(stop_colors.data_ptr())) - assert(torch.isfinite(stop_colors).all()) - d_args.append(offsets) - d_args.append(stop_colors) - else: - assert(False) - d_args.append(None) # use_even_odd_rule - d_shape_to_canvas = torch.zeros((3, 3)) - d_shape_group.copy_to(diffvg.float_ptr(d_shape_to_canvas.data_ptr())) - assert(torch.isfinite(d_shape_to_canvas).all()) - d_args.append(d_shape_to_canvas) - d_args.append(None) # filter_type - d_args.append(torch.tensor(scene.get_d_filter_radius())) - - return tuple(d_args) diff --git a/spaces/CVPR/LIVE/thrust/thrust/mr/disjoint_tls_pool.h b/spaces/CVPR/LIVE/thrust/thrust/mr/disjoint_tls_pool.h deleted file mode 100644 index e50eba76255421812bb1b0c4a355e879eef37492..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/mr/disjoint_tls_pool.h +++ /dev/null @@ -1,69 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file disjoint_tls_pool.h - * \brief A function wrapping a thread local instance of a \p disjoint_unsynchronized_pool_resource. - */ - -#pragma once - -#include - -#if THRUST_CPP_DIALECT >= 2011 - -#include - -namespace thrust -{ -namespace mr -{ - -/*! \addtogroup memory_management Memory Management - * \addtogroup memory_resources Memory Resources - * \ingroup memory_resources - * \{ - */ - -/*! Potentially constructs, if not yet created, and then returns the address of a thread-local - * \p disjoint_unsynchronized_pool_resource, - * - * \tparam Upstream the first template argument to the pool template - * \tparam Bookkeeper the second template argument to the pool template - * \param upstream the first argument to the constructor, if invoked - * \param bookkeeper the second argument to the constructor, if invoked - */ -template -__host__ -thrust::mr::disjoint_unsynchronized_pool_resource & tls_disjoint_pool( - Upstream * upstream = NULL, - Bookkeeper * bookkeeper = NULL) -{ - static thread_local auto adaptor = [&]{ - assert(upstream && bookkeeper); - return thrust::mr::disjoint_unsynchronized_pool_resource(upstream, bookkeeper); - }(); - - return adaptor; -} - -/*! \} - */ - -} // end mr -} // end thrust - -#endif // THRUST_CPP_DIALECT >= 2011 - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/for_each.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/for_each.h deleted file mode 100644 index 9ef45dfe62e47d9779c4b60839628efd82c2a5e1..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/for_each.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -// this system inherits for_each -#include - diff --git a/spaces/CVPR/lama-example/saicinpainting/training/modules/spatial_transform.py b/spaces/CVPR/lama-example/saicinpainting/training/modules/spatial_transform.py deleted file mode 100644 index 2de024ba08c549605a08b64d096f1f0db7b7722a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/saicinpainting/training/modules/spatial_transform.py +++ /dev/null @@ -1,49 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from kornia.geometry.transform import rotate - - -class LearnableSpatialTransformWrapper(nn.Module): - def __init__(self, impl, pad_coef=0.5, angle_init_range=80, train_angle=True): - super().__init__() - self.impl = impl - self.angle = torch.rand(1) * angle_init_range - if train_angle: - self.angle = nn.Parameter(self.angle, requires_grad=True) - self.pad_coef = pad_coef - - def forward(self, x): - if torch.is_tensor(x): - return self.inverse_transform(self.impl(self.transform(x)), x) - elif isinstance(x, tuple): - x_trans = tuple(self.transform(elem) for elem in x) - y_trans = self.impl(x_trans) - return tuple(self.inverse_transform(elem, orig_x) for elem, orig_x in zip(y_trans, x)) - else: - raise ValueError(f'Unexpected input type {type(x)}') - - def transform(self, x): - height, width = x.shape[2:] - pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef) - x_padded = F.pad(x, [pad_w, pad_w, pad_h, pad_h], mode='reflect') - x_padded_rotated = rotate(x_padded, angle=self.angle.to(x_padded)) - return x_padded_rotated - - def inverse_transform(self, y_padded_rotated, orig_x): - height, width = orig_x.shape[2:] - pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef) - - y_padded = rotate(y_padded_rotated, angle=-self.angle.to(y_padded_rotated)) - y_height, y_width = y_padded.shape[2:] - y = y_padded[:, :, pad_h : y_height - pad_h, pad_w : y_width - pad_w] - return y - - -if __name__ == '__main__': - layer = LearnableSpatialTransformWrapper(nn.Identity()) - x = torch.arange(2* 3 * 15 * 15).view(2, 3, 15, 15).float() - y = layer(x) - assert x.shape == y.shape - assert torch.allclose(x[:, :, 1:, 1:][:, :, :-1, :-1], y[:, :, 1:, 1:][:, :, :-1, :-1]) - print('all ok') diff --git a/spaces/Chintan-Donda/KKMS-KSSW-HF/src/translator.py b/spaces/Chintan-Donda/KKMS-KSSW-HF/src/translator.py deleted file mode 100644 index 20a9473388d6404ef0ed316a04ec01f12e153686..0000000000000000000000000000000000000000 --- a/spaces/Chintan-Donda/KKMS-KSSW-HF/src/translator.py +++ /dev/null @@ -1,61 +0,0 @@ -import src.constants as constants_utils -import requests -from transformers import AutoModelForSeq2SeqLM, AutoTokenizer -from mosestokenizer import * -from indicnlp.tokenize import sentence_tokenize -from googletrans import Translator, constants - - -class TRANSLATOR: - def __init__(self): - print() - - - def split_sentences(self, paragraph, language): - if language == "en": - with MosesSentenceSplitter(language) as splitter: - return splitter([paragraph]) - elif language in constants_utils.INDIC_LANGUAGE: - return sentence_tokenize.sentence_split(paragraph, lang=language) - - - def get_in_hindi(self, payload): - tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") - model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M") - article = self.split_sentences(payload['inputs'], 'en') - # inputs = tokenizer(payload['input'], return_tensors="pt") - out_text = "" - for a in article: - inputs = 
tokenizer(a, return_tensors="pt") - translated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["hin_Deva"], max_length=100) - translated_sent = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] - out_text = out_text.join(translated_sent) - return out_text - - - def get_in_indic(self, text, language='Hindi'): - tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") - model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M") - inputs = tokenizer(text, return_tensors="pt") - - code = "eng_Latn" - if language == 'Hindi': - code= "hin_Deva" - elif language == 'Marathi': - code = "mar_Deva" - - translated_tokens = model.generate( - **inputs, - forced_bos_token_id=tokenizer.lang_code_to_id[code], - max_length=1000 - ) - - out_text = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] - return out_text - - - def get_indic_google_translate(self, text, language='Hindi'): - # Init the Google API translator - translator = Translator() - translations = translator.translate(text, dest=constants_utils.INDIC_LANGUAGE.get(language, 'en')) - return str(translations.text) diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/base_dataset.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/base_dataset.py deleted file mode 100644 index ae2a8d0e21370129c0182cddc427eb293bbe5982..0000000000000000000000000000000000000000 --- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/base_dataset.py +++ /dev/null @@ -1,68 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import json -from typing import Iterable - -from torch.utils.data import Dataset, ConcatDataset -from torch.utils.data.dataloader import default_collate - - -class BaseDataset(Dataset): - def __init__( - self, vis_processor=None, text_processor=None, vis_root=None, ann_paths=[] - ): - """ - vis_root (string): Root directory of images (e.g. 
coco/images/) - ann_root (string): directory to store the annotation file - """ - self.vis_root = vis_root - - self.annotation = [] - for ann_path in ann_paths: - self.annotation.extend(json.load(open(ann_path, "r"))['annotations']) - - self.vis_processor = vis_processor - self.text_processor = text_processor - - self._add_instance_ids() - - def __len__(self): - return len(self.annotation) - - def collater(self, samples): - return default_collate(samples) - - def set_processors(self, vis_processor, text_processor): - self.vis_processor = vis_processor - self.text_processor = text_processor - - def _add_instance_ids(self, key="instance_id"): - for idx, ann in enumerate(self.annotation): - ann[key] = str(idx) - - -class ConcatDataset(ConcatDataset): - def __init__(self, datasets: Iterable[Dataset]) -> None: - super().__init__(datasets) - - def collater(self, samples): - # TODO For now only supports datasets with same underlying collater implementations - - all_keys = set() - for s in samples: - all_keys.update(s) - - shared_keys = all_keys - for s in samples: - shared_keys = shared_keys & set(s.keys()) - - samples_shared_keys = [] - for s in samples: - samples_shared_keys.append({k: s[k] for k in s.keys() if k in shared_keys}) - - return self.datasets[0].collater(samples_shared_keys) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageTk.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageTk.py deleted file mode 100644 index bf98eb2c8c25c7446dd91890f49291486222f3b8..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageTk.py +++ /dev/null @@ -1,283 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# a Tk display interface -# -# History: -# 96-04-08 fl Created -# 96-09-06 fl Added getimage method -# 96-11-01 fl Rewritten, removed image attribute and crop method -# 97-05-09 fl Use PyImagingPaste method instead of image type -# 97-05-12 fl Minor tweaks to match the IFUNC95 interface -# 97-05-17 fl Support the "pilbitmap" booster patch -# 97-06-05 fl Added file= and data= argument to image constructors -# 98-03-09 fl Added width and height methods to Image classes -# 98-07-02 fl Use default mode for "P" images without palette attribute -# 98-07-02 fl Explicitly destroy Tkinter image objects -# 99-07-24 fl Support multiple Tk interpreters (from Greg Couch) -# 99-07-26 fl Automatically hook into Tkinter (if possible) -# 99-08-15 fl Hook uses _imagingtk instead of _imaging -# -# Copyright (c) 1997-1999 by Secret Labs AB -# Copyright (c) 1996-1997 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import tkinter -from io import BytesIO - -from . import Image - -# -------------------------------------------------------------------- -# Check for Tkinter interface hooks - -_pilbitmap_ok = None - - -def _pilbitmap_check(): - global _pilbitmap_ok - if _pilbitmap_ok is None: - try: - im = Image.new("1", (1, 1)) - tkinter.BitmapImage(data=f"PIL:{im.im.id}") - _pilbitmap_ok = 1 - except tkinter.TclError: - _pilbitmap_ok = 0 - return _pilbitmap_ok - - -def _get_image_from_kw(kw): - source = None - if "file" in kw: - source = kw.pop("file") - elif "data" in kw: - source = BytesIO(kw.pop("data")) - if source: - return Image.open(source) - - -def _pyimagingtkcall(command, photo, id): - tk = photo.tk - try: - tk.call(command, photo, id) - except tkinter.TclError: - # activate Tkinter hook - # may raise an error if it cannot attach to Tkinter - from . 
import _imagingtk - - _imagingtk.tkinit(tk.interpaddr()) - tk.call(command, photo, id) - - -# -------------------------------------------------------------------- -# PhotoImage - - -class PhotoImage: - """ - A Tkinter-compatible photo image. This can be used - everywhere Tkinter expects an image object. If the image is an RGBA - image, pixels having alpha 0 are treated as transparent. - - The constructor takes either a PIL image, or a mode and a size. - Alternatively, you can use the ``file`` or ``data`` options to initialize - the photo image object. - - :param image: Either a PIL image, or a mode string. If a mode string is - used, a size must also be given. - :param size: If the first argument is a mode string, this defines the size - of the image. - :keyword file: A filename to load the image from (using - ``Image.open(file)``). - :keyword data: An 8-bit string containing image data (as loaded from an - image file). - """ - - def __init__(self, image=None, size=None, **kw): - # Tk compatibility: file or data - if image is None: - image = _get_image_from_kw(kw) - - if hasattr(image, "mode") and hasattr(image, "size"): - # got an image instead of a mode - mode = image.mode - if mode == "P": - # palette mapped data - image.apply_transparency() - image.load() - try: - mode = image.palette.mode - except AttributeError: - mode = "RGB" # default - size = image.size - kw["width"], kw["height"] = size - else: - mode = image - image = None - - if mode not in ["1", "L", "RGB", "RGBA"]: - mode = Image.getmodebase(mode) - - self.__mode = mode - self.__size = size - self.__photo = tkinter.PhotoImage(**kw) - self.tk = self.__photo.tk - if image: - self.paste(image) - - def __del__(self): - name = self.__photo.name - self.__photo.name = None - try: - self.__photo.tk.call("image", "delete", name) - except Exception: - pass # ignore internal errors - - def __str__(self): - """ - Get the Tkinter photo image identifier. This method is automatically - called by Tkinter whenever a PhotoImage object is passed to a Tkinter - method. - - :return: A Tkinter photo image identifier (a string). - """ - return str(self.__photo) - - def width(self): - """ - Get the width of the image. - - :return: The width, in pixels. - """ - return self.__size[0] - - def height(self): - """ - Get the height of the image. - - :return: The height, in pixels. - """ - return self.__size[1] - - def paste(self, im): - """ - Paste a PIL image into the photo image. Note that this can - be very slow if the photo image is displayed. - - :param im: A PIL image. The size must match the target region. If the - mode does not match, the image is converted to the mode of - the bitmap image. - """ - # convert to blittable - im.load() - image = im.im - if image.isblock() and im.mode == self.__mode: - block = image - else: - block = image.new_block(self.__mode, im.size) - image.convert2(block, image) # convert directly between buffers - - _pyimagingtkcall("PyImagingPhoto", self.__photo, block.id) - - -# -------------------------------------------------------------------- -# BitmapImage - - -class BitmapImage: - """ - A Tkinter-compatible bitmap image. This can be used everywhere Tkinter - expects an image object. - - The given image must have mode "1". Pixels having value 0 are treated as - transparent. Options, if any, are passed on to Tkinter. The most commonly - used option is ``foreground``, which is used to specify the color for the - non-transparent parts. See the Tkinter documentation for information on - how to specify colours. 
- - :param image: A PIL image. - """ - - def __init__(self, image=None, **kw): - # Tk compatibility: file or data - if image is None: - image = _get_image_from_kw(kw) - - self.__mode = image.mode - self.__size = image.size - - if _pilbitmap_check(): - # fast way (requires the pilbitmap booster patch) - image.load() - kw["data"] = f"PIL:{image.im.id}" - self.__im = image # must keep a reference - else: - # slow but safe way - kw["data"] = image.tobitmap() - self.__photo = tkinter.BitmapImage(**kw) - - def __del__(self): - name = self.__photo.name - self.__photo.name = None - try: - self.__photo.tk.call("image", "delete", name) - except Exception: - pass # ignore internal errors - - def width(self): - """ - Get the width of the image. - - :return: The width, in pixels. - """ - return self.__size[0] - - def height(self): - """ - Get the height of the image. - - :return: The height, in pixels. - """ - return self.__size[1] - - def __str__(self): - """ - Get the Tkinter bitmap image identifier. This method is automatically - called by Tkinter whenever a BitmapImage object is passed to a Tkinter - method. - - :return: A Tkinter bitmap image identifier (a string). - """ - return str(self.__photo) - - -def getimage(photo): - """Copies the contents of a PhotoImage to a PIL image memory.""" - im = Image.new("RGBA", (photo.width(), photo.height())) - block = im.im - - _pyimagingtkcall("PyImagingPhotoGet", photo, block.id) - - return im - - -def _show(image, title): - """Helper for the Image.show method.""" - - class UI(tkinter.Label): - def __init__(self, master, im): - if im.mode == "1": - self.image = BitmapImage(im, foreground="white", master=master) - else: - self.image = PhotoImage(im, master=master) - super().__init__(master, image=self.image, bg="black", bd=0) - - if not tkinter._default_root: - msg = "tkinter not initialized" - raise OSError(msg) - top = tkinter.Toplevel() - if title: - top.title(title) - UI(top, image).pack() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiofiles/tempfile/temptypes.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiofiles/tempfile/temptypes.py deleted file mode 100644 index b17e0257b7999e2512f125f8b74d266158f23820..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiofiles/tempfile/temptypes.py +++ /dev/null @@ -1,73 +0,0 @@ -"""Async wrappers for spooled temp files and temp directory objects""" - -# Imports -import asyncio -from types import coroutine - -from ..base import AsyncBase -from ..threadpool.utils import ( - delegate_to_executor, - proxy_property_directly, - cond_delegate_to_executor, -) -from functools import partial - - -@delegate_to_executor("fileno", "rollover") -@cond_delegate_to_executor( - "close", - "flush", - "isatty", - "read", - "readline", - "readlines", - "seek", - "tell", - "truncate", -) -@proxy_property_directly("closed", "encoding", "mode", "name", "newlines") -class AsyncSpooledTemporaryFile(AsyncBase): - """Async wrapper for SpooledTemporaryFile class""" - - async def _check(self): - if self._file._rolled: - return - max_size = self._file._max_size - if max_size and self._file.tell() > max_size: - await self.rollover() - - async def write(self, s): - """Implementation to anticipate rollover""" - if self._file._rolled: - cb = partial(self._file.write, s) - return await self._loop.run_in_executor(self._executor, cb) - else: - file = self._file._file # reference underlying base IO object - rv = file.write(s) - await self._check() - return rv 
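# Editor's note: a usage sketch, not part of the deleted aiofiles module above.
# The write()/_check() pair shown above keeps data in an in-memory buffer and
# rolls it over to a real temporary file once max_size is exceeded. Assuming
# the wrapper is exposed as aiofiles.tempfile.SpooledTemporaryFile, typical
# use would look roughly like this:
import asyncio
import aiofiles.tempfile

async def _spooled_demo():
    async with aiofiles.tempfile.SpooledTemporaryFile(max_size=1024) as f:
        await f.write(b"x" * 2048)   # larger than max_size, so the file rolls over to disk
        await f.seek(0)
        return len(await f.read())

asyncio.run(_spooled_demo())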
- - async def writelines(self, iterable): - """Implementation to anticipate rollover""" - if self._file._rolled: - cb = partial(self._file.writelines, iterable) - return await self._loop.run_in_executor(self._executor, cb) - else: - file = self._file._file # reference underlying base IO object - rv = file.writelines(iterable) - await self._check() - return rv - - -@delegate_to_executor("cleanup") -@proxy_property_directly("name") -class AsyncTemporaryDirectory: - """Async wrapper for TemporaryDirectory class""" - - def __init__(self, file, loop, executor): - self._file = file - self._loop = loop - self._executor = executor - - async def close(self): - await self.cleanup() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/statisticsPen.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/statisticsPen.py deleted file mode 100644 index 3468fc72e54a19cd5bdf2e6d459e848566208df8..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/statisticsPen.py +++ /dev/null @@ -1,122 +0,0 @@ -"""Pen calculating area, center of mass, variance and standard-deviation, -covariance and correlation, and slant, of glyph shapes.""" -import math -from fontTools.pens.momentsPen import MomentsPen - -__all__ = ["StatisticsPen"] - - -class StatisticsPen(MomentsPen): - - """Pen calculating area, center of mass, variance and - standard-deviation, covariance and correlation, and slant, - of glyph shapes. - - Note that all the calculated values are 'signed'. Ie. if the - glyph shape is self-intersecting, the values are not correct - (but well-defined). As such, area will be negative if contour - directions are clockwise. Moreover, variance might be negative - if the shapes are self-intersecting in certain ways.""" - - def __init__(self, glyphset=None): - MomentsPen.__init__(self, glyphset=glyphset) - self.__zero() - - def _closePath(self): - MomentsPen._closePath(self) - self.__update() - - def __zero(self): - self.meanX = 0 - self.meanY = 0 - self.varianceX = 0 - self.varianceY = 0 - self.stddevX = 0 - self.stddevY = 0 - self.covariance = 0 - self.correlation = 0 - self.slant = 0 - - def __update(self): - - area = self.area - if not area: - self.__zero() - return - - # Center of mass - # https://en.wikipedia.org/wiki/Center_of_mass#A_continuous_volume - self.meanX = meanX = self.momentX / area - self.meanY = meanY = self.momentY / area - - # Var(X) = E[X^2] - E[X]^2 - self.varianceX = varianceX = self.momentXX / area - meanX**2 - self.varianceY = varianceY = self.momentYY / area - meanY**2 - - self.stddevX = stddevX = math.copysign(abs(varianceX) ** 0.5, varianceX) - self.stddevY = stddevY = math.copysign(abs(varianceY) ** 0.5, varianceY) - - # Covariance(X,Y) = ( E[X.Y] - E[X]E[Y] ) - self.covariance = covariance = self.momentXY / area - meanX * meanY - - # Correlation(X,Y) = Covariance(X,Y) / ( stddev(X) * stddev(Y) ) - # https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient - if stddevX * stddevY == 0: - correlation = float("NaN") - else: - correlation = covariance / (stddevX * stddevY) - self.correlation = correlation if abs(correlation) > 1e-3 else 0 - - slant = covariance / varianceY if varianceY != 0 else float("NaN") - self.slant = slant if abs(slant) > 1e-3 else 0 - - -def _test(glyphset, upem, glyphs): - from fontTools.pens.transformPen import TransformPen - from fontTools.misc.transform import Scale - - print("upem", upem) - - for glyph_name in glyphs: - print() - 
print("glyph:", glyph_name) - glyph = glyphset[glyph_name] - pen = StatisticsPen(glyphset=glyphset) - transformer = TransformPen(pen, Scale(1.0 / upem)) - glyph.draw(transformer) - for item in [ - "area", - "momentX", - "momentY", - "momentXX", - "momentYY", - "momentXY", - "meanX", - "meanY", - "varianceX", - "varianceY", - "stddevX", - "stddevY", - "covariance", - "correlation", - "slant", - ]: - print("%s: %g" % (item, getattr(pen, item))) - - -def main(args): - if not args: - return - filename, glyphs = args[0], args[1:] - from fontTools.ttLib import TTFont - - font = TTFont(filename) - if not glyphs: - glyphs = font.getGlyphOrder() - _test(font.getGlyphSet(), font["head"].unitsPerEm, glyphs) - - -if __name__ == "__main__": - import sys - - main(sys.argv[1:]) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I__5.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I__5.py deleted file mode 100644 index 5edc86a9cbc9a0b710cfc014a3910f671f791e54..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I__5.py +++ /dev/null @@ -1,46 +0,0 @@ -""" TSI{0,1,2,3,5} are private tables used by Microsoft Visual TrueType (VTT) -tool to store its hinting source data. - -TSI5 contains the VTT character groups. -""" -from fontTools.misc.textTools import safeEval -from . import DefaultTable -import sys -import array - - -class table_T_S_I__5(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - numGlyphs = ttFont["maxp"].numGlyphs - assert len(data) == 2 * numGlyphs - a = array.array("H") - a.frombytes(data) - if sys.byteorder != "big": - a.byteswap() - self.glyphGrouping = {} - for i in range(numGlyphs): - self.glyphGrouping[ttFont.getGlyphName(i)] = a[i] - - def compile(self, ttFont): - glyphNames = ttFont.getGlyphOrder() - a = array.array("H") - for i in range(len(glyphNames)): - a.append(self.glyphGrouping.get(glyphNames[i], 0)) - if sys.byteorder != "big": - a.byteswap() - return a.tobytes() - - def toXML(self, writer, ttFont): - names = sorted(self.glyphGrouping.keys()) - for glyphName in names: - writer.simpletag( - "glyphgroup", name=glyphName, value=self.glyphGrouping[glyphName] - ) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if not hasattr(self, "glyphGrouping"): - self.glyphGrouping = {} - if name != "glyphgroup": - return - self.glyphGrouping[attrs["name"]] = safeEval(attrs["value"]) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otConverters.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otConverters.py deleted file mode 100644 index 6b2a8c39678af0f4828ee477e57038d81d02006b..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otConverters.py +++ /dev/null @@ -1,1929 +0,0 @@ -from fontTools.misc.fixedTools import ( - fixedToFloat as fi2fl, - floatToFixed as fl2fi, - floatToFixedToStr as fl2str, - strToFixedToFloat as str2fl, - ensureVersionIsLong as fi2ve, - versionToFixed as ve2fi, -) -from fontTools.misc.roundTools import nearestMultipleShortestRepr, otRound -from fontTools.misc.textTools import bytesjoin, tobytes, tostr, pad, safeEval -from fontTools.ttLib import getSearchRange -from .otBase import ( - CountReference, - FormatSwitchingBaseTable, - OTTableReader, - OTTableWriter, - ValueRecordFactory, -) -from .otTables import ( - 
lookupTypes, - AATStateTable, - AATState, - AATAction, - ContextualMorphAction, - LigatureMorphAction, - InsertionMorphAction, - MorxSubtable, - ExtendMode as _ExtendMode, - CompositeMode as _CompositeMode, - NO_VARIATION_INDEX, -) -from itertools import zip_longest -from functools import partial -import re -import struct -from typing import Optional -import logging - - -log = logging.getLogger(__name__) -istuple = lambda t: isinstance(t, tuple) - - -def buildConverters(tableSpec, tableNamespace): - """Given a table spec from otData.py, build a converter object for each - field of the table. This is called for each table in otData.py, and - the results are assigned to the corresponding class in otTables.py.""" - converters = [] - convertersByName = {} - for tp, name, repeat, aux, descr in tableSpec: - tableName = name - if name.startswith("ValueFormat"): - assert tp == "uint16" - converterClass = ValueFormat - elif name.endswith("Count") or name in ("StructLength", "MorphType"): - converterClass = { - "uint8": ComputedUInt8, - "uint16": ComputedUShort, - "uint32": ComputedULong, - }[tp] - elif name == "SubTable": - converterClass = SubTable - elif name == "ExtSubTable": - converterClass = ExtSubTable - elif name == "SubStruct": - converterClass = SubStruct - elif name == "FeatureParams": - converterClass = FeatureParams - elif name in ("CIDGlyphMapping", "GlyphCIDMapping"): - converterClass = StructWithLength - else: - if not tp in converterMapping and "(" not in tp: - tableName = tp - converterClass = Struct - else: - converterClass = eval(tp, tableNamespace, converterMapping) - - conv = converterClass(name, repeat, aux, description=descr) - - if conv.tableClass: - # A "template" such as OffsetTo(AType) knowss the table class already - tableClass = conv.tableClass - elif tp in ("MortChain", "MortSubtable", "MorxChain"): - tableClass = tableNamespace.get(tp) - else: - tableClass = tableNamespace.get(tableName) - - if not conv.tableClass: - conv.tableClass = tableClass - - if name in ["SubTable", "ExtSubTable", "SubStruct"]: - conv.lookupTypes = tableNamespace["lookupTypes"] - # also create reverse mapping - for t in conv.lookupTypes.values(): - for cls in t.values(): - convertersByName[cls.__name__] = Table(name, repeat, aux, cls) - if name == "FeatureParams": - conv.featureParamTypes = tableNamespace["featureParamTypes"] - conv.defaultFeatureParams = tableNamespace["FeatureParams"] - for cls in conv.featureParamTypes.values(): - convertersByName[cls.__name__] = Table(name, repeat, aux, cls) - converters.append(conv) - assert name not in convertersByName, name - convertersByName[name] = conv - return converters, convertersByName - - -class _MissingItem(tuple): - __slots__ = () - - -try: - from collections import UserList -except ImportError: - from UserList import UserList - - -class _LazyList(UserList): - def __getslice__(self, i, j): - return self.__getitem__(slice(i, j)) - - def __getitem__(self, k): - if isinstance(k, slice): - indices = range(*k.indices(len(self))) - return [self[i] for i in indices] - item = self.data[k] - if isinstance(item, _MissingItem): - self.reader.seek(self.pos + item[0] * self.recordSize) - item = self.conv.read(self.reader, self.font, {}) - self.data[k] = item - return item - - def __add__(self, other): - if isinstance(other, _LazyList): - other = list(other) - elif isinstance(other, list): - pass - else: - return NotImplemented - return list(self) + other - - def __radd__(self, other): - if not isinstance(other, list): - return NotImplemented - return other 
+ list(self) - - -class BaseConverter(object): - - """Base class for converter objects. Apart from the constructor, this - is an abstract class.""" - - def __init__(self, name, repeat, aux, tableClass=None, *, description=""): - self.name = name - self.repeat = repeat - self.aux = aux - self.tableClass = tableClass - self.isCount = name.endswith("Count") or name in [ - "DesignAxisRecordSize", - "ValueRecordSize", - ] - self.isLookupType = name.endswith("LookupType") or name == "MorphType" - self.isPropagated = name in [ - "ClassCount", - "Class2Count", - "FeatureTag", - "SettingsCount", - "VarRegionCount", - "MappingCount", - "RegionAxisCount", - "DesignAxisCount", - "DesignAxisRecordSize", - "AxisValueCount", - "ValueRecordSize", - "AxisCount", - "BaseGlyphRecordCount", - "LayerRecordCount", - ] - self.description = description - - def readArray(self, reader, font, tableDict, count): - """Read an array of values from the reader.""" - lazy = font.lazy and count > 8 - if lazy: - recordSize = self.getRecordSize(reader) - if recordSize is NotImplemented: - lazy = False - if not lazy: - l = [] - for i in range(count): - l.append(self.read(reader, font, tableDict)) - return l - else: - l = _LazyList() - l.reader = reader.copy() - l.pos = l.reader.pos - l.font = font - l.conv = self - l.recordSize = recordSize - l.extend(_MissingItem([i]) for i in range(count)) - reader.advance(count * recordSize) - return l - - def getRecordSize(self, reader): - if hasattr(self, "staticSize"): - return self.staticSize - return NotImplemented - - def read(self, reader, font, tableDict): - """Read a value from the reader.""" - raise NotImplementedError(self) - - def writeArray(self, writer, font, tableDict, values): - try: - for i, value in enumerate(values): - self.write(writer, font, tableDict, value, i) - except Exception as e: - e.args = e.args + (i,) - raise - - def write(self, writer, font, tableDict, value, repeatIndex=None): - """Write a value to the writer.""" - raise NotImplementedError(self) - - def xmlRead(self, attrs, content, font): - """Read a value from XML.""" - raise NotImplementedError(self) - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - """Write a value to XML.""" - raise NotImplementedError(self) - - varIndexBasePlusOffsetRE = re.compile(r"VarIndexBase\s*\+\s*(\d+)") - - def getVarIndexOffset(self) -> Optional[int]: - """If description has `VarIndexBase + {offset}`, return the offset else None.""" - m = self.varIndexBasePlusOffsetRE.search(self.description) - if not m: - return None - return int(m.group(1)) - - -class SimpleValue(BaseConverter): - @staticmethod - def toString(value): - return value - - @staticmethod - def fromString(value): - return value - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - xmlWriter.simpletag(name, attrs + [("value", self.toString(value))]) - xmlWriter.newline() - - def xmlRead(self, attrs, content, font): - return self.fromString(attrs["value"]) - - -class OptionalValue(SimpleValue): - DEFAULT = None - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - if value != self.DEFAULT: - attrs.append(("value", self.toString(value))) - xmlWriter.simpletag(name, attrs) - xmlWriter.newline() - - def xmlRead(self, attrs, content, font): - if "value" in attrs: - return self.fromString(attrs["value"]) - return self.DEFAULT - - -class IntValue(SimpleValue): - @staticmethod - def fromString(value): - return int(value, 0) - - -class Long(IntValue): - staticSize = 4 - - def read(self, reader, font, tableDict): - return reader.readLong() - 
- def readArray(self, reader, font, tableDict, count): - return reader.readLongArray(count) - - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer.writeLong(value) - - def writeArray(self, writer, font, tableDict, values): - writer.writeLongArray(values) - - -class ULong(IntValue): - staticSize = 4 - - def read(self, reader, font, tableDict): - return reader.readULong() - - def readArray(self, reader, font, tableDict, count): - return reader.readULongArray(count) - - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer.writeULong(value) - - def writeArray(self, writer, font, tableDict, values): - writer.writeULongArray(values) - - -class Flags32(ULong): - @staticmethod - def toString(value): - return "0x%08X" % value - - -class VarIndex(OptionalValue, ULong): - DEFAULT = NO_VARIATION_INDEX - - -class Short(IntValue): - staticSize = 2 - - def read(self, reader, font, tableDict): - return reader.readShort() - - def readArray(self, reader, font, tableDict, count): - return reader.readShortArray(count) - - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer.writeShort(value) - - def writeArray(self, writer, font, tableDict, values): - writer.writeShortArray(values) - - -class UShort(IntValue): - staticSize = 2 - - def read(self, reader, font, tableDict): - return reader.readUShort() - - def readArray(self, reader, font, tableDict, count): - return reader.readUShortArray(count) - - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer.writeUShort(value) - - def writeArray(self, writer, font, tableDict, values): - writer.writeUShortArray(values) - - -class Int8(IntValue): - staticSize = 1 - - def read(self, reader, font, tableDict): - return reader.readInt8() - - def readArray(self, reader, font, tableDict, count): - return reader.readInt8Array(count) - - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer.writeInt8(value) - - def writeArray(self, writer, font, tableDict, values): - writer.writeInt8Array(values) - - -class UInt8(IntValue): - staticSize = 1 - - def read(self, reader, font, tableDict): - return reader.readUInt8() - - def readArray(self, reader, font, tableDict, count): - return reader.readUInt8Array(count) - - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer.writeUInt8(value) - - def writeArray(self, writer, font, tableDict, values): - writer.writeUInt8Array(values) - - -class UInt24(IntValue): - staticSize = 3 - - def read(self, reader, font, tableDict): - return reader.readUInt24() - - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer.writeUInt24(value) - - -class ComputedInt(IntValue): - def xmlWrite(self, xmlWriter, font, value, name, attrs): - if value is not None: - xmlWriter.comment("%s=%s" % (name, value)) - xmlWriter.newline() - - -class ComputedUInt8(ComputedInt, UInt8): - pass - - -class ComputedUShort(ComputedInt, UShort): - pass - - -class ComputedULong(ComputedInt, ULong): - pass - - -class Tag(SimpleValue): - staticSize = 4 - - def read(self, reader, font, tableDict): - return reader.readTag() - - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer.writeTag(value) - - -class GlyphID(SimpleValue): - staticSize = 2 - typecode = "H" - - def readArray(self, reader, font, tableDict, count): - return font.getGlyphNameMany( - reader.readArray(self.typecode, self.staticSize, count) - ) - - def read(self, reader, font, tableDict): - return 
font.getGlyphName(reader.readValue(self.typecode, self.staticSize)) - - def writeArray(self, writer, font, tableDict, values): - writer.writeArray(self.typecode, font.getGlyphIDMany(values)) - - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer.writeValue(self.typecode, font.getGlyphID(value)) - - -class GlyphID32(GlyphID): - staticSize = 4 - typecode = "L" - - -class NameID(UShort): - def xmlWrite(self, xmlWriter, font, value, name, attrs): - xmlWriter.simpletag(name, attrs + [("value", value)]) - if font and value: - nameTable = font.get("name") - if nameTable: - name = nameTable.getDebugName(value) - xmlWriter.write(" ") - if name: - xmlWriter.comment(name) - else: - xmlWriter.comment("missing from name table") - log.warning("name id %d missing from name table" % value) - xmlWriter.newline() - - -class STATFlags(UShort): - def xmlWrite(self, xmlWriter, font, value, name, attrs): - xmlWriter.simpletag(name, attrs + [("value", value)]) - flags = [] - if value & 0x01: - flags.append("OlderSiblingFontAttribute") - if value & 0x02: - flags.append("ElidableAxisValueName") - if flags: - xmlWriter.write(" ") - xmlWriter.comment(" ".join(flags)) - xmlWriter.newline() - - -class FloatValue(SimpleValue): - @staticmethod - def fromString(value): - return float(value) - - -class DeciPoints(FloatValue): - staticSize = 2 - - def read(self, reader, font, tableDict): - return reader.readUShort() / 10 - - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer.writeUShort(round(value * 10)) - - -class BaseFixedValue(FloatValue): - staticSize = NotImplemented - precisionBits = NotImplemented - readerMethod = NotImplemented - writerMethod = NotImplemented - - def read(self, reader, font, tableDict): - return self.fromInt(getattr(reader, self.readerMethod)()) - - def write(self, writer, font, tableDict, value, repeatIndex=None): - getattr(writer, self.writerMethod)(self.toInt(value)) - - @classmethod - def fromInt(cls, value): - return fi2fl(value, cls.precisionBits) - - @classmethod - def toInt(cls, value): - return fl2fi(value, cls.precisionBits) - - @classmethod - def fromString(cls, value): - return str2fl(value, cls.precisionBits) - - @classmethod - def toString(cls, value): - return fl2str(value, cls.precisionBits) - - -class Fixed(BaseFixedValue): - staticSize = 4 - precisionBits = 16 - readerMethod = "readLong" - writerMethod = "writeLong" - - -class F2Dot14(BaseFixedValue): - staticSize = 2 - precisionBits = 14 - readerMethod = "readShort" - writerMethod = "writeShort" - - -class Angle(F2Dot14): - # angles are specified in degrees, and encoded as F2Dot14 fractions of half - # circle: e.g. 1.0 => 180, -0.5 => -90, -2.0 => -360, etc. 
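# Editor's note: an illustrative check, not part of the deleted fontTools file.
# The comment above describes the F2Dot14 angle encoding; one raw unit equals
# 180 / 2**14 degrees, so the mapping can be verified with plain arithmetic:
raw = 0x2000                        # F2Dot14 encoding of 0.5
degrees = raw / (1 << 14) * 180     # 0.5 of a half circle
assert degrees == 90.0              # matches the mapping 1.0 => 180, -0.5 => -90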
- bias = 0.0 - factor = 1.0 / (1 << 14) * 180 # 0.010986328125 - - @classmethod - def fromInt(cls, value): - return (super().fromInt(value) + cls.bias) * 180 - - @classmethod - def toInt(cls, value): - return super().toInt((value / 180) - cls.bias) - - @classmethod - def fromString(cls, value): - # quantize to nearest multiples of minimum fixed-precision angle - return otRound(float(value) / cls.factor) * cls.factor - - @classmethod - def toString(cls, value): - return nearestMultipleShortestRepr(value, cls.factor) - - -class BiasedAngle(Angle): - # A bias of 1.0 is used in the representation of start and end angles - # of COLRv1 PaintSweepGradients to allow for encoding +360deg - bias = 1.0 - - -class Version(SimpleValue): - staticSize = 4 - - def read(self, reader, font, tableDict): - value = reader.readLong() - return value - - def write(self, writer, font, tableDict, value, repeatIndex=None): - value = fi2ve(value) - writer.writeLong(value) - - @staticmethod - def fromString(value): - return ve2fi(value) - - @staticmethod - def toString(value): - return "0x%08x" % value - - @staticmethod - def fromFloat(v): - return fl2fi(v, 16) - - -class Char64(SimpleValue): - """An ASCII string with up to 64 characters. - - Unused character positions are filled with 0x00 bytes. - Used in Apple AAT fonts in the `gcid` table. - """ - - staticSize = 64 - - def read(self, reader, font, tableDict): - data = reader.readData(self.staticSize) - zeroPos = data.find(b"\0") - if zeroPos >= 0: - data = data[:zeroPos] - s = tostr(data, encoding="ascii", errors="replace") - if s != tostr(data, encoding="ascii", errors="ignore"): - log.warning('replaced non-ASCII characters in "%s"' % s) - return s - - def write(self, writer, font, tableDict, value, repeatIndex=None): - data = tobytes(value, encoding="ascii", errors="replace") - if data != tobytes(value, encoding="ascii", errors="ignore"): - log.warning('replacing non-ASCII characters in "%s"' % value) - if len(data) > self.staticSize: - log.warning( - 'truncating overlong "%s" to %d bytes' % (value, self.staticSize) - ) - data = (data + b"\0" * self.staticSize)[: self.staticSize] - writer.writeData(data) - - -class Struct(BaseConverter): - def getRecordSize(self, reader): - return self.tableClass and self.tableClass.getRecordSize(reader) - - def read(self, reader, font, tableDict): - table = self.tableClass() - table.decompile(reader, font) - return table - - def write(self, writer, font, tableDict, value, repeatIndex=None): - value.compile(writer, font) - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - if value is None: - if attrs: - # If there are attributes (probably index), then - # don't drop this even if it's NULL. It will mess - # up the array indices of the containing element. - xmlWriter.simpletag(name, attrs + [("empty", 1)]) - xmlWriter.newline() - else: - pass # NULL table, ignore - else: - value.toXML(xmlWriter, font, attrs, name=name) - - def xmlRead(self, attrs, content, font): - if "empty" in attrs and safeEval(attrs["empty"]): - return None - table = self.tableClass() - Format = attrs.get("Format") - if Format is not None: - table.Format = int(Format) - - noPostRead = not hasattr(table, "postRead") - if noPostRead: - # TODO Cache table.hasPropagated. 
- cleanPropagation = False - for conv in table.getConverters(): - if conv.isPropagated: - cleanPropagation = True - if not hasattr(font, "_propagator"): - font._propagator = {} - propagator = font._propagator - assert conv.name not in propagator, (conv.name, propagator) - setattr(table, conv.name, None) - propagator[conv.name] = CountReference(table.__dict__, conv.name) - - for element in content: - if isinstance(element, tuple): - name, attrs, content = element - table.fromXML(name, attrs, content, font) - else: - pass - - table.populateDefaults(propagator=getattr(font, "_propagator", None)) - - if noPostRead: - if cleanPropagation: - for conv in table.getConverters(): - if conv.isPropagated: - propagator = font._propagator - del propagator[conv.name] - if not propagator: - del font._propagator - - return table - - def __repr__(self): - return "Struct of " + repr(self.tableClass) - - -class StructWithLength(Struct): - def read(self, reader, font, tableDict): - pos = reader.pos - table = self.tableClass() - table.decompile(reader, font) - reader.seek(pos + table.StructLength) - return table - - def write(self, writer, font, tableDict, value, repeatIndex=None): - for convIndex, conv in enumerate(value.getConverters()): - if conv.name == "StructLength": - break - lengthIndex = len(writer.items) + convIndex - if isinstance(value, FormatSwitchingBaseTable): - lengthIndex += 1 # implicit Format field - deadbeef = {1: 0xDE, 2: 0xDEAD, 4: 0xDEADBEEF}[conv.staticSize] - - before = writer.getDataLength() - value.StructLength = deadbeef - value.compile(writer, font) - length = writer.getDataLength() - before - lengthWriter = writer.getSubWriter() - conv.write(lengthWriter, font, tableDict, length) - assert writer.items[lengthIndex] == b"\xde\xad\xbe\xef"[: conv.staticSize] - writer.items[lengthIndex] = lengthWriter.getAllData() - - -class Table(Struct): - - staticSize = 2 - - def readOffset(self, reader): - return reader.readUShort() - - def writeNullOffset(self, writer): - writer.writeUShort(0) - - def read(self, reader, font, tableDict): - offset = self.readOffset(reader) - if offset == 0: - return None - table = self.tableClass() - reader = reader.getSubReader(offset) - if font.lazy: - table.reader = reader - table.font = font - else: - table.decompile(reader, font) - return table - - def write(self, writer, font, tableDict, value, repeatIndex=None): - if value is None: - self.writeNullOffset(writer) - else: - subWriter = writer.getSubWriter(offsetSize=self.staticSize) - subWriter.name = self.name - if repeatIndex is not None: - subWriter.repeatIndex = repeatIndex - writer.writeSubTable(subWriter) - value.compile(subWriter, font) - - -class LTable(Table): - - staticSize = 4 - - def readOffset(self, reader): - return reader.readULong() - - def writeNullOffset(self, writer): - writer.writeULong(0) - - -# Table pointed to by a 24-bit, 3-byte long offset -class Table24(Table): - - staticSize = 3 - - def readOffset(self, reader): - return reader.readUInt24() - - def writeNullOffset(self, writer): - writer.writeUInt24(0) - - -# TODO Clean / merge the SubTable and SubStruct - - -class SubStruct(Struct): - def getConverter(self, tableType, lookupType): - tableClass = self.lookupTypes[tableType][lookupType] - return self.__class__(self.name, self.repeat, self.aux, tableClass) - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - super(SubStruct, self).xmlWrite(xmlWriter, font, value, None, attrs) - - -class SubTable(Table): - def getConverter(self, tableType, lookupType): - tableClass = 
self.lookupTypes[tableType][lookupType] - return self.__class__(self.name, self.repeat, self.aux, tableClass) - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - super(SubTable, self).xmlWrite(xmlWriter, font, value, None, attrs) - - -class ExtSubTable(LTable, SubTable): - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer.Extension = True # actually, mere presence of the field flags it as an Ext Subtable writer. - Table.write(self, writer, font, tableDict, value, repeatIndex) - - -class FeatureParams(Table): - def getConverter(self, featureTag): - tableClass = self.featureParamTypes.get(featureTag, self.defaultFeatureParams) - return self.__class__(self.name, self.repeat, self.aux, tableClass) - - -class ValueFormat(IntValue): - staticSize = 2 - - def __init__(self, name, repeat, aux, tableClass=None, *, description=""): - BaseConverter.__init__( - self, name, repeat, aux, tableClass, description=description - ) - self.which = "ValueFormat" + ("2" if name[-1] == "2" else "1") - - def read(self, reader, font, tableDict): - format = reader.readUShort() - reader[self.which] = ValueRecordFactory(format) - return format - - def write(self, writer, font, tableDict, format, repeatIndex=None): - writer.writeUShort(format) - writer[self.which] = ValueRecordFactory(format) - - -class ValueRecord(ValueFormat): - def getRecordSize(self, reader): - return 2 * len(reader[self.which]) - - def read(self, reader, font, tableDict): - return reader[self.which].readValueRecord(reader, font) - - def write(self, writer, font, tableDict, value, repeatIndex=None): - writer[self.which].writeValueRecord(writer, font, value) - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - if value is None: - pass # NULL table, ignore - else: - value.toXML(xmlWriter, font, self.name, attrs) - - def xmlRead(self, attrs, content, font): - from .otBase import ValueRecord - - value = ValueRecord() - value.fromXML(None, attrs, content, font) - return value - - -class AATLookup(BaseConverter): - BIN_SEARCH_HEADER_SIZE = 10 - - def __init__(self, name, repeat, aux, tableClass, *, description=""): - BaseConverter.__init__( - self, name, repeat, aux, tableClass, description=description - ) - if issubclass(self.tableClass, SimpleValue): - self.converter = self.tableClass(name="Value", repeat=None, aux=None) - else: - self.converter = Table( - name="Value", repeat=None, aux=None, tableClass=self.tableClass - ) - - def read(self, reader, font, tableDict): - format = reader.readUShort() - if format == 0: - return self.readFormat0(reader, font) - elif format == 2: - return self.readFormat2(reader, font) - elif format == 4: - return self.readFormat4(reader, font) - elif format == 6: - return self.readFormat6(reader, font) - elif format == 8: - return self.readFormat8(reader, font) - else: - assert False, "unsupported lookup format: %d" % format - - def write(self, writer, font, tableDict, value, repeatIndex=None): - values = list( - sorted([(font.getGlyphID(glyph), val) for glyph, val in value.items()]) - ) - # TODO: Also implement format 4. - formats = list( - sorted( - filter( - None, - [ - self.buildFormat0(writer, font, values), - self.buildFormat2(writer, font, values), - self.buildFormat6(writer, font, values), - self.buildFormat8(writer, font, values), - ], - ) - ) - ) - # We use the format ID as secondary sort key to make the output - # deterministic when multiple formats have same encoded size. 
- dataSize, lookupFormat, writeMethod = formats[0] - pos = writer.getDataLength() - writeMethod() - actualSize = writer.getDataLength() - pos - assert ( - actualSize == dataSize - ), "AATLookup format %d claimed to write %d bytes, but wrote %d" % ( - lookupFormat, - dataSize, - actualSize, - ) - - @staticmethod - def writeBinSearchHeader(writer, numUnits, unitSize): - writer.writeUShort(unitSize) - writer.writeUShort(numUnits) - searchRange, entrySelector, rangeShift = getSearchRange( - n=numUnits, itemSize=unitSize - ) - writer.writeUShort(searchRange) - writer.writeUShort(entrySelector) - writer.writeUShort(rangeShift) - - def buildFormat0(self, writer, font, values): - numGlyphs = len(font.getGlyphOrder()) - if len(values) != numGlyphs: - return None - valueSize = self.converter.staticSize - return ( - 2 + numGlyphs * valueSize, - 0, - lambda: self.writeFormat0(writer, font, values), - ) - - def writeFormat0(self, writer, font, values): - writer.writeUShort(0) - for glyphID_, value in values: - self.converter.write( - writer, font, tableDict=None, value=value, repeatIndex=None - ) - - def buildFormat2(self, writer, font, values): - segStart, segValue = values[0] - segEnd = segStart - segments = [] - for glyphID, curValue in values[1:]: - if glyphID != segEnd + 1 or curValue != segValue: - segments.append((segStart, segEnd, segValue)) - segStart = segEnd = glyphID - segValue = curValue - else: - segEnd = glyphID - segments.append((segStart, segEnd, segValue)) - valueSize = self.converter.staticSize - numUnits, unitSize = len(segments) + 1, valueSize + 4 - return ( - 2 + self.BIN_SEARCH_HEADER_SIZE + numUnits * unitSize, - 2, - lambda: self.writeFormat2(writer, font, segments), - ) - - def writeFormat2(self, writer, font, segments): - writer.writeUShort(2) - valueSize = self.converter.staticSize - numUnits, unitSize = len(segments), valueSize + 4 - self.writeBinSearchHeader(writer, numUnits, unitSize) - for firstGlyph, lastGlyph, value in segments: - writer.writeUShort(lastGlyph) - writer.writeUShort(firstGlyph) - self.converter.write( - writer, font, tableDict=None, value=value, repeatIndex=None - ) - writer.writeUShort(0xFFFF) - writer.writeUShort(0xFFFF) - writer.writeData(b"\x00" * valueSize) - - def buildFormat6(self, writer, font, values): - valueSize = self.converter.staticSize - numUnits, unitSize = len(values), valueSize + 2 - return ( - 2 + self.BIN_SEARCH_HEADER_SIZE + (numUnits + 1) * unitSize, - 6, - lambda: self.writeFormat6(writer, font, values), - ) - - def writeFormat6(self, writer, font, values): - writer.writeUShort(6) - valueSize = self.converter.staticSize - numUnits, unitSize = len(values), valueSize + 2 - self.writeBinSearchHeader(writer, numUnits, unitSize) - for glyphID, value in values: - writer.writeUShort(glyphID) - self.converter.write( - writer, font, tableDict=None, value=value, repeatIndex=None - ) - writer.writeUShort(0xFFFF) - writer.writeData(b"\x00" * valueSize) - - def buildFormat8(self, writer, font, values): - minGlyphID, maxGlyphID = values[0][0], values[-1][0] - if len(values) != maxGlyphID - minGlyphID + 1: - return None - valueSize = self.converter.staticSize - return ( - 6 + len(values) * valueSize, - 8, - lambda: self.writeFormat8(writer, font, values), - ) - - def writeFormat8(self, writer, font, values): - firstGlyphID = values[0][0] - writer.writeUShort(8) - writer.writeUShort(firstGlyphID) - writer.writeUShort(len(values)) - for _, value in values: - self.converter.write( - writer, font, tableDict=None, value=value, repeatIndex=None - ) - - 
def readFormat0(self, reader, font): - numGlyphs = len(font.getGlyphOrder()) - data = self.converter.readArray(reader, font, tableDict=None, count=numGlyphs) - return {font.getGlyphName(k): value for k, value in enumerate(data)} - - def readFormat2(self, reader, font): - mapping = {} - pos = reader.pos - 2 # start of table is at UShort for format - unitSize, numUnits = reader.readUShort(), reader.readUShort() - assert unitSize >= 4 + self.converter.staticSize, unitSize - for i in range(numUnits): - reader.seek(pos + i * unitSize + 12) - last = reader.readUShort() - first = reader.readUShort() - value = self.converter.read(reader, font, tableDict=None) - if last != 0xFFFF: - for k in range(first, last + 1): - mapping[font.getGlyphName(k)] = value - return mapping - - def readFormat4(self, reader, font): - mapping = {} - pos = reader.pos - 2 # start of table is at UShort for format - unitSize = reader.readUShort() - assert unitSize >= 6, unitSize - for i in range(reader.readUShort()): - reader.seek(pos + i * unitSize + 12) - last = reader.readUShort() - first = reader.readUShort() - offset = reader.readUShort() - if last != 0xFFFF: - dataReader = reader.getSubReader(0) # relative to current position - dataReader.seek(pos + offset) # relative to start of table - data = self.converter.readArray( - dataReader, font, tableDict=None, count=last - first + 1 - ) - for k, v in enumerate(data): - mapping[font.getGlyphName(first + k)] = v - return mapping - - def readFormat6(self, reader, font): - mapping = {} - pos = reader.pos - 2 # start of table is at UShort for format - unitSize = reader.readUShort() - assert unitSize >= 2 + self.converter.staticSize, unitSize - for i in range(reader.readUShort()): - reader.seek(pos + i * unitSize + 12) - glyphID = reader.readUShort() - value = self.converter.read(reader, font, tableDict=None) - if glyphID != 0xFFFF: - mapping[font.getGlyphName(glyphID)] = value - return mapping - - def readFormat8(self, reader, font): - first = reader.readUShort() - count = reader.readUShort() - data = self.converter.readArray(reader, font, tableDict=None, count=count) - return {font.getGlyphName(first + k): value for (k, value) in enumerate(data)} - - def xmlRead(self, attrs, content, font): - value = {} - for element in content: - if isinstance(element, tuple): - name, a, eltContent = element - if name == "Lookup": - value[a["glyph"]] = self.converter.xmlRead(a, eltContent, font) - return value - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - xmlWriter.begintag(name, attrs) - xmlWriter.newline() - for glyph, value in sorted(value.items()): - self.converter.xmlWrite( - xmlWriter, font, value=value, name="Lookup", attrs=[("glyph", glyph)] - ) - xmlWriter.endtag(name) - xmlWriter.newline() - - -# The AAT 'ankr' table has an unusual structure: An offset to an AATLookup -# followed by an offset to a glyph data table. Other than usual, the -# offsets in the AATLookup are not relative to the beginning of -# the beginning of the 'ankr' table, but relative to the glyph data table. -# So, to find the anchor data for a glyph, one needs to add the offset -# to the data table to the offset found in the AATLookup, and then use -# the sum of these two offsets to find the actual data. 
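# Editor's note: an illustrative sketch, not part of the deleted fontTools file.
# It spells out the two-level 'ankr' offset arithmetic described above, with
# made-up numbers: the per-glyph offset found in the AATLookup is relative to
# the glyph data table, so it is added to the data-table offset from the header.
ankr_data_table_offset = 0x30        # hypothetical offset to the glyph data table
per_glyph_lookup_offset = 0x10       # hypothetical value stored in the AATLookup
anchor_data_position = ankr_data_table_offset + per_glyph_lookup_offset
assert anchor_data_position == 0x40  # mirrors reader.getSubReader(offset + dataOffset)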
-class AATLookupWithDataOffset(BaseConverter): - def read(self, reader, font, tableDict): - lookupOffset = reader.readULong() - dataOffset = reader.readULong() - lookupReader = reader.getSubReader(lookupOffset) - lookup = AATLookup("DataOffsets", None, None, UShort) - offsets = lookup.read(lookupReader, font, tableDict) - result = {} - for glyph, offset in offsets.items(): - dataReader = reader.getSubReader(offset + dataOffset) - item = self.tableClass() - item.decompile(dataReader, font) - result[glyph] = item - return result - - def write(self, writer, font, tableDict, value, repeatIndex=None): - # We do not work with OTTableWriter sub-writers because - # the offsets in our AATLookup are relative to our data - # table, for which we need to provide an offset value itself. - # It might have been possible to somehow make a kludge for - # performing this indirect offset computation directly inside - # OTTableWriter. But this would have made the internal logic - # of OTTableWriter even more complex than it already is, - # so we decided to roll our own offset computation for the - # contents of the AATLookup and associated data table. - offsetByGlyph, offsetByData, dataLen = {}, {}, 0 - compiledData = [] - for glyph in sorted(value, key=font.getGlyphID): - subWriter = OTTableWriter() - value[glyph].compile(subWriter, font) - data = subWriter.getAllData() - offset = offsetByData.get(data, None) - if offset == None: - offset = dataLen - dataLen = dataLen + len(data) - offsetByData[data] = offset - compiledData.append(data) - offsetByGlyph[glyph] = offset - # For calculating the offsets to our AATLookup and data table, - # we can use the regular OTTableWriter infrastructure. - lookupWriter = writer.getSubWriter(offsetSize=4) - lookup = AATLookup("DataOffsets", None, None, UShort) - lookup.write(lookupWriter, font, tableDict, offsetByGlyph, None) - - dataWriter = writer.getSubWriter(offsetSize=4) - writer.writeSubTable(lookupWriter) - writer.writeSubTable(dataWriter) - for d in compiledData: - dataWriter.writeData(d) - - def xmlRead(self, attrs, content, font): - lookup = AATLookup("DataOffsets", None, None, self.tableClass) - return lookup.xmlRead(attrs, content, font) - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - lookup = AATLookup("DataOffsets", None, None, self.tableClass) - lookup.xmlWrite(xmlWriter, font, value, name, attrs) - - -class MorxSubtableConverter(BaseConverter): - _PROCESSING_ORDERS = { - # bits 30 and 28 of morx.CoverageFlags; see morx spec - (False, False): "LayoutOrder", - (True, False): "ReversedLayoutOrder", - (False, True): "LogicalOrder", - (True, True): "ReversedLogicalOrder", - } - - _PROCESSING_ORDERS_REVERSED = {val: key for key, val in _PROCESSING_ORDERS.items()} - - def __init__(self, name, repeat, aux, tableClass=None, *, description=""): - BaseConverter.__init__( - self, name, repeat, aux, tableClass, description=description - ) - - def _setTextDirectionFromCoverageFlags(self, flags, subtable): - if (flags & 0x20) != 0: - subtable.TextDirection = "Any" - elif (flags & 0x80) != 0: - subtable.TextDirection = "Vertical" - else: - subtable.TextDirection = "Horizontal" - - def read(self, reader, font, tableDict): - pos = reader.pos - m = MorxSubtable() - m.StructLength = reader.readULong() - flags = reader.readUInt8() - orderKey = ((flags & 0x40) != 0, (flags & 0x10) != 0) - m.ProcessingOrder = self._PROCESSING_ORDERS[orderKey] - self._setTextDirectionFromCoverageFlags(flags, m) - m.Reserved = reader.readUShort() - m.Reserved |= (flags & 0xF) << 16 - 
m.MorphType = reader.readUInt8() - m.SubFeatureFlags = reader.readULong() - tableClass = lookupTypes["morx"].get(m.MorphType) - if tableClass is None: - assert False, "unsupported 'morx' lookup type %s" % m.MorphType - # To decode AAT ligatures, we need to know the subtable size. - # The easiest way to pass this along is to create a new reader - # that works on just the subtable as its data. - headerLength = reader.pos - pos - data = reader.data[reader.pos : reader.pos + m.StructLength - headerLength] - assert len(data) == m.StructLength - headerLength - subReader = OTTableReader(data=data, tableTag=reader.tableTag) - m.SubStruct = tableClass() - m.SubStruct.decompile(subReader, font) - reader.seek(pos + m.StructLength) - return m - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - xmlWriter.begintag(name, attrs) - xmlWriter.newline() - xmlWriter.comment("StructLength=%d" % value.StructLength) - xmlWriter.newline() - xmlWriter.simpletag("TextDirection", value=value.TextDirection) - xmlWriter.newline() - xmlWriter.simpletag("ProcessingOrder", value=value.ProcessingOrder) - xmlWriter.newline() - if value.Reserved != 0: - xmlWriter.simpletag("Reserved", value="0x%04x" % value.Reserved) - xmlWriter.newline() - xmlWriter.comment("MorphType=%d" % value.MorphType) - xmlWriter.newline() - xmlWriter.simpletag("SubFeatureFlags", value="0x%08x" % value.SubFeatureFlags) - xmlWriter.newline() - value.SubStruct.toXML(xmlWriter, font) - xmlWriter.endtag(name) - xmlWriter.newline() - - def xmlRead(self, attrs, content, font): - m = MorxSubtable() - covFlags = 0 - m.Reserved = 0 - for eltName, eltAttrs, eltContent in filter(istuple, content): - if eltName == "CoverageFlags": - # Only in XML from old versions of fonttools. - covFlags = safeEval(eltAttrs["value"]) - orderKey = ((covFlags & 0x40) != 0, (covFlags & 0x10) != 0) - m.ProcessingOrder = self._PROCESSING_ORDERS[orderKey] - self._setTextDirectionFromCoverageFlags(covFlags, m) - elif eltName == "ProcessingOrder": - m.ProcessingOrder = eltAttrs["value"] - assert m.ProcessingOrder in self._PROCESSING_ORDERS_REVERSED, ( - "unknown ProcessingOrder: %s" % m.ProcessingOrder - ) - elif eltName == "TextDirection": - m.TextDirection = eltAttrs["value"] - assert m.TextDirection in {"Horizontal", "Vertical", "Any"}, ( - "unknown TextDirection %s" % m.TextDirection - ) - elif eltName == "Reserved": - m.Reserved = safeEval(eltAttrs["value"]) - elif eltName == "SubFeatureFlags": - m.SubFeatureFlags = safeEval(eltAttrs["value"]) - elif eltName.endswith("Morph"): - m.fromXML(eltName, eltAttrs, eltContent, font) - else: - assert False, eltName - m.Reserved = (covFlags & 0xF) << 16 | m.Reserved - return m - - def write(self, writer, font, tableDict, value, repeatIndex=None): - covFlags = (value.Reserved & 0x000F0000) >> 16 - reverseOrder, logicalOrder = self._PROCESSING_ORDERS_REVERSED[ - value.ProcessingOrder - ] - covFlags |= 0x80 if value.TextDirection == "Vertical" else 0 - covFlags |= 0x40 if reverseOrder else 0 - covFlags |= 0x20 if value.TextDirection == "Any" else 0 - covFlags |= 0x10 if logicalOrder else 0 - value.CoverageFlags = covFlags - lengthIndex = len(writer.items) - before = writer.getDataLength() - value.StructLength = 0xDEADBEEF - # The high nibble of value.Reserved is actuallly encoded - # into coverageFlags, so we need to clear it here. 
- origReserved = value.Reserved # including high nibble - value.Reserved = value.Reserved & 0xFFFF # without high nibble - value.compile(writer, font) - value.Reserved = origReserved # restore original value - assert writer.items[lengthIndex] == b"\xde\xad\xbe\xef" - length = writer.getDataLength() - before - writer.items[lengthIndex] = struct.pack(">L", length) - - -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6Tables.html#ExtendedStateHeader -# TODO: Untangle the implementation of the various lookup-specific formats. -class STXHeader(BaseConverter): - def __init__(self, name, repeat, aux, tableClass, *, description=""): - BaseConverter.__init__( - self, name, repeat, aux, tableClass, description=description - ) - assert issubclass(self.tableClass, AATAction) - self.classLookup = AATLookup("GlyphClasses", None, None, UShort) - if issubclass(self.tableClass, ContextualMorphAction): - self.perGlyphLookup = AATLookup("PerGlyphLookup", None, None, GlyphID) - else: - self.perGlyphLookup = None - - def read(self, reader, font, tableDict): - table = AATStateTable() - pos = reader.pos - classTableReader = reader.getSubReader(0) - stateArrayReader = reader.getSubReader(0) - entryTableReader = reader.getSubReader(0) - actionReader = None - ligaturesReader = None - table.GlyphClassCount = reader.readULong() - classTableReader.seek(pos + reader.readULong()) - stateArrayReader.seek(pos + reader.readULong()) - entryTableReader.seek(pos + reader.readULong()) - if self.perGlyphLookup is not None: - perGlyphTableReader = reader.getSubReader(0) - perGlyphTableReader.seek(pos + reader.readULong()) - if issubclass(self.tableClass, LigatureMorphAction): - actionReader = reader.getSubReader(0) - actionReader.seek(pos + reader.readULong()) - ligComponentReader = reader.getSubReader(0) - ligComponentReader.seek(pos + reader.readULong()) - ligaturesReader = reader.getSubReader(0) - ligaturesReader.seek(pos + reader.readULong()) - numLigComponents = (ligaturesReader.pos - ligComponentReader.pos) // 2 - assert numLigComponents >= 0 - table.LigComponents = ligComponentReader.readUShortArray(numLigComponents) - table.Ligatures = self._readLigatures(ligaturesReader, font) - elif issubclass(self.tableClass, InsertionMorphAction): - actionReader = reader.getSubReader(0) - actionReader.seek(pos + reader.readULong()) - table.GlyphClasses = self.classLookup.read(classTableReader, font, tableDict) - numStates = int( - (entryTableReader.pos - stateArrayReader.pos) / (table.GlyphClassCount * 2) - ) - for stateIndex in range(numStates): - state = AATState() - table.States.append(state) - for glyphClass in range(table.GlyphClassCount): - entryIndex = stateArrayReader.readUShort() - state.Transitions[glyphClass] = self._readTransition( - entryTableReader, entryIndex, font, actionReader - ) - if self.perGlyphLookup is not None: - table.PerGlyphLookups = self._readPerGlyphLookups( - table, perGlyphTableReader, font - ) - return table - - def _readTransition(self, reader, entryIndex, font, actionReader): - transition = self.tableClass() - entryReader = reader.getSubReader( - reader.pos + entryIndex * transition.staticSize - ) - transition.decompile(entryReader, font, actionReader) - return transition - - def _readLigatures(self, reader, font): - limit = len(reader.data) - numLigatureGlyphs = (limit - reader.pos) // 2 - return font.getGlyphNameMany(reader.readUShortArray(numLigatureGlyphs)) - - def _countPerGlyphLookups(self, table): - # Somewhat annoyingly, the morx table does not encode - # the size of 
the per-glyph table. So we need to find - # the maximum value that MorphActions use as index - # into this table. - numLookups = 0 - for state in table.States: - for t in state.Transitions.values(): - if isinstance(t, ContextualMorphAction): - if t.MarkIndex != 0xFFFF: - numLookups = max(numLookups, t.MarkIndex + 1) - if t.CurrentIndex != 0xFFFF: - numLookups = max(numLookups, t.CurrentIndex + 1) - return numLookups - - def _readPerGlyphLookups(self, table, reader, font): - pos = reader.pos - lookups = [] - for _ in range(self._countPerGlyphLookups(table)): - lookupReader = reader.getSubReader(0) - lookupReader.seek(pos + reader.readULong()) - lookups.append(self.perGlyphLookup.read(lookupReader, font, {})) - return lookups - - def write(self, writer, font, tableDict, value, repeatIndex=None): - glyphClassWriter = OTTableWriter() - self.classLookup.write( - glyphClassWriter, font, tableDict, value.GlyphClasses, repeatIndex=None - ) - glyphClassData = pad(glyphClassWriter.getAllData(), 2) - glyphClassCount = max(value.GlyphClasses.values()) + 1 - glyphClassTableOffset = 16 # size of STXHeader - if self.perGlyphLookup is not None: - glyphClassTableOffset += 4 - - glyphClassTableOffset += self.tableClass.actionHeaderSize - actionData, actionIndex = self.tableClass.compileActions(font, value.States) - stateArrayData, entryTableData = self._compileStates( - font, value.States, glyphClassCount, actionIndex - ) - stateArrayOffset = glyphClassTableOffset + len(glyphClassData) - entryTableOffset = stateArrayOffset + len(stateArrayData) - perGlyphOffset = entryTableOffset + len(entryTableData) - perGlyphData = pad(self._compilePerGlyphLookups(value, font), 4) - if actionData is not None: - actionOffset = entryTableOffset + len(entryTableData) - else: - actionOffset = None - - ligaturesOffset, ligComponentsOffset = None, None - ligComponentsData = self._compileLigComponents(value, font) - ligaturesData = self._compileLigatures(value, font) - if ligComponentsData is not None: - assert len(perGlyphData) == 0 - ligComponentsOffset = actionOffset + len(actionData) - ligaturesOffset = ligComponentsOffset + len(ligComponentsData) - - writer.writeULong(glyphClassCount) - writer.writeULong(glyphClassTableOffset) - writer.writeULong(stateArrayOffset) - writer.writeULong(entryTableOffset) - if self.perGlyphLookup is not None: - writer.writeULong(perGlyphOffset) - if actionOffset is not None: - writer.writeULong(actionOffset) - if ligComponentsOffset is not None: - writer.writeULong(ligComponentsOffset) - writer.writeULong(ligaturesOffset) - writer.writeData(glyphClassData) - writer.writeData(stateArrayData) - writer.writeData(entryTableData) - writer.writeData(perGlyphData) - if actionData is not None: - writer.writeData(actionData) - if ligComponentsData is not None: - writer.writeData(ligComponentsData) - if ligaturesData is not None: - writer.writeData(ligaturesData) - - def _compileStates(self, font, states, glyphClassCount, actionIndex): - stateArrayWriter = OTTableWriter() - entries, entryIDs = [], {} - for state in states: - for glyphClass in range(glyphClassCount): - transition = state.Transitions[glyphClass] - entryWriter = OTTableWriter() - transition.compile(entryWriter, font, actionIndex) - entryData = entryWriter.getAllData() - assert ( - len(entryData) == transition.staticSize - ), "%s has staticSize %d, " "but actually wrote %d bytes" % ( - repr(transition), - transition.staticSize, - len(entryData), - ) - entryIndex = entryIDs.get(entryData) - if entryIndex is None: - entryIndex = len(entries) 
- entryIDs[entryData] = entryIndex - entries.append(entryData) - stateArrayWriter.writeUShort(entryIndex) - stateArrayData = pad(stateArrayWriter.getAllData(), 4) - entryTableData = pad(bytesjoin(entries), 4) - return stateArrayData, entryTableData - - def _compilePerGlyphLookups(self, table, font): - if self.perGlyphLookup is None: - return b"" - numLookups = self._countPerGlyphLookups(table) - assert len(table.PerGlyphLookups) == numLookups, ( - "len(AATStateTable.PerGlyphLookups) is %d, " - "but the actions inside the table refer to %d" - % (len(table.PerGlyphLookups), numLookups) - ) - writer = OTTableWriter() - for lookup in table.PerGlyphLookups: - lookupWriter = writer.getSubWriter(offsetSize=4) - self.perGlyphLookup.write(lookupWriter, font, {}, lookup, None) - writer.writeSubTable(lookupWriter) - return writer.getAllData() - - def _compileLigComponents(self, table, font): - if not hasattr(table, "LigComponents"): - return None - writer = OTTableWriter() - for component in table.LigComponents: - writer.writeUShort(component) - return writer.getAllData() - - def _compileLigatures(self, table, font): - if not hasattr(table, "Ligatures"): - return None - writer = OTTableWriter() - for glyphName in table.Ligatures: - writer.writeUShort(font.getGlyphID(glyphName)) - return writer.getAllData() - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - xmlWriter.begintag(name, attrs) - xmlWriter.newline() - xmlWriter.comment("GlyphClassCount=%s" % value.GlyphClassCount) - xmlWriter.newline() - for g, klass in sorted(value.GlyphClasses.items()): - xmlWriter.simpletag("GlyphClass", glyph=g, value=klass) - xmlWriter.newline() - for stateIndex, state in enumerate(value.States): - xmlWriter.begintag("State", index=stateIndex) - xmlWriter.newline() - for glyphClass, trans in sorted(state.Transitions.items()): - trans.toXML( - xmlWriter, - font=font, - attrs={"onGlyphClass": glyphClass}, - name="Transition", - ) - xmlWriter.endtag("State") - xmlWriter.newline() - for i, lookup in enumerate(value.PerGlyphLookups): - xmlWriter.begintag("PerGlyphLookup", index=i) - xmlWriter.newline() - for glyph, val in sorted(lookup.items()): - xmlWriter.simpletag("Lookup", glyph=glyph, value=val) - xmlWriter.newline() - xmlWriter.endtag("PerGlyphLookup") - xmlWriter.newline() - if hasattr(value, "LigComponents"): - xmlWriter.begintag("LigComponents") - xmlWriter.newline() - for i, val in enumerate(getattr(value, "LigComponents")): - xmlWriter.simpletag("LigComponent", index=i, value=val) - xmlWriter.newline() - xmlWriter.endtag("LigComponents") - xmlWriter.newline() - self._xmlWriteLigatures(xmlWriter, font, value, name, attrs) - xmlWriter.endtag(name) - xmlWriter.newline() - - def _xmlWriteLigatures(self, xmlWriter, font, value, name, attrs): - if not hasattr(value, "Ligatures"): - return - xmlWriter.begintag("Ligatures") - xmlWriter.newline() - for i, g in enumerate(getattr(value, "Ligatures")): - xmlWriter.simpletag("Ligature", index=i, glyph=g) - xmlWriter.newline() - xmlWriter.endtag("Ligatures") - xmlWriter.newline() - - def xmlRead(self, attrs, content, font): - table = AATStateTable() - for eltName, eltAttrs, eltContent in filter(istuple, content): - if eltName == "GlyphClass": - glyph = eltAttrs["glyph"] - value = eltAttrs["value"] - table.GlyphClasses[glyph] = safeEval(value) - elif eltName == "State": - state = self._xmlReadState(eltAttrs, eltContent, font) - table.States.append(state) - elif eltName == "PerGlyphLookup": - lookup = self.perGlyphLookup.xmlRead(eltAttrs, eltContent, font) - 
table.PerGlyphLookups.append(lookup) - elif eltName == "LigComponents": - table.LigComponents = self._xmlReadLigComponents( - eltAttrs, eltContent, font - ) - elif eltName == "Ligatures": - table.Ligatures = self._xmlReadLigatures(eltAttrs, eltContent, font) - table.GlyphClassCount = max(table.GlyphClasses.values()) + 1 - return table - - def _xmlReadState(self, attrs, content, font): - state = AATState() - for eltName, eltAttrs, eltContent in filter(istuple, content): - if eltName == "Transition": - glyphClass = safeEval(eltAttrs["onGlyphClass"]) - transition = self.tableClass() - transition.fromXML(eltName, eltAttrs, eltContent, font) - state.Transitions[glyphClass] = transition - return state - - def _xmlReadLigComponents(self, attrs, content, font): - ligComponents = [] - for eltName, eltAttrs, _eltContent in filter(istuple, content): - if eltName == "LigComponent": - ligComponents.append(safeEval(eltAttrs["value"])) - return ligComponents - - def _xmlReadLigatures(self, attrs, content, font): - ligs = [] - for eltName, eltAttrs, _eltContent in filter(istuple, content): - if eltName == "Ligature": - ligs.append(eltAttrs["glyph"]) - return ligs - - -class CIDGlyphMap(BaseConverter): - def read(self, reader, font, tableDict): - numCIDs = reader.readUShort() - result = {} - for cid, glyphID in enumerate(reader.readUShortArray(numCIDs)): - if glyphID != 0xFFFF: - result[cid] = font.getGlyphName(glyphID) - return result - - def write(self, writer, font, tableDict, value, repeatIndex=None): - items = {cid: font.getGlyphID(glyph) for cid, glyph in value.items()} - count = max(items) + 1 if items else 0 - writer.writeUShort(count) - for cid in range(count): - writer.writeUShort(items.get(cid, 0xFFFF)) - - def xmlRead(self, attrs, content, font): - result = {} - for eName, eAttrs, _eContent in filter(istuple, content): - if eName == "CID": - result[safeEval(eAttrs["cid"])] = eAttrs["glyph"].strip() - return result - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - xmlWriter.begintag(name, attrs) - xmlWriter.newline() - for cid, glyph in sorted(value.items()): - if glyph is not None and glyph != 0xFFFF: - xmlWriter.simpletag("CID", cid=cid, glyph=glyph) - xmlWriter.newline() - xmlWriter.endtag(name) - xmlWriter.newline() - - -class GlyphCIDMap(BaseConverter): - def read(self, reader, font, tableDict): - glyphOrder = font.getGlyphOrder() - count = reader.readUShort() - cids = reader.readUShortArray(count) - if count > len(glyphOrder): - log.warning( - "GlyphCIDMap has %d elements, " - "but the font has only %d glyphs; " - "ignoring the rest" % (count, len(glyphOrder)) - ) - result = {} - for glyphID in range(min(len(cids), len(glyphOrder))): - cid = cids[glyphID] - if cid != 0xFFFF: - result[glyphOrder[glyphID]] = cid - return result - - def write(self, writer, font, tableDict, value, repeatIndex=None): - items = { - font.getGlyphID(g): cid - for g, cid in value.items() - if cid is not None and cid != 0xFFFF - } - count = max(items) + 1 if items else 0 - writer.writeUShort(count) - for glyphID in range(count): - writer.writeUShort(items.get(glyphID, 0xFFFF)) - - def xmlRead(self, attrs, content, font): - result = {} - for eName, eAttrs, _eContent in filter(istuple, content): - if eName == "CID": - result[eAttrs["glyph"]] = safeEval(eAttrs["value"]) - return result - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - xmlWriter.begintag(name, attrs) - xmlWriter.newline() - for glyph, cid in sorted(value.items()): - if cid is not None and cid != 0xFFFF: - 
xmlWriter.simpletag("CID", glyph=glyph, value=cid) - xmlWriter.newline() - xmlWriter.endtag(name) - xmlWriter.newline() - - -class DeltaValue(BaseConverter): - def read(self, reader, font, tableDict): - StartSize = tableDict["StartSize"] - EndSize = tableDict["EndSize"] - DeltaFormat = tableDict["DeltaFormat"] - assert DeltaFormat in (1, 2, 3), "illegal DeltaFormat" - nItems = EndSize - StartSize + 1 - nBits = 1 << DeltaFormat - minusOffset = 1 << nBits - mask = (1 << nBits) - 1 - signMask = 1 << (nBits - 1) - - DeltaValue = [] - tmp, shift = 0, 0 - for i in range(nItems): - if shift == 0: - tmp, shift = reader.readUShort(), 16 - shift = shift - nBits - value = (tmp >> shift) & mask - if value & signMask: - value = value - minusOffset - DeltaValue.append(value) - return DeltaValue - - def write(self, writer, font, tableDict, value, repeatIndex=None): - StartSize = tableDict["StartSize"] - EndSize = tableDict["EndSize"] - DeltaFormat = tableDict["DeltaFormat"] - DeltaValue = value - assert DeltaFormat in (1, 2, 3), "illegal DeltaFormat" - nItems = EndSize - StartSize + 1 - nBits = 1 << DeltaFormat - assert len(DeltaValue) == nItems - mask = (1 << nBits) - 1 - - tmp, shift = 0, 16 - for value in DeltaValue: - shift = shift - nBits - tmp = tmp | ((value & mask) << shift) - if shift == 0: - writer.writeUShort(tmp) - tmp, shift = 0, 16 - if shift != 16: - writer.writeUShort(tmp) - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - xmlWriter.simpletag(name, attrs + [("value", value)]) - xmlWriter.newline() - - def xmlRead(self, attrs, content, font): - return safeEval(attrs["value"]) - - -class VarIdxMapValue(BaseConverter): - def read(self, reader, font, tableDict): - fmt = tableDict["EntryFormat"] - nItems = tableDict["MappingCount"] - - innerBits = 1 + (fmt & 0x000F) - innerMask = (1 << innerBits) - 1 - outerMask = 0xFFFFFFFF - innerMask - outerShift = 16 - innerBits - - entrySize = 1 + ((fmt & 0x0030) >> 4) - readArray = { - 1: reader.readUInt8Array, - 2: reader.readUShortArray, - 3: reader.readUInt24Array, - 4: reader.readULongArray, - }[entrySize] - - return [ - (((raw & outerMask) << outerShift) | (raw & innerMask)) - for raw in readArray(nItems) - ] - - def write(self, writer, font, tableDict, value, repeatIndex=None): - fmt = tableDict["EntryFormat"] - mapping = value - writer["MappingCount"].setValue(len(mapping)) - - innerBits = 1 + (fmt & 0x000F) - innerMask = (1 << innerBits) - 1 - outerShift = 16 - innerBits - - entrySize = 1 + ((fmt & 0x0030) >> 4) - writeArray = { - 1: writer.writeUInt8Array, - 2: writer.writeUShortArray, - 3: writer.writeUInt24Array, - 4: writer.writeULongArray, - }[entrySize] - - writeArray( - [ - (((idx & 0xFFFF0000) >> outerShift) | (idx & innerMask)) - for idx in mapping - ] - ) - - -class VarDataValue(BaseConverter): - def read(self, reader, font, tableDict): - values = [] - - regionCount = tableDict["VarRegionCount"] - wordCount = tableDict["NumShorts"] - - # https://github.com/fonttools/fonttools/issues/2279 - longWords = bool(wordCount & 0x8000) - wordCount = wordCount & 0x7FFF - - if longWords: - readBigArray, readSmallArray = reader.readLongArray, reader.readShortArray - else: - readBigArray, readSmallArray = reader.readShortArray, reader.readInt8Array - - n1, n2 = min(regionCount, wordCount), max(regionCount, wordCount) - values.extend(readBigArray(n1)) - values.extend(readSmallArray(n2 - n1)) - if n2 > regionCount: # Padding - del values[regionCount:] - - return values - - def write(self, writer, font, tableDict, values, repeatIndex=None): 
- regionCount = tableDict["VarRegionCount"] - wordCount = tableDict["NumShorts"] - - # https://github.com/fonttools/fonttools/issues/2279 - longWords = bool(wordCount & 0x8000) - wordCount = wordCount & 0x7FFF - - (writeBigArray, writeSmallArray) = { - False: (writer.writeShortArray, writer.writeInt8Array), - True: (writer.writeLongArray, writer.writeShortArray), - }[longWords] - - n1, n2 = min(regionCount, wordCount), max(regionCount, wordCount) - writeBigArray(values[:n1]) - writeSmallArray(values[n1:regionCount]) - if n2 > regionCount: # Padding - writer.writeSmallArray([0] * (n2 - regionCount)) - - def xmlWrite(self, xmlWriter, font, value, name, attrs): - xmlWriter.simpletag(name, attrs + [("value", value)]) - xmlWriter.newline() - - def xmlRead(self, attrs, content, font): - return safeEval(attrs["value"]) - - -class LookupFlag(UShort): - def xmlWrite(self, xmlWriter, font, value, name, attrs): - xmlWriter.simpletag(name, attrs + [("value", value)]) - flags = [] - if value & 0x01: - flags.append("rightToLeft") - if value & 0x02: - flags.append("ignoreBaseGlyphs") - if value & 0x04: - flags.append("ignoreLigatures") - if value & 0x08: - flags.append("ignoreMarks") - if value & 0x10: - flags.append("useMarkFilteringSet") - if value & 0xFF00: - flags.append("markAttachmentType[%i]" % (value >> 8)) - if flags: - xmlWriter.comment(" ".join(flags)) - xmlWriter.newline() - - -class _UInt8Enum(UInt8): - enumClass = NotImplemented - - def read(self, reader, font, tableDict): - return self.enumClass(super().read(reader, font, tableDict)) - - @classmethod - def fromString(cls, value): - return getattr(cls.enumClass, value.upper()) - - @classmethod - def toString(cls, value): - return cls.enumClass(value).name.lower() - - -class ExtendMode(_UInt8Enum): - enumClass = _ExtendMode - - -class CompositeMode(_UInt8Enum): - enumClass = _CompositeMode - - -converterMapping = { - # type class - "int8": Int8, - "int16": Short, - "uint8": UInt8, - "uint16": UShort, - "uint24": UInt24, - "uint32": ULong, - "char64": Char64, - "Flags32": Flags32, - "VarIndex": VarIndex, - "Version": Version, - "Tag": Tag, - "GlyphID": GlyphID, - "GlyphID32": GlyphID32, - "NameID": NameID, - "DeciPoints": DeciPoints, - "Fixed": Fixed, - "F2Dot14": F2Dot14, - "Angle": Angle, - "BiasedAngle": BiasedAngle, - "struct": Struct, - "Offset": Table, - "LOffset": LTable, - "Offset24": Table24, - "ValueRecord": ValueRecord, - "DeltaValue": DeltaValue, - "VarIdxMapValue": VarIdxMapValue, - "VarDataValue": VarDataValue, - "LookupFlag": LookupFlag, - "ExtendMode": ExtendMode, - "CompositeMode": CompositeMode, - "STATFlags": STATFlags, - # AAT - "CIDGlyphMap": CIDGlyphMap, - "GlyphCIDMap": GlyphCIDMap, - "MortChain": StructWithLength, - "MortSubtable": StructWithLength, - "MorxChain": StructWithLength, - "MorxSubtable": MorxSubtableConverter, - # "Template" types - "AATLookup": lambda C: partial(AATLookup, tableClass=C), - "AATLookupWithDataOffset": lambda C: partial(AATLookupWithDataOffset, tableClass=C), - "STXHeader": lambda C: partial(STXHeader, tableClass=C), - "OffsetTo": lambda C: partial(Table, tableClass=C), - "LOffsetTo": lambda C: partial(LTable, tableClass=C), - "LOffset24To": lambda C: partial(Table24, tableClass=C), -} diff --git a/spaces/DaleChen/AutoGPT/autogpt/__init__.py b/spaces/DaleChen/AutoGPT/autogpt/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Dipl0/Dipl0-pepe-diffuser/README.md 
b/spaces/Dipl0/Dipl0-pepe-diffuser/README.md deleted file mode 100644 index 3218bbdc2d8a7953d72781161960d1494253fc4f..0000000000000000000000000000000000000000 --- a/spaces/Dipl0/Dipl0-pepe-diffuser/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dipl0 Pepe Diffuser -emoji: 🦀 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/manipulate.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/manipulate.py deleted file mode 100644 index e1a2480caad8016fea0c06f0bfe521b25f084436..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/manipulate.py +++ /dev/null @@ -1,278 +0,0 @@ - - -import os -import os.path -import pickle -import numpy as np -import tensorflow as tf -from dnnlib import tflib -from global_directions.utils.visualizer import HtmlPageVisualizer - - -def Vis(bname,suffix,out,rownames=None,colnames=None): - num_images=out.shape[0] - step=out.shape[1] - - if colnames is None: - colnames=[f'Step {i:02d}' for i in range(1, step + 1)] - if rownames is None: - rownames=[str(i) for i in range(num_images)] - - - visualizer = HtmlPageVisualizer( - num_rows=num_images, num_cols=step + 1, viz_size=256) - visualizer.set_headers( - ['Name'] +colnames) - - for i in range(num_images): - visualizer.set_cell(i, 0, text=rownames[i]) - - for i in range(num_images): - for k in range(step): - image=out[i,k,:,:,:] - visualizer.set_cell(i, 1+k, image=image) - - # Save results. - visualizer.save(f'./html/'+bname+'_'+suffix+'.html') - - - - -def LoadData(img_path): - tmp=img_path+'S' - with open(tmp, "rb") as fp: #Pickling - s_names,all_s=pickle.load( fp) - dlatents=all_s - - pindexs=[] - mindexs=[] - for i in range(len(s_names)): - name=s_names[i] - if not('ToRGB' in name): - mindexs.append(i) - else: - pindexs.append(i) - - tmp=img_path+'S_mean_std' - with open(tmp, "rb") as fp: #Pickling - m,std=pickle.load( fp) - - return dlatents,s_names,mindexs,pindexs,m,std - - -def LoadModel(model_path,model_name): - # Initialize TensorFlow. - tflib.init_tf() - tmp=os.path.join(model_path,model_name) - with open(tmp, 'rb') as f: - _, _, Gs = pickle.load(f) - Gs.print_layers() - return Gs - -def convert_images_to_uint8(images, drange=[-1,1], nchw_to_nhwc=False): - """Convert a minibatch of images from float32 to uint8 with configurable dynamic range. - Can be used as an output transformation for Network.run(). - """ - if nchw_to_nhwc: - images = np.transpose(images, [0, 2, 3, 1]) - - scale = 255 / (drange[1] - drange[0]) - images = images * scale + (0.5 - drange[0] * scale) - - np.clip(images, 0, 255, out=images) - images=images.astype('uint8') - return images - - -def convert_images_from_uint8(images, drange=[-1,1], nhwc_to_nchw=False): - """Convert a minibatch of images from uint8 to float32 with configurable dynamic range. - Can be used as an input transformation for Network.run(). 
- """ - if nhwc_to_nchw: - images=np.rollaxis(images, 3, 1) - return images/ 255 *(drange[1] - drange[0])+ drange[0] - - -class Manipulator(): - def __init__(self,dataset_name='ffhq'): - self.file_path='./' - self.img_path=self.file_path+'npy/'+dataset_name+'/' - self.model_path=self.file_path+'model/' - self.dataset_name=dataset_name - self.model_name=dataset_name+'.pkl' - - self.alpha=[0] #manipulation strength - self.num_images=10 - self.img_index=0 #which image to start - self.viz_size=256 - self.manipulate_layers=None #which layer to manipulate, list - - self.dlatents,self.s_names,self.mindexs,self.pindexs,self.code_mean,self.code_std=LoadData(self.img_path) - - self.sess=tf.InteractiveSession() - init = tf.global_variables_initializer() - self.sess.run(init) - self.Gs=LoadModel(self.model_path,self.model_name) - self.num_layers=len(self.dlatents) - - self.Vis=Vis - self.noise_constant={} - - for i in range(len(self.s_names)): - tmp1=self.s_names[i].split('/') - if not 'ToRGB' in tmp1: - tmp1[-1]='random_normal:0' - size=int(tmp1[1].split('x')[0]) - tmp1='/'.join(tmp1) - tmp=(1,1,size,size) - self.noise_constant[tmp1]=np.random.random(tmp) - - tmp=self.Gs.components.synthesis.input_shape[1] - d={} - d['G_synthesis_1/dlatents_in:0']=np.zeros([1,tmp,512]) - names=list(self.noise_constant.keys()) - tmp=tflib.run(names,d) - for i in range(len(names)): - self.noise_constant[names[i]]=tmp[i] - - self.fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) - self.img_size=self.Gs.output_shape[-1] - - def GenerateImg(self,codes): - - - num_images,step=codes[0].shape[:2] - - - out=np.zeros((num_images,step,self.img_size,self.img_size,3),dtype='uint8') - for i in range(num_images): - for k in range(step): - d={} - for m in range(len(self.s_names)): - d[self.s_names[m]]=codes[m][i,k][None,:] #need to change - d['G_synthesis_1/4x4/Const/Shape:0']=np.array([1,18, 512], dtype=np.int32) - d.update(self.noise_constant) - img=tflib.run('G_synthesis_1/images_out:0', d) - image=convert_images_to_uint8(img, nchw_to_nhwc=True) - out[i,k,:,:,:]=image[0] - return out - - - - def MSCode(self,dlatent_tmp,boundary_tmp): - - step=len(self.alpha) - dlatent_tmp1=[tmp.reshape((self.num_images,-1)) for tmp in dlatent_tmp] - dlatent_tmp2=[np.tile(tmp[:,None],(1,step,1)) for tmp in dlatent_tmp1] # (10, 7, 512) - - l=np.array(self.alpha) - l=l.reshape( - [step if axis == 1 else 1 for axis in range(dlatent_tmp2[0].ndim)]) - - if type(self.manipulate_layers)==int: - tmp=[self.manipulate_layers] - elif type(self.manipulate_layers)==list: - tmp=self.manipulate_layers - elif self.manipulate_layers is None: - tmp=np.arange(len(boundary_tmp)) - else: - raise ValueError('manipulate_layers is wrong') - - for i in tmp: - dlatent_tmp2[i]+=l*boundary_tmp[i] - - codes=[] - for i in range(len(dlatent_tmp2)): - tmp=list(dlatent_tmp[i].shape) - tmp.insert(1,step) - codes.append(dlatent_tmp2[i].reshape(tmp)) - return codes - - - def EditOne(self,bname,dlatent_tmp=None): - if dlatent_tmp==None: - dlatent_tmp=[tmp[self.img_index:(self.img_index+self.num_images)] for tmp in self.dlatents] - - boundary_tmp=[] - for i in range(len(self.boundary)): - tmp=self.boundary[i] - if len(tmp)<=bname: - boundary_tmp.append([]) - else: - boundary_tmp.append(tmp[bname]) - - codes=self.MSCode(dlatent_tmp,boundary_tmp) - - out=self.GenerateImg(codes) - return codes,out - - def EditOneC(self,cindex,dlatent_tmp=None): - if dlatent_tmp==None: - dlatent_tmp=[tmp[self.img_index:(self.img_index+self.num_images)] for tmp in self.dlatents] - - 
boundary_tmp=[[] for i in range(len(self.dlatents))] - - #'only manipulate 1 layer and one channel' - assert len(self.manipulate_layers)==1 - - ml=self.manipulate_layers[0] - tmp=dlatent_tmp[ml].shape[1] #ada - tmp1=np.zeros(tmp) - tmp1[cindex]=self.code_std[ml][cindex] #1 - boundary_tmp[ml]=tmp1 - - codes=self.MSCode(dlatent_tmp,boundary_tmp) - out=self.GenerateImg(codes) - return codes,out - - - def W2S(self,dlatent_tmp): - - all_s = self.sess.run( - self.s_names, - feed_dict={'G_synthesis_1/dlatents_in:0': dlatent_tmp}) - return all_s - - - - - - - - -#%% -if __name__ == "__main__": - - - M=Manipulator(dataset_name='ffhq') - - - #%% - M.alpha=[-5,0,5] - M.num_images=20 - lindex,cindex=6,501 - - M.manipulate_layers=[lindex] - codes,out=M.EditOneC(cindex) #dlatent_tmp - tmp=str(M.manipulate_layers)+'_'+str(cindex) - M.Vis(tmp,'c',out) - - - - - - - - - - - - - - - - - - - - diff --git a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/bias_act.h b/spaces/DragGan/DragGan-Inversion/torch_utils/ops/bias_act.h deleted file mode 100644 index 60b81c6058d54638a6d74a13046fa388442d767d..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/bias_act.h +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct bias_act_kernel_params -{ - const void* x; // [sizeX] - const void* b; // [sizeB] or NULL - const void* xref; // [sizeX] or NULL - const void* yref; // [sizeX] or NULL - const void* dy; // [sizeX] or NULL - void* y; // [sizeX] - - int grad; - int act; - float alpha; - float gain; - float clamp; - - int sizeX; - int sizeB; - int stepB; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template void* choose_bias_act_kernel(const bias_act_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/DragGan/DragGan/stylegan_human/PP_HumanSeg/pretrained_model/download_pretrained_model.py b/spaces/DragGan/DragGan/stylegan_human/PP_HumanSeg/pretrained_model/download_pretrained_model.py deleted file mode 100644 index 69e3f0910e5c553cc59a3067ac02881a720a474f..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/PP_HumanSeg/pretrained_model/download_pretrained_model.py +++ /dev/null @@ -1,44 +0,0 @@ -# coding: utf8 -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserve. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import sys -import os - -LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) -TEST_PATH = os.path.join(LOCAL_PATH, "../../../", "test") -sys.path.append(TEST_PATH) - -from paddleseg.utils.download import download_file_and_uncompress - -model_urls = { - "pphumanseg_lite_portrait_398x224": - "https://paddleseg.bj.bcebos.com/dygraph/ppseg/ppseg_lite_portrait_398x224.tar.gz", - "deeplabv3p_resnet50_os8_humanseg_512x512_100k": - "https://paddleseg.bj.bcebos.com/dygraph/humanseg/train/deeplabv3p_resnet50_os8_humanseg_512x512_100k.zip", - "fcn_hrnetw18_small_v1_humanseg_192x192": - "https://paddleseg.bj.bcebos.com/dygraph/humanseg/train/fcn_hrnetw18_small_v1_humanseg_192x192.zip", - "pphumanseg_lite_generic_human_192x192": - "https://paddleseg.bj.bcebos.com/dygraph/humanseg/train/pphumanseg_lite_generic_192x192.zip", -} - -if __name__ == "__main__": - for model_name, url in model_urls.items(): - download_file_and_uncompress( - url=url, - savepath=LOCAL_PATH, - extrapath=LOCAL_PATH, - extraname=model_name) - - print("Pretrained model download success!") diff --git a/spaces/DynoKevin/img-cap-for-vision-mate/README.md b/spaces/DynoKevin/img-cap-for-vision-mate/README.md deleted file mode 100644 index a45b40cedae3acbcf0ea690ea4fe5426f3010c0e..0000000000000000000000000000000000000000 --- a/spaces/DynoKevin/img-cap-for-vision-mate/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Img Cap For Vision Mate -emoji: 👀 -colorFrom: green -colorTo: green -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Eddycrack864/Applio-Inference/tools/torchgate/utils.py b/spaces/Eddycrack864/Applio-Inference/tools/torchgate/utils.py deleted file mode 100644 index dc97d45a399c112c76e80cdd8c73cfebaf3ef6ad..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/tools/torchgate/utils.py +++ /dev/null @@ -1,66 +0,0 @@ -import torch -from torch.types import Number - - -@torch.no_grad() -def amp_to_db(x: torch.Tensor, eps=torch.finfo(torch.float64).eps, top_db=40) -> torch.Tensor: - """ - Convert the input tensor from amplitude to decibel scale. - - Arguments: - x {[torch.Tensor]} -- [Input tensor.] - - Keyword Arguments: - eps {[float]} -- [Small value to avoid numerical instability.] - (default: {torch.finfo(torch.float64).eps}) - top_db {[float]} -- [threshold the output at ``top_db`` below the peak] - ` (default: {40}) - - Returns: - [torch.Tensor] -- [Output tensor in decibel scale.] - """ - x_db = 20 * torch.log10(x.abs() + eps) - return torch.max(x_db, (x_db.max(-1).values - top_db).unsqueeze(-1)) - - -@torch.no_grad() -def temperature_sigmoid(x: torch.Tensor, x0: float, temp_coeff: float) -> torch.Tensor: - """ - Apply a sigmoid function with temperature scaling. - - Arguments: - x {[torch.Tensor]} -- [Input tensor.] - x0 {[float]} -- [Parameter that controls the threshold of the sigmoid.] - temp_coeff {[float]} -- [Parameter that controls the slope of the sigmoid.] - - Returns: - [torch.Tensor] -- [Output tensor after applying the sigmoid with temperature scaling.] - """ - return torch.sigmoid((x - x0) / temp_coeff) - - -@torch.no_grad() -def linspace(start: Number, stop: Number, num: int = 50, endpoint: bool = True, **kwargs) -> torch.Tensor: - """ - Generate a linearly spaced 1-D tensor. - - Arguments: - start {[Number]} -- [The starting value of the sequence.] 
- stop {[Number]} -- [The end value of the sequence, unless `endpoint` is set to False. - In that case, the sequence consists of all but the last of ``num + 1`` - evenly spaced samples, so that `stop` is excluded. Note that the step - size changes when `endpoint` is False.] - - Keyword Arguments: - num {[int]} -- [Number of samples to generate. Default is 50. Must be non-negative.] - endpoint {[bool]} -- [If True, `stop` is the last sample. Otherwise, it is not included. - Default is True.] - **kwargs -- [Additional arguments to be passed to the underlying PyTorch `linspace` function.] - - Returns: - [torch.Tensor] -- [1-D tensor of `num` equally spaced samples from `start` to `stop`.] - """ - if endpoint: - return torch.linspace(start, stop, num, **kwargs) - else: - return torch.linspace(start, stop, num + 1, **kwargs)[:-1] diff --git a/spaces/EinsteinCoder/sf-voicebot/app.py b/spaces/EinsteinCoder/sf-voicebot/app.py deleted file mode 100644 index f76e4c62d982f273f8cf028e765b885465a41e8c..0000000000000000000000000000000000000000 --- a/spaces/EinsteinCoder/sf-voicebot/app.py +++ /dev/null @@ -1,207 +0,0 @@ - -from flask import Flask, request, render_template -from twilio.twiml.voice_response import VoiceResponse, Gather -import openai -import csv -import os -import requests -from simple_salesforce import Salesforce -from langchain import OpenAI -from langchain.chat_models import ChatOpenAI -from langchain.chains import LLMChain, ConversationChain -from langchain import PromptTemplate -from langchain import HuggingFaceHub -from langchain.chains.conversation.memory import (ConversationBufferMemory, - ConversationSummaryMemory, - ConversationBufferWindowMemory, - ConversationKGMemory,ConversationSummaryBufferMemory) - -app = Flask(__name__) - -os.environ['OPENAI_API_KEY'] = os.environ.get("OPENAI_KEY") -openai.api_key = os.environ.get("OPENAI_KEY") - -# Set up the LangChain - -template = """Answer the question based on the context below. -Context: You are Lisa, a loyal helpful service agent, appointed for SuperFoods Petcare Company. -Your goal is to ask one question at a time and provide a friendly conversational responses to the customer. -- For Complaints: Ask the product or brnad name they have purchased and when they bought it. -- Next, ask the customer if he wants a refund or return the product. -- For Returns: Tell him about the 10-day return policy, after which it's non-returnable. -- For Refunds: Ask about the mode of refund he wants and clarify him the refunds will happen within 2-3 business days. Do not ask for Bank Details from the customer. -For all complaints, a case for will be created, and the caller will be notified over his registered Email or WhatsApp. -Do not answer anything outside your role or context, and apologize for any unknown questions. 
- -Past Conversations: {chat_history} -Human: {input} -AI: - -""" - -prompt = PromptTemplate( - input_variables=["chat_history", "input"], - template=template -) - -llm35 = ChatOpenAI( - temperature=0, - model_name='gpt-3.5-turbo', - max_tokens=256 -) - -llm30 = OpenAI( - temperature=0, - max_tokens=256, - frequency_penalty=0 -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -conversations = ConversationChain( - prompt=prompt, - llm=llm30, - memory=memory, - verbose=False -) - -# Set up the Salesforce API - -#sf_user = os.environ.get("SF_USER") -#sf_pwd = os.environ.get("SF_PWD") -#sf_token = os.environ.get("SF_TOKEN") -#sf_instance = os.environ.get("SF_INSTANCE") -#sf = Salesforce(username=sf_user, password=sf_pwd, security_token=sf_token,instance_url=sf_instance) -#print(sf.headers) -#print("Successfully Connected to Salesforce") - -conversation_id = '' - -# Define a function to handle incoming calls -def handle_incoming_call(): - response = VoiceResponse() - gather = Gather(input='speech', speechTimeout='auto', action='/process_input') - gather.say("Welcome to the SuperFood Customer Services !") - gather.pause(length=1) - gather.say("Hi, I am Lisa, from customer desk") - gather.pause(length=0) - gather.say("May i know who i am talking to?") - response.append(gather) - return str(response) - -# Define a route to handle incoming calls -@app.route("/incoming_call", methods=["POST"]) -def incoming_call(): - return handle_incoming_call() - -# Define a route to handle user input -@app.route('/process_input', methods=['POST']) -def process_input(): - user_input = request.form['SpeechResult'] - print("Rob : " +user_input) - conversation_id = request.form['CallSid'] - #print("Conversation Id: " + conversation_id) - - if user_input.lower() in ['thank you', 'thanks.', 'bye.', 'goodbye.','no thanks.','no, thank you.','i m good.','no, i m good.','same to you.','no, thanks.','thank you.']: - response = VoiceResponse() - response.say("Thank you for using our service. Goodbye!") - - response.hangup() - print("Hanged-up") - - create_case(conversations.memory.buffer,conversation_id) - - memory.clear() - - print("Case created successfully !!") - - else: - response = VoiceResponse() - ai_response=conversations.predict(input=user_input) - response.say(ai_response) - print("Bot: " + ai_response) - gather = Gather(input='speech', speechTimeout='auto', action='/process_input') - response.append(gather) - - return str(response) - -# For Case Summary and Subject - - -def get_case_summary(conv_detail): - #chatresponse_desc = openai.ChatCompletion.create( - #model="gpt-3.5-turbo", - #temperature=0, - #max_tokens=128, - #messages=[ - # {"role": "system", "content": "You are an Text Summarizer."}, - # {"role": "user", "content": "You need to summarise the conversation between an agent and customer mentioned below. Remember to keep the Product Name, Customer Tone and other key elements from the convsersation"}, - # {"role": "user", "content": conv_detail} - #] - #) - #case_desc = chatresponse_desc.choices[0].message.content - chatresponse_desc = openai.Completion.create( - model = 'text-davinci-003', - prompt = 'You need to summarise the problem as told by the customer. 
Remember to keep the Product Name and other key points discussed from the conversation.Here is the conversation between service agent and the customer: ' + conv_detail, - temperature = 0, - top_p =1, - best_of=1, - max_tokens=256 - ) - case_desc = chatresponse_desc.choices[0].text.strip() - return case_desc - -def get_case_subject(conv_detail): - #chatresponse_subj = openai.ChatCompletion.create( - #model="gpt-3.5-turbo", - #temperature=0, - #max_tokens=32, - #messages=[ - # {"role": "system", "content": "You are an Text Summarizer."}, - # {"role": "user", "content": "You need to summarise the conversation between an agent and customer in 15 words mentioned below for case subject."}, - # {"role": "user", "content": conv_detail} - #] - #) - #case_subj = chatresponse_subj.choices[0].message.content - chatresponse_subj = openai.Completion.create( - model = 'text-davinci-003', - prompt = 'Summarise the conversation between an agent and customer in 10 words mentioned below for Case Subject. Here is the conversation: ' + conv_detail, - temperature = 0, - top_p =1, - best_of=1, - max_tokens=256 - ) - case_subj = chatresponse_subj.choices[0].text.strip() - return case_subj - -# Define a function to create a case record in Salesforce -def create_case(conv_hist,conv_id): - sf_user = os.environ.get("SF_USER") - sf_pwd = os.environ.get("SF_PWD") - sf_token = os.environ.get("SF_TOKEN") - sf_instance = os.environ.get("SF_INSTANCE") - session = requests.Session() - sf = Salesforce(username=sf_user, password=sf_pwd, security_token=sf_token,instance_url=sf_instance,session=session) - - desc = get_case_summary(conv_hist) - subj = get_case_subject(conv_hist) - - case_data = { - 'Subject': 'Voice Bot Case: ' + subj , - 'Description': desc, - 'Status': 'New', - 'Origin': 'Voice Bot', - 'Voice_Call_Conversation__c': conv_hist , - 'Voice_Call_Id__c': conv_id, - 'ContactId': '003B000000NLHQ1IAP' - } - sf.Case.create(case_data) - - -@app.route('/') -def index(): - return """Flask Server running with Twilio Voice & ChatGPT integrated with Salesforce for Case Creation. 
Call the registered Twilio # to talk to the AI Voice Bot.""" - -if __name__ == '__main__': - app.run(debug=False,host='0.0.0.0',port=5050) - uvicorn.run(app,host='0.0.0.0', port=5050) \ No newline at end of file diff --git a/spaces/Ekimetrics/Biomap/biomap/inference.py b/spaces/Ekimetrics/Biomap/biomap/inference.py deleted file mode 100644 index 796e273d5986b1e20801e4d808fb1c5283b69719..0000000000000000000000000000000000000000 --- a/spaces/Ekimetrics/Biomap/biomap/inference.py +++ /dev/null @@ -1,72 +0,0 @@ -import torch.multiprocessing -import torchvision.transforms as T -from utils import transform_to_pil -import logging - -preprocess = T.Compose( - [ - T.ToPILImage(), - T.Resize((320, 320)), - # T.CenterCrop(224), - T.ToTensor(), - T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - ] - ) - -import numpy as np -def inference(images, model): - logging.info("Inference on Images") - x = torch.stack([preprocess(image) for image in images]).cpu() - - with torch.no_grad(): - _, code = model.net(x) - linear_pred = model.linear_probe(x, code) - linear_pred = linear_pred.argmax(1) - outputs = [{ - "img": x[i].detach().cpu(), - "linear_preds": linear_pred[i].detach().cpu(), - } for i in range(x.shape[0])] - - # water to natural green - for output in outputs: - output["linear_preds"] = torch.where(output["linear_preds"] == 5, 3, output["linear_preds"]) - return outputs - - -if __name__ == "__main__": - import hydra - from model import LitUnsupervisedSegmenter - from utils_gee import extract_img, transform_ee_img - import os - latitude = 2.98 - longitude = 48.81 - start_date = '2020-03-20' - end_date = '2020-04-20' - - location = [float(latitude), float(longitude)] - # Extract img numpy from earth engine and transform it to PIL img - img = extract_img(location, start_date, end_date) - image = transform_ee_img( - img, max=0.3 - ) # max value is the value from numpy file that will be equal to 255 - print("image loaded") - # Initialize hydra with configs - hydra.initialize(config_path="configs", job_name="corine") - cfg = hydra.compose(config_name="my_train_config.yml") - - # Load the model - - model_path = os.path.join(os.path.dirname(__file__), "checkpoint/model/model.pt") - saved_state_dict = torch.load(model_path, map_location=torch.device("cpu")) - - nbclasses = cfg.dir_dataset_n_classes - - model = LitUnsupervisedSegmenter(nbclasses, cfg) - print("model initialized") - model.load_state_dict(saved_state_dict) - print("model loaded") - # img.save("output/image.png") - inference([image], model) - - inference([image,image], model) - \ No newline at end of file diff --git a/spaces/EleutherAI/VQGAN_CLIP/CLIP/data/yfcc100m.md b/spaces/EleutherAI/VQGAN_CLIP/CLIP/data/yfcc100m.md deleted file mode 100644 index 575c54bc4bab3972878291c8d227a313c9fc766e..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/VQGAN_CLIP/CLIP/data/yfcc100m.md +++ /dev/null @@ -1,14 +0,0 @@ -# The YFCC100M Subset - -In the paper, we performed a dataset ablation using a subset of the YFCC100M dataset and showed that the performance remained largely similar. - -The subset contains 14,829,396 images, about 15% of the full dataset, which have been filtered to only keep those with natural languag titles and/or descriptions in English. - -We provide the list of (line number, photo identifier, photo hash) of each image contained in this subset. These correspond to the first three columns in the dataset's metadata TSV file. 
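For illustration only (not part of the original dataset card), a minimal Python sketch of reading those three columns, assuming the TSV has already been downloaded and decompressed with the commands shown just below:

```python
# Hypothetical reader for the subset list: per the description above, each row's
# first three columns are (line number in the full metadata TSV, photo identifier, photo hash).
import csv

photo_ids = []
with open("yfcc100m_subset_data.tsv", newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        line_number, photo_id, photo_hash = row[0], row[1], row[2]
        photo_ids.append(photo_id)

print(f"{len(photo_ids)} photos listed in the subset")
```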
- -``` -wget https://openaipublic.azureedge.net/clip/data/yfcc100m_subset_data.tsv.bz2 -bunzip2 yfcc100m_subset_data.tsv.bz2 -``` - -Use of the underlying media files is subject to the Creative Commons licenses chosen by their creators/uploaders. For more information about the YFCC100M dataset, visit [the official website](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/). \ No newline at end of file diff --git a/spaces/EronSamez/RVC_HFmeu/go-applio-manager-recode.bat b/spaces/EronSamez/RVC_HFmeu/go-applio-manager-recode.bat deleted file mode 100644 index 91b8acfc0c69a356fd5b1d77650b2cd728b1072b..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/go-applio-manager-recode.bat +++ /dev/null @@ -1,322 +0,0 @@ -@echo off -title Applio Installer - -::: _ _ _____ _ -::: /\ | (_) | __ \ | | -::: / \ _ __ _ __ | |_ ___ | |__) |___ ___ ___ __| | ___ -::: / /\ \ | '_ \| '_ \| | |/ _ \ | _ // _ \/ __/ _ \ / _` |/ _ \ -::: / ____ \| |_) | |_) | | | (_) | | | \ \ __/ (_| (_) | (_| | __/ -::: /_/ \_\ .__/| .__/|_|_|\___/ |_| \_\___|\___\___/ \__,_|\___| -::: | | | | -::: |_| |_| -::: -::: - -setlocal -set "branch=applio-recode" -set "runtime=runtime-recode" -set "repoUrl=https://github.com/IAHispano/Applio-RVC-Fork/archive/refs/heads/%branch%.zip" -set "fixesFolder=fixes" -set "localFixesPy=local_fixes.py" -set "principal=%cd%" -set "URL_BASE=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main" -set "URL_EXTRA=https://huggingface.co/IAHispano/applio/resolve/main" - -:menu -for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A - -echo [1] Reinstall Applio -echo [2] Update Applio -echo [3] Update Applio + Runtime -echo. - -set /p choice=Select an option: -set choice=%choice: =% - -if "%choice%"=="1" ( - cls - echo Starting Applio Reinstaller... - echo. - goto reinstaller - pause - cls - goto menu - -) - -if "%choice%"=="2" ( - cls - echo Starting Applio Updater... - echo. - goto updater - pause - cls - goto menu -) - -if "%choice%"=="3" ( - cls - echo Updating Applio + Runtime... - echo. - goto updaterRuntime - pause - cls - goto menu - -) - -cls -echo Invalid option. Please enter a number from 1 to 3. -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - -:reinstaller - -echo WARNING: Remember to install Microsoft C++ Build Tools, Redistributable, Python, and Git before continuing. -echo. -echo Step-by-step guide: https://rentry.org/appliolocal -echo Build Tools: https://aka.ms/vs/17/release/vs_BuildTools.exe -echo Redistributable: https://aka.ms/vs/17/release/vc_redist.x64.exe -echo Git: https://github.com/git-for-windows/git/releases/download/v2.42.0.windows.2/Git-2.42.0.2-64-bit.exe -echo Python: Add this route to the windows enviroment variables the user path variable: %principal%\runtime\Scripts -echo. -pause -cls - -echo Downloading ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... -powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. - -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. - -echo Cleaning up... -del "%principal%\repo.zip" -echo. 
-cls - -echo Proceeding to download the models... -echo. - -echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models. -pause -cls - -echo Downloading models in the assets folder... -cd "assets" -echo. -echo Downloading the "pretrained" folder... -cd "pretrained" -curl -LJO "%URL_BASE%/pretrained/D32k.pth" -curl -LJO "%URL_BASE%/pretrained/D40k.pth" -curl -LJO "%URL_BASE%/pretrained/D48k.pth" -curl -LJO "%URL_BASE%/pretrained/G32k.pth" -curl -LJO "%URL_BASE%/pretrained/G40k.pth" -curl -LJO "%URL_BASE%/pretrained/G48k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D32k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D40k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D48k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G32k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G40k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G48k.pth" -cd ".." -echo. -cls - -echo Downloading the "pretrained_v2" folder... -cd "pretrained_v2" -curl -LJO "%URL_BASE%/pretrained_v2/D32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/D40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/D48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G48k.pth" -cd ".." -echo. -cls - -echo Downloading the hubert_base.pt file... -cd "hubert" -curl -LJO "%URL_BASE%/hubert_base.pt" -cd ".." -echo. -cls - - -echo Downloading the rmvpe.pt file... -cd "rmvpe" -curl -LJO "%URL_BASE%/rmvpe.pt" -echo. -cls - -echo Downloading the rmvpe.onnx file... -curl -LJO "%URL_BASE%/rmvpe.onnx" -cd ".." -cd ".." -echo. -cls - -echo Downloading the rest of the large files - -echo Downloading the "uvr5_weights" folder... -cd "uvr5_weights" -curl -LJO "%URL_BASE%/uvr5_weights/HP2_all_vocals.pth" -curl -LJO "%URL_BASE%/uvr5_weights/HP3_all_vocals.pth" -curl -LJO "%URL_BASE%/uvr5_weights/HP5_only_main_vocal.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoAggressive.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoDeReverb.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoNormal.pth" -cd ".." -echo. -cls - -echo Downloading the ffmpeg.exe file... -curl -LJO "%URL_BASE%/ffmpeg.exe" -echo. -cls - -echo Downloading the ffprobe.exe file... -curl -LJO "%URL_BASE%/ffprobe.exe" -echo. -cls - -echo Downloading the runtime.zip file... -curl -LJO "%URL_EXTRA%/%runtime%.zip" -echo. -cls - -echo Extracting the runtime.zip file, this might take a while... -powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'" -del %runtime%.zip -echo. -cls - -echo Downloads completed! -echo. - -echo Checking if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... - runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The "%localFixesPy%" file was not found in the "Fixes" folder. -) -echo. - -echo Fixes Applied! -echo. - -echo Applio has been reinstalled! -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - - -:updater - -echo Downloading the ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... 
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. - -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of the subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. - -echo Cleaning up... -del "%principal%\repo.zip" -echo. -cls - -echo Verifying if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... - runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The file "%localFixesPy%" was not found in the "Fixes" folder. -) -echo. - -echo Applio has been updated! -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - - -:updaterRuntime - -echo Downloading the ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... -powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. - -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of the subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. - -echo Cleaning up... -del "%principal%\repo.zip" -echo. -cls - -echo Downloading the runtime.zip file... -curl -LJO "%URL_EXTRA%/%runtime%.zip" -echo. -cls -echo Extracting the runtime.zip file, this might take a while... -powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'" -del runtime.zip -echo. -cls - -echo Verifying if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... - runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The file "%localFixesPy%" was not found in the "Fixes" folder. -) -echo. - -echo Applio has been updated! -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py deleted file mode 100644 index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . 
import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/ErtugrulDemir/TextSummarizing/app.py b/spaces/ErtugrulDemir/TextSummarizing/app.py deleted file mode 100644 index d07099e6f508132a4cb2fe8557bb4e60e5654c60..0000000000000000000000000000000000000000 --- a/spaces/ErtugrulDemir/TextSummarizing/app.py +++ /dev/null @@ -1,55 +0,0 @@ -import gradio as gr -import pickle -import pickle -from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM, pipeline - -# File Paths -model_path = 'fine_tuned_sum' -tokenizer_path = 
"tokenizer" -examples_path = "examples.pkl" - -# Load the tokenizer -tokenizer = AutoTokenizer.from_pretrained(tokenizer_path) - -# Load the fine-tuned BERT model -seq2seq_model = TFAutoModelForSeq2SeqLM.from_pretrained(model_path) - -# loading the examples -with open('examples.pkl', 'rb') as f: examples = pickle.load(f) - -# Creating the pipeline -sum_params = { - "model":seq2seq_model, - "tokenizer":tokenizer, - "framework":"tf", -} - -summarizer = pipeline("summarization", **sum_params) -# Load the model -# Define a function to make predictions with the model -def summarize(text): - - # defining the params - prms = { - "min_length":5, - "max_length":128 - } - return summarizer(text,**prms)[0]["summary_text"] - -# GUI Component -# defining the params -if_p = { - "fn":summarize, - "inputs":gr.inputs.Textbox(label="Text"), - "outputs":gr.outputs.Textbox(label="Output"), - "title":"Fine-tuned 't5-small' model for text summarization", - "description":"Write something to summarization text", - "examples":examples -} - -# Create a Gradio interface instance -demo = gr.Interface(**if_p) - -# Launching the demo -if __name__ == "__main__": - demo.launch() diff --git a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/CONTRIBUTING.md b/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/CONTRIBUTING.md deleted file mode 100644 index 63d0b751e8a00b606ddff92e2524faa3c90a63b0..0000000000000000000000000000000000000000 --- a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/CONTRIBUTING.md +++ /dev/null @@ -1,31 +0,0 @@ -# Contributing to ImageBind -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests -We actively welcome your pull requests. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Meta's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to Omnivore, you agree that your contributions will be licensed -under the [LICENSE](LICENSE) file in the root directory of this source tree. 
diff --git a/spaces/Fengbinbin/gpt-academic/docs/self_analysis.md b/spaces/Fengbinbin/gpt-academic/docs/self_analysis.md deleted file mode 100644 index c88e1e41217eb13a30269f933586f6c241fab38d..0000000000000000000000000000000000000000 --- a/spaces/Fengbinbin/gpt-academic/docs/self_analysis.md +++ /dev/null @@ -1,256 +0,0 @@ -# chatgpt-academic项目自译解报告 -(Author补充:以下分析均由本项目调用ChatGPT一键生成,如果有不准确的地方,全怪GPT😄) - -## 对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能。 - -整体概括: - -该程序是一个基于自然语言处理和机器学习的科学论文辅助工具,主要功能包括聊天机器人、批量总结PDF文档、批量翻译PDF文档、生成函数注释、解析项目源代码等。程序基于 Gradio 构建 Web 服务,并集成了代理和自动更新功能,提高了用户的使用体验。 - -文件功能表格: - -| 文件名 | 文件功能 | -| --- | --- | -| check_proxy.py | 用于检查代理的正确性和可用性 | -| colorful.py | 包含不同预设置颜色的常量,并用于多种UI元素 | -| config.py | 用于全局配置的类 | -| config_private.py | 与config.py文件一起使用的另一个配置文件,用于更改私密信息 | -| core_functional.py | 包含一些TextFunctional类和基础功能函数 | -| crazy_functional.py | 包含大量高级功能函数和实验性的功能函数 | -| main.py | 程序的主入口,包含GUI主窗口和主要的UI管理功能 | -| theme.py | 包含一些预设置主题的颜色 | -| toolbox.py | 提供了一些有用的工具函数 | -| crazy_functions\crazy_utils.py | 包含一些用于实现高级功能的辅助函数 | -| crazy_functions\Latex全文润色.py | 实现了对LaTeX文件中全文的润色和格式化功能 | -| crazy_functions\Latex全文翻译.py | 实现了对LaTeX文件中的内容进行翻译的功能 | -| crazy_functions\_\_init\_\_.py | 用于导入crazy_functional.py中的功能函数 | -| crazy_functions\下载arxiv论文翻译摘要.py | 从Arxiv上下载论文并提取重要信息 | -| crazy_functions\代码重写为全英文_多线程.py | 针对中文Python文件,将其翻译为全英文 | -| crazy_functions\总结word文档.py | 提取Word文件的重要内容来生成摘要 | -| crazy_functions\批量Markdown翻译.py | 批量翻译Markdown文件 | -| crazy_functions\批量总结PDF文档.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量总结PDF文档pdfminer.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量翻译PDF文档_多线程.py | 批量翻译PDF文件 | -| crazy_functions\理解PDF文档内容.py | 批量分析PDF文件并提取摘要 | -| crazy_functions\生成函数注释.py | 自动生成Python文件中函数的注释 | -| crazy_functions\解析项目源代码.py | 解析并分析给定项目的源代码 | -| crazy_functions\询问多个大语言模型.py | 向多个大语言模型询问输入文本并进行处理 | -| crazy_functions\读文献写摘要.py | 根据用户输入读取文献内容并生成摘要 | -| crazy_functions\谷歌检索小助手.py | 利用谷歌学术检索用户提供的论文信息并提取相关信息 | -| crazy_functions\高级功能函数模板.py | 实现高级功能的模板函数 | -| request_llm\bridge_all.py | 处理与LLM的交互 | -| request_llm\bridge_chatglm.py | 使用ChatGLM模型进行聊天 | -| request_llm\bridge_chatgpt.py | 实现对话生成的各项功能 | -| request_llm\bridge_tgui.py | 在Websockets中与用户进行交互并生成文本输出 | - - - -## [0/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\check_proxy.py - -该文件主要包括四个函数:check_proxy、backup_and_download、patch_and_restart 和 auto_update。其中,check_proxy 函数用于检查代理是否可用;backup_and_download 用于进行一键更新备份和下载;patch_and_restart 是一键更新协议的重要函数,用于覆盖和重启;auto_update 函数用于查询版本和用户意见,并自动进行一键更新。该文件主要使用了 requests、json、shutil、zipfile、distutils、subprocess 等 Python 标准库和 toolbox 和 colorful 两个第三方库。 - -## [1/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\colorful.py - -该程序文件实现了一些打印文本的函数,使其具有不同的颜色输出。当系统为Linux时直接跳过,否则使用colorama库来实现颜色输出。程序提供了深色和亮色两种颜色输出方式,同时也提供了对打印函数的别名。对于不是终端输出的情况,对所有的打印函数进行重复定义,以便在重定向时能够避免打印错误日志。 - -## [2/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\config.py - -该程序文件是一个配置文件,其主要功能是提供使用API密钥等信息,以及对程序的体验进行优化,例如定义对话框高度、布局等。还包含一些其他的设置,例如设置并行使用的线程数、重试次数限制等等。 - -## [3/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\config_private.py - -这是一个名为config_private.py的Python文件,它用于配置API_KEY和代理信息。API_KEY是一个私密密钥,用于访问某些受保护的API。USE_PROXY变量设置为True以应用代理,proxies变量配置了代理网络的地址和协议。在使用该文件时,需要填写正确的API_KEY和代理信息。 - -## [4/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\core_functional.py - -该文件是一个Python模块,名为"core_functional.py"。模块中定义了一个字典,包含了各种核心功能的配置信息,如英语学术润色、中文学术润色、查找语法错误等。每个功能都包含一些前言和后语,在前言中描述了该功能的任务和要求,在后语中提供一些附加信息。此外,有些功能还定义了一些特定的处理函数和按钮颜色。 - -## [5/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functional.py - 
-这是一个Python程序文件,文件名是crazy_functional.py。它导入了一个名为HotReload的工具箱,并定义了一个名为get_crazy_functions()的函数。这个函数包括三个部分的插件组,分别是已经编写完成的第一组插件、已经测试但距离完美状态还差一点点的第二组插件和尚未充分测试的第三组插件。每个插件都有一个名称、一个按钮颜色、一个函数和一个是否加入下拉菜单中的标志位。这些插件提供了多种功能,包括生成函数注释、解析项目源代码、批量翻译PDF文档、谷歌检索、PDF文档内容理解和Latex文档的全文润色、翻译等功能。其中第三组插件可能还存在一定的bug。 - -## [6/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\main.py - -该Python脚本代码实现了一个用于交互式对话的Chatbot机器人。它使用了Gradio框架来构建一个Web界面,并在此基础之上嵌入了一个文本输入框和与Chatbot进行交互的其他控件,包括提交、重置、停止和清除按钮、选择框和滑块等。此外,它还包括了一些类和函数和一些用于编程分析的工具和方法。整个程序文件的结构清晰,注释丰富,并提供了很多技术细节,使得开发者可以很容易地在其基础上进行二次开发、修改、扩展和集成。 - -## [7/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\theme.py - -该程序文件名为theme.py,主要功能为调节Gradio的全局样式。在该文件中,调节了Gradio的主题颜色、字体、阴影、边框、渐变等等样式。同时,该文件还添加了一些高级CSS样式,比如调整表格单元格的背景和边框,设定聊天气泡的圆角、最大宽度和阴影等等。如果CODE_HIGHLIGHT为True,则还进行了代码高亮显示。 - -## [8/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\toolbox.py - -这是一个名为`toolbox.py`的源代码文件。该文件包含了一系列工具函数和装饰器,用于聊天Bot的开发和调试。其中有一些功能包括将输入参数进行重组、捕捉函数中的异常并记录到历史记录中、生成Markdown格式的聊天记录报告等。该文件中还包含了一些与转换Markdown文本相关的函数。 - -## [9/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\crazy_utils.py - -这是一个Python程序文件 `crazy_utils.py`,它包含了两个函数: - -- `input_clipping(inputs, history, max_token_limit)`:这个函数接收三个参数,inputs 是一个字符串,history 是一个列表,max_token_limit 是一个整数。它使用 `tiktoken` 、`numpy` 和 `toolbox` 模块,处理输入文本和历史记录,将其裁剪到指定的最大标记数,避免输入过长导致的性能问题。如果 inputs 长度不超过 max_token_limit 的一半,则只裁剪历史;否则,同时裁剪输入和历史。 -- `request_gpt_model_in_new_thread_with_ui_alive(inputs, inputs_show_user, llm_kwargs, chatbot, history, sys_prompt, refresh_interval=0.2, handle_token_exceed=True, retry_times_at_unknown_error=2)`:这个函数接收八个参数,其中后三个是列表类型,其他为标量或句柄等。它提供对话窗口和刷新控制,执行 `predict_no_ui_long_connection` 方法,将输入数据发送至 GPT 模型并获取结果,如果子任务出错,返回相应的错误信息,否则返回结果。 - -## [10/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\Latex全文润色.py - -这是一个名为"crazy_functions\Latex全文润色.py"的程序文件,其中包含了两个函数"Latex英文润色"和"Latex中文润色",以及其他辅助函数。这些函数能够对 Latex 项目进行润色处理,其中 "多文件润色" 函数是一个主要函数,它调用了其他辅助函数用于读取和处理 Latex 项目中的文件。函数使用了多线程和机器学习模型进行自然语言处理,对文件进行简化和排版来满足学术标准。注释已删除并可以在函数内部查找。 - -## [11/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\Latex全文翻译.py - -这个程序文件包括一个用于对整个Latex项目进行翻译的函数 `Latex英译中` 和一个用于将中文翻译为英文的函数 `Latex中译英`。这两个函数都会尝试导入依赖库 tiktoken, 若无法导入则会提示用户安装。`Latex英译中` 函数会对 Latex 项目中的文件进行分离并去除注释,然后运行多线程翻译。`Latex中译英` 也做同样的事情,只不过是将中文翻译为英文。这个程序文件还包括其他一些帮助函数。 - -## [12/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\__init__.py - -这是一个 Python 包,包名为 `crazy_functions`,在 `__init__.py` 文件中定义了一些函数,包含以下函数: - -- `crazy_addition(a, b)`:对两个数进行加法运算,并将结果返回。 -- `crazy_multiplication(a, b)`:对两个数进行乘法运算,并将结果返回。 -- `crazy_subtraction(a, b)`:对两个数进行减法运算,并将结果返回。 -- `crazy_division(a, b)`:对两个数进行除法运算,并将结果返回。 -- `crazy_factorial(n)`:计算 `n` 的阶乘并返回结果。 - -这些函数可能会有一些奇怪或者不符合常规的实现方式(由函数名可以看出来),所以这个包的名称为 `crazy_functions`,可能是暗示这些函数会有一些“疯狂”的实现方式。 - -## [13/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\下载arxiv论文翻译摘要.py - -该程序实现了一个名为“下载arxiv论文并翻译摘要”的函数插件,作者是“binary-husky”。该函数的功能是,在输入一篇arxiv论文的链接后,提取摘要、下载PDF文档、翻译摘要为中文,并将翻译结果保存到文件中。程序使用了一些Python库,如requests、pdfminer和beautifulsoup4等。程序入口是名为“下载arxiv论文并翻译摘要”的函数,其中使用了自定义的辅助函数download_arxiv_和get_name。程序中还使用了其他非函数的辅助函数和变量,如update_ui、CatchException、report_exception和get_conf等。 - -## [14/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\代码重写为全英文_多线程.py - -该文件是一个多线程Python脚本,包含多个函数和利用第三方库进行的API请求。主要功能是将给定文件夹内的Python代码文件中所有中文转化为英文,然后输出转化后的英文代码。重要的功能和步骤包括: - -1. 清空历史,以免输入溢出 -2. 尝试导入依赖,如果缺少依赖,则给出安装建议 -3. 集合文件 -4. 显示随意内容以防卡顿的感觉 -5. Token限制下的截断与处理 -6. 多线程操作请求转换中文变为英文的代码 -7. 所有线程同时开始执行任务函数 -8. 
循环轮询各个线程是否执行完毕 -9. 把结果写入文件 -10. 备份一个文件 - -## [15/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\总结word文档.py - -这是一个名为"总结word文档.py"的程序文件,使用python编写。该文件导入了"toolbox"和"crazy_utils"模块,实现了解析docx格式和doc格式的文件的功能。该文件包含了一个名为"解析docx"的函数,通过对文件内容应用自然语言处理技术,生成文章片段的中英文概述。具体实现过程中,该函数使用了"docx"模块和"win32com.client"模块来实现对docx和doc格式文件的解析,同时使用了"request_gpt_model_in_new_thread_with_ui_alive"函数来向GPT模型发起请求。最后,该文件还实现了一个名为"总结word文档"的函数来批量总结Word文档。 - -## [16/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量Markdown翻译.py - -这个程序文件实现了一个批量Markdown翻译功能,可以将一个源代码项目中的Markdown文本翻译成指定语言(目前支持中<-英和英<-中)。程序主要分为三个函数,`PaperFileGroup`类用于处理长文本的拆分,`多文件翻译`是主要函数调用了`request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency`函数进行多线程翻译并输出结果,`Markdown英译中`和`Markdown中译外`分别是英译中和中译英的入口函数,用于解析项目路径和调用翻译函数。程序依赖于tiktoken等库实现。 - -## [17/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量总结PDF文档.py - -这是一个名为“批量总结PDF文档”的Python脚本,包含了多个函数。其中有一个函数名为“clean_text”,可以对PDF提取出的原始文本进行清洗和格式化处理,将连字转换为其基本形式,并根据heuristic规则判断换行符是否是段落分隔,并相应地进行替换。另一个函数名为“解析PDF”,可以接收一个PDF文件清单,并对清单中的每一个PDF进行解析,提取出文本并调用“clean_text”函数进行清洗和格式化处理,然后向用户发送一个包含文章简介信息的问题并等待用户回答。最后,该脚本也包含一个名为“批量总结PDF文档”的主函数,其中调用了“解析PDF”函数来完成对PDF文件的批量处理。 - -## [18/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量总结PDF文档pdfminer.py - -这个文件是一个Python模块,文件名为pdfminer.py,它定义了一个函数批量总结PDF文档。该函数接受一些参数,然后尝试导入pdfminer和beautifulsoup4库。该函数将读取pdf文件或tex文件中的内容,对其进行分析,并使用GPT模型进行自然语言摘要。文件中还有一个辅助函数readPdf,用于读取pdf文件中的内容。 - -## [19/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量翻译PDF文档_多线程.py - -这是一个Python脚本,文件名是crazy_functions\批量翻译PDF文档_多线程.py。该脚本提供了一个名为“批量翻译PDF文档”的函数,可以批量翻译PDF文件并生成报告文件。该函数使用了多个模块和函数(如toolbox、crazy_utils、update_ui等),使用了Python的异常处理和多线程功能,还使用了一些文本处理函数和第三方库(如fitz和tiktoken)。在函数执行过程中,它会进行一些参数检查、读取和清理PDF文本、递归地切割PDF文件、获取文章meta信息、多线程翻译、整理报告格式等操作,并更新UI界面和生成报告文件。 - -## [20/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\理解PDF文档内容.py - -这是一个解析PDF文件内容的Python程序,程序文件名为"理解PDF文档内容.py",程序主要由5个步骤组成:第0步是切割PDF文件;第1步是从摘要中提取高价值信息,放到history中;第2步是迭代地历遍整个文章,提取精炼信息;第3步是整理history;第4步是设置一个token上限,防止回答时Token溢出。程序主要用到了Python中的各种模块和函数库,如:toolbox, tiktoken, pymupdf等。 - -## [21/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\生成函数注释.py - -这是一个名为"生成函数注释"的函数,带有一个装饰器"@CatchException",可以捕获异常。该函数接受文件路径、参数和聊天机器人等参数,用于对多个Python或C++文件进行函数注释,使用了"toolbox"和"crazy_utils"模块中的函数。该函数会逐个读取指定文件中的内容,并使用聊天机器人进行交互,向用户请求注释信息,然后将生成的注释与原文件内容一起输出到一个markdown表格中。最后,该函数返回一个字符串,指示任务是否已完成。另外还包含一个名为"批量生成函数注释"的函数,它与"生成函数注释"函数一起用于批量处理多个文件。 - -## [22/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\解析项目源代码.py - -这个程序文件实现了对一个源代码项目进行分析的功能。其中,函数`解析项目本身`、`解析一个Python项目`、`解析一个C项目的头文件`、`解析一个C项目`、`解析一个Java项目`和`解析前端项目`分别用于解析不同类型的项目。函数`解析源代码新`实现了对每一个源代码文件的分析,并将分析结果汇总,同时还实现了分组和迭代处理,提高了效率。最后,函数`write_results_to_file`将所有分析结果写入文件。中间,还用到了`request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency`和`request_gpt_model_in_new_thread_with_ui_alive`来完成请求和响应,并用`update_ui`实时更新界面。 - -## [23/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\询问多个大语言模型.py - -这是一个Python程序,文件名为"crazy_functions\询问多个大语言模型.py"。该程序实现了一个同时向多个大语言模型询问的功能,接收用户输入文本以及模型参数,向ChatGPT和ChatGLM模型发出请求,并将对话记录显示在聊天框中,同时刷新界面。 - -## [24/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\读文章写摘要.py - -该程序文件是一个Python模块,文件名为"读文章写摘要.py",主要包含两个函数:"解析Paper"和"读文章写摘要"。其中,"解析Paper"函数接受文件路径、参数等参数,逐个打印文件内容并使用GPT模型生成对该文件的摘要;"读文章写摘要"函数则接受一段文本内容和参数,将该文本内容及其所有.tex文件逐个传递给"解析Paper"函数进行处理,并使用GPT模型生成文章的中英文摘要。文件还导入了一些工具函数,如异常处理、信息上报和文件写入等。 - -## [25/31] 请对下面的程序文件做一个概述: 
H:\chatgpt_academic_resolve\crazy_functions\谷歌检索小助手.py - -该文件代码包含了一个名为`get_meta_information`的函数和一个名为`谷歌检索小助手`的装饰器函数,用于从谷歌学术中抓取文章元信息,并从用户提供的搜索页面中分析所有文章的相关信息。该文件使用了许多第三方库,如requests、arxiv、BeautifulSoup等。其中`get_meta_information`函数中还定义了一个名为`string_similar`的辅助函数,用于比较字符串相似度。 - -## [26/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\高级功能函数模板.py - -该程序文件是一个 Python 模块,包含一个名为“高阶功能模板函数”的函数。该函数接受多个参数,其中包括输入文本、GPT 模型参数、插件模型参数、聊天显示框、聊天历史等。 该函数的主要功能是根据输入文本,使用 GPT 模型生成一些问题,并等待用户回答这些问题(使用 Markdown 格式),然后将用户回答加入到聊天历史中,并更新聊天显示框。该函数还包含了一些异常处理和多线程的相关操作。该程序文件还引用了另一个 Python 模块中的两个函数,分别为“CatchException”和“update_ui”,并且还引用了一个名为“request_gpt_model_in_new_thread_with_ui_alive”的自定义函数。 - -## [27/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_all.py - -这个文件是用来处理与LLM的交互的。包含两个函数,一个是 predict_no_ui_long_connection 用来处理长文本的输出,可以多线程调用;另一个是 predict 用来处理基础的对话功能。这个文件会导入其他文件中定义的方法进行调用,具体调用哪个方法取决于传入的参数。函数中还有一些装饰器和管理多线程的逻辑。 - -## [28/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_chatglm.py - -这个程序文件实现了一个使用ChatGLM模型进行聊天的功能。具体实现过程是:首先进行初始化,然后使用GetGLMHandle类进行ChatGLM模型的加载和运行。predict_no_ui_long_connection函数用于多线程聊天,而predict函数用于单线程聊天,它们的不同之处在于前者不会更新UI界面,后者会。这个文件还导入了其他模块和库,例如transformers、time、importlib等,并使用了多进程Pipe。 - -## [29/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_chatgpt.py - -这个程序文件是用于对话生成的,主要包含三个函数:predict、predict_no_ui、predict_no_ui_long_connection。其中,predict是用于普通对话的函数,具备完备的交互功能,但不具备多线程能力;predict_no_ui是高级实验性功能模块调用的函数,参数简单,可以多线程并行,方便实现复杂的功能逻辑;predict_no_ui_long_connection解决了predict_no_ui在处理长文档时容易断开连接的问题,同样支持多线程。程序中还包含一些常量和工具函数,用于整合信息,选择LLM模型,生成http请求,发送请求,接收响应等。它需要配置一个config文件,包含代理网址、API等敏感信息。 - -## [30/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_tgui.py - -该程序文件实现了一个基于Websockets的文本生成服务和对话功能。其中,有三个函数:`run()`、`predict()`和`predict_no_ui_long_connection()`。`run()`函数用于连接到Websocket服务并生成文本结果;`predict()`函数用于将用户输入作为文本生成的输入,同时在UI上显示对话历史记录,并在不断更新UI的过程中不断更新生成的文本输出;`predict_no_ui_long_connection()`函数与`predict()`函数类似,但没有UI,并在一段时间内返回单个生成的文本。整个程序还引入了多个Python模块来完成相关功能,例如`asyncio`、`websockets`、`json`等等。 - -## 根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, theme.py, toolbox.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py, crazy_functions\代码重写为全英文_多线程.py, crazy_functions\总结word文档.py)。 - -程序功能概括:该程序是一个聊天机器人,可以通过 Web 界面与用户进行交互。它包含了丰富的功能,如文本润色、翻译、代码重写、在线查找等,并且支持多线程处理。用户可以通过 Gradio 框架提供的 Web 界面进行交互,程序还提供了一些调试工具,如toolbox 模块,方便程序开发和调试。 - -下表概述了每个文件的功能: - -| 文件名 | 功能 | -| ----------------------------------------------------------- | ------------------------------------------------------------ | -| check_proxy.py | 检查代理是否可用 | -| colorful.py | 用于打印文本的字体颜色输出模块 | -| config.py | 用于程序中的各种设置,如并行线程数量和重试次数的限制等 | -| config_private.py | 配置API_KEY和代理信息的文件 | -| core_functional.py | 包含具体的文本处理功能的模块 | -| crazy_functional.py | 包括各种插件函数的模块,提供了多种文本处理功能 | -| main.py | 包含 Chatbot 机器人主程序的模块 | -| theme.py | 用于调节全局样式的模块 | -| toolbox.py | 包含工具函数和装饰器,用于聊天Bot的开发和调试 | -| crazy_functions\crazy_utils.py | 包含一些辅助函数,如文本裁剪和消息捕捉等 | -| crazy_functions\Latex全文润色.py | 对 Latex 项目进行润色处理的功能模块 | -| crazy_functions\Latex全文翻译.py | 对 Latex 项目进行翻译的功能模块 | -| crazy_functions\__init__.py | 定义一些奇特的数学函数等 | -| crazy_functions\下载arxiv论文翻译摘要.py | 下载 Arxiv 论文并翻译摘要的功能模块 | -| crazy_functions\代码重写为全英文_多线程.py | 将Python程序中所有中文转化为英文的功能模块 | -| crazy_functions\总结word文档.py | 解析 docx 和 doc 
格式的文件,生成文章片段的中英文概述的功能模块 | - -## 根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, theme.py, toolbox.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py, crazy_functions\代码重写为全英文_多线程.py, crazy_functions\总结word文档.py, crazy_functions\批量Markdown翻译.py, crazy_functions\批量总结PDF文档.py, crazy_functions\批量总结PDF文档pdfminer.py, crazy_functions\批量翻译PDF文档_多线程.py, crazy_functions\理解PDF文档内容.py, crazy_functions\生成函数注释.py, crazy_functions\解析项目源代码.py, crazy_functions\询问多个大语言模型.py, crazy_functions\读文章写摘要.py, crazy_functions\谷歌检索小助手.py, crazy_functions\高级功能函数模板.py, request_llm\bridge_all.py, request_llm\bridge_chatglm.py, request_llm\bridge_chatgpt.py, request_llm\bridge_tgui.py)。 - -根据以上分析,整个程序是一个集成了多个有用工具和功能的文本处理和生成工具,提供了多种在不同场景下使用的功能,包括但不限于对话生成、文本摘要、PDF文件批量处理、代码翻译和实用工具等。主要的Python模块包括"toolbox.py"、"config.py"、"core_functional.py"和"crazy_functional.py"等,并且还使用了许多第三方库和模块实现相关功能。以下是每个程序文件的功能: - -| 文件名 | 文件功能 | -| --- | --- | -| check_proxy.py | 用于检查代理的正确性和可用性 | -| colorful.py | 包含不同预设置颜色的常量,并用于多种UI元素 | -| config.py | 用于全局配置的类 | -| config_private.py | 与config.py文件一起使用的另一个配置文件,用于更改私密信息 | -| core_functional.py | 包含一些TextFunctional类和基础功能函数 | -| crazy_functional.py | 包含大量高级功能函数和实验性的功能函数 | -| main.py | 程序的主入口,包含GUI主窗口和主要的UI管理功能 | -| theme.py | 包含一些预设置主题的颜色 | -| toolbox.py | 提供了一些有用的工具函数 | -| crazy_functions\crazy_utils.py | 包含一些用于实现高级功能的辅助函数 | -| crazy_functions\Latex全文润色.py | 实现了对LaTeX文件中全文的润色和格式化功能 | -| crazy_functions\Latex全文翻译.py | 实现了对LaTeX文件中的内容进行翻译的功能 | -| crazy_functions\_\_init\_\_.py | 用于导入crazy_functional.py中的功能函数 | -| crazy_functions\下载arxiv论文翻译摘要.py | 从Arxiv上下载论文并提取重要信息 | -| crazy_functions\代码重写为全英文_多线程.py | 针对中文Python文件,将其翻译为全英文 | -| crazy_functions\总结word文档.py | 提取Word文件的重要内容来生成摘要 | -| crazy_functions\批量Markdown翻译.py | 批量翻译Markdown文件 | -| crazy_functions\批量总结PDF文档.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量总结PDF文档pdfminer.py | 批量从PDF文件中提取摘要 | -| crazy_functions\批量翻译PDF文档_多线程.py | 批量翻译PDF文件 | -| crazy_functions\理解PDF文档内容.py | 批量分析PDF文件并提取摘要 | -| crazy_functions\生成函数注释.py | 自动生成Python文件中函数的注释 | -| crazy_functions\解析项目源代码.py | 解析并分析给定项目的源代码 | -| crazy_functions\询问多个大语言模型.py | 向多个大语言模型询问输入文本并进行处理 | -| crazy_functions\读文献写摘要.py | 根据用户输入读取文献内容并生成摘要 | -| crazy_functions\谷歌检索小助手.py | 利用谷歌学术检索用户提供的论文信息并提取相关信息 | -| crazy_functions\高级功能函数模板.py | 实现高级功能的模板函数 | -| request_llm\bridge_all.py | 处理与LLM的交互 | -| request_llm\bridge_chatglm.py | 使用ChatGLM模型进行聊天 | -| request_llm\bridge_chatgpt.py | 实现对话生成的各项功能 | -| request_llm\bridge_tgui.py | 在Websockets中与用户进行交互并生成文本输出 | - diff --git a/spaces/Flux9665/SpeechCloning/app.py b/spaces/Flux9665/SpeechCloning/app.py deleted file mode 100644 index 0fc145d458c676d3dc679ba5841ffcdedf063860..0000000000000000000000000000000000000000 --- a/spaces/Flux9665/SpeechCloning/app.py +++ /dev/null @@ -1,276 +0,0 @@ -import os - -import gradio as gr -import numpy as np -import soundfile -import soundfile as sf -import torch -from tqdm import tqdm - -os.system("git clone --branch v2.5 https://github.com/DigitalPhonetics/IMS-Toucan.git toucan_codebase") -os.system("mv toucan_codebase/* .") - -from run_model_downloader import download_models - -download_models() - -from Preprocessing.TextFrontend import ArticulatoryCombinedTextFrontend -from Preprocessing.AudioPreprocessor import AudioPreprocessor -from TrainingInterfaces.Text_to_Spectrogram.AutoAligner.Aligner import Aligner -from 
TrainingInterfaces.Text_to_Spectrogram.FastSpeech2.DurationCalculator import DurationCalculator -from InferenceInterfaces.UtteranceCloner import UtteranceCloner -from Preprocessing.articulatory_features import get_feature_to_index_lookup - - -def float2pcm(sig, dtype='int16'): - """ - https://gist.github.com/HudsonHuang/fbdf8e9af7993fe2a91620d3fb86a182 - """ - sig = np.asarray(sig) - if sig.dtype.kind != 'f': - raise TypeError("'sig' must be a float array") - dtype = np.dtype(dtype) - if dtype.kind not in 'iu': - raise TypeError("'dtype' must be an integer type") - i = np.iinfo(dtype) - abs_max = 2 ** (i.bits - 1) - offset = i.min + abs_max - return (sig * abs_max + offset).clip(i.min, i.max).astype(dtype) - - -class TTS_Interface: - - def __init__(self): - self.device = "cuda" if torch.cuda.is_available() else "cpu" - - self.utterance_cloner = UtteranceCloner(model_id="Meta", device=self.device) - self.speaker_path_lookup = { - "Voice 1": "reference_audios/voice_1.flac", - "Voice 2": "reference_audios/voice_2.wav", - "Voice 3": "reference_audios/voice_3.wav", - } - self.acoustic_model = Aligner() - self.acoustic_model.load_state_dict(torch.load("Models/Aligner/aligner.pt", map_location='cpu')["asr_model"]) - self.acoustic_model = self.acoustic_model.to(self.device) - self.dc = DurationCalculator(reduction_factor=1) - self.tf = ArticulatoryCombinedTextFrontend(language="en") - example_audio, sr = soundfile.read("reference_audios/clone_me_5.wav") - self.ap = AudioPreprocessor(input_sr=sr, output_sr=16000, ) - - ## finetune aligner - steps = 10 - tokens = list() # we need an ID sequence for training rather than a sequence of phonological features - for vector in self.tf.string_to_tensor( - "Betty Botter bought some butter, but she said the butters bitter. If I put it in my batter, it will make my batter bitter. But a bit of better butter will make my batter better."): - if vector[get_feature_to_index_lookup()["word-boundary"]] == 0: - # we don't include word boundaries when performing alignment, since they are not always present in audio. - for phone in self.tf.phone_to_vector: - if vector.numpy().tolist()[13:] == self.tf.phone_to_vector[phone][13:]: - # the first 12 dimensions are for modifiers, so we ignore those when trying to find the phoneme in the ID lookup - tokens.append(self.tf.phone_to_id[phone]) - # this is terribly inefficient, but it's fine - break - tokens = torch.LongTensor(tokens).squeeze().to(self.device) - tokens_len = torch.LongTensor([len(tokens)]).to(self.device) - mel = self.ap.audio_to_mel_spec_tensor(example_audio, normalize=True).transpose(0, 1).unsqueeze(0).to(self.device) - mel.requires_grad = True - mel_len = torch.LongTensor([len(mel[0])]).to(self.device) - # actual fine-tuning starts here - optim_asr = torch.optim.SGD(self.acoustic_model.parameters(), lr=0.1) - self.acoustic_model.train() - for _ in tqdm(list(range(steps))): - pred = self.acoustic_model(mel) - loss = self.acoustic_model.ctc_loss(pred.transpose(0, 1).log_softmax(2), tokens, mel_len, tokens_len) - optim_asr.zero_grad() - loss.backward() - torch.nn.utils.clip_grad_norm_(self.acoustic_model.parameters(), 1.0) - optim_asr.step() - self.acoustic_model.eval() - ## done finetuning - - reference_audio = "reference_audios/clone_me_5.wav" - prompt = "Betty Botter bought some butter, but she said the butters bitter. If I put it in my batter, it will make my batter bitter. But a bit of better butter will make my batter better." 
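        # Everything below is pre-rendered once at startup: the tongue-twister prompt is split into
        # its three sentences at sentence-final punctuation, split_audio() uses the fine-tuned
        # aligner to cut the reference recording into split_1.wav / split_2.wav / split_3.wav, and
        # each segment is then cloned with every reference voice, so read() only has to pick and
        # concatenate the cached waveforms.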
- text_list = prompt.replace(".", ".|").replace("?", "?|").replace("!", "!|").split("|") - # we don't split on the punctuation marks because we want to retain them. - - self.split_audio(reference_audio, text_list) - # at this point, split_1.wav, split_2.wav and split_3.wav should exist. - - self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_1.flac") - self.part_1_voice_1 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_1.wav", - reference_transcription=text_list[0], - clone_speaker_identity=False, - lang="en") - self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_2.wav") - self.part_1_voice_2 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_1.wav", - reference_transcription=text_list[0], - clone_speaker_identity=False, - lang="en") - self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_3.wav") - self.part_1_voice_3 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_1.wav", - reference_transcription=text_list[0], - clone_speaker_identity=False, - lang="en") - - self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_1.flac") - self.part_2_voice_1 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_2.wav", - reference_transcription=text_list[1], - clone_speaker_identity=False, - lang="en") - self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_2.wav") - self.part_2_voice_2 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_2.wav", - reference_transcription=text_list[1], - clone_speaker_identity=False, - lang="en") - self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_3.wav") - self.part_2_voice_3 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_2.wav", - reference_transcription=text_list[1], - clone_speaker_identity=False, - lang="en") - - self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_1.flac") - self.part_3_voice_1 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_3.wav", - reference_transcription=text_list[2], - clone_speaker_identity=False, - lang="en") - self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_2.wav") - self.part_3_voice_2 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_3.wav", - reference_transcription=text_list[2], - clone_speaker_identity=False, - lang="en") - self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_3.wav") - self.part_3_voice_3 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_3.wav", - reference_transcription=text_list[2], - clone_speaker_identity=False, - lang="en") - - def read(self, _, speaker_1, speaker_2, speaker_3): - reference_audio = "reference_audios/clone_me_5.wav" - - if speaker_1 == "Voice 1": - part_1 = self.part_1_voice_1 - elif speaker_1 == "Voice 2": - part_1 = self.part_1_voice_2 - elif speaker_1 == "Voice 3": - part_1 = self.part_1_voice_3 - - if speaker_2 == "Voice 1": - part_2 = self.part_2_voice_1 - elif speaker_2 == "Voice 2": - part_2 = self.part_2_voice_2 - elif speaker_2 == "Voice 3": - part_2 = self.part_2_voice_3 - - if speaker_3 == "Voice 1": - part_3 = self.part_3_voice_1 - elif speaker_3 == "Voice 2": - part_3 = self.part_3_voice_2 - elif speaker_3 == "Voice 3": - part_3 = self.part_3_voice_3 - - return "alignment.png", \ - reference_audio, \ - self.speaker_path_lookup["Voice 1"], \ - self.speaker_path_lookup["Voice 2"], \ - 
self.speaker_path_lookup["Voice 3"], \ - (24000, float2pcm(torch.cat([torch.tensor(part_1), torch.tensor(part_2), torch.tensor(part_3)], dim=0).numpy())) - - def split_audio(self, path_to_audio, text_list): - # extract audio - audio, sr = sf.read(path_to_audio) - ap = AudioPreprocessor(input_sr=sr, output_sr=16000, melspec_buckets=80, hop_length=256, n_fft=1024, cut_silence=False) - norm_wave = ap.audio_to_wave_tensor(normalize=True, audio=audio) - melspec = ap.audio_to_mel_spec_tensor(audio=norm_wave, normalize=False, explicit_sampling_rate=16000).transpose(0, 1) - - # extract phonemes - lines = list() - self.tf.use_word_boundaries = False # this causes problems when splitting otherwise - for segment in text_list: - if segment.strip() != "": - lines.append(self.tf.string_to_tensor(segment, handle_missing=False).squeeze()) - self.tf.use_word_boundaries = True - - # postprocess phonemes: [~ sentence ~ #] --> [sentence ~] except for the first one, which is [~ sentence ~] - processed_lines = list() - for index, line in enumerate(lines): - if index == 0: - processed_lines.append(line[:-1]) - else: - processed_lines.append(line[1:-1]) - lines = processed_lines - joined_phonemes = torch.cat(lines, dim=0) - - # get durations of each phone in audio as average of an ensemble - alignment_paths = list() - ensemble_of_durations = list() - for ensemble in range(1): - alignment_paths.append(self.acoustic_model.inference(mel=melspec.to(self.device), - tokens=joined_phonemes.to(self.device), - save_img_for_debug="alignment.png" if ensemble == 0 else None, - return_ctc=False)) - for alignment_path in alignment_paths: - ensemble_of_durations.append(self.dc(torch.LongTensor(alignment_path), vis=None).squeeze()) - durations = list() - for i, _ in enumerate(ensemble_of_durations[0]): - duration_of_phone = list() - for ensemble_member in ensemble_of_durations: - duration_of_phone.append(ensemble_member.squeeze()[i]) - durations.append(sum(duration_of_phone) / len(duration_of_phone)) - - # cut audio according to duration sum of each line in transcript - line_lens = [len(x) for x in lines] - index = 0 - segment_durations = list() - for num_phones in line_lens: - segment_durations.append(sum(durations[index: index + num_phones])) - index += num_phones - spec_to_wave_factor = len(norm_wave) / sum(segment_durations) - wave_segment_lens = [int(x * spec_to_wave_factor) for x in segment_durations] - start_index = 0 - wave_segments = list() - for index, segment_len in enumerate(wave_segment_lens): - if index == len(wave_segment_lens) - 1: - wave_segments.append(norm_wave[start_index:]) - else: - wave_segments.append(norm_wave[start_index: start_index + segment_len]) - start_index += segment_len - - # write the audio segments into new files - for index, wave_segment in enumerate(wave_segments): - sf.write(f"split_{index + 1}.wav", wave_segment, 16000) - - -meta_model = TTS_Interface() -article = "

This is still a work in progress; models will be exchanged for better ones as soon as they are done. More diverse training data can help with more exact cloning. For example, we are still trying to incorporate more singing data.

Learn more about the IMS Toucan Speech Synthesis Toolkit at https://github.com/DigitalPhonetics/IMS-Toucan

" - -iface = gr.Interface(fn=meta_model.read, - inputs=[gr.inputs.Dropdown( - [ - "Betty Botter bought some butter, but she said the butters bitter. If I put it in my batter, it will make my batter bitter. But a bit of better butter will make my batter better."], - type="value", - default="Betty Botter bought some butter, but she said the butters bitter. If I put it in my batter, it will make my batter bitter. But a bit of better butter will make my batter better.", - label="Select which utterance should be customized"), - gr.inputs.Dropdown(["Voice 1", - "Voice 2", - "Voice 3"], type="value", default="Voice 1", label="Speaker selection for the first sentence"), - gr.inputs.Dropdown(["Voice 1", - "Voice 2", - "Voice 3"], type="value", default="Voice 2", label="Speaker selection for the second sentence"), - gr.inputs.Dropdown(["Voice 1", - "Voice 2", - "Voice 3"], type="value", default="Voice 3", label="Speaker selection for the third sentence")], - outputs=[gr.outputs.Image(label="Alignment of Phonemes to Audio"), - gr.outputs.Audio(type="file", label="Original Audio"), - gr.outputs.Audio(type="file", label="Reference-Voice 1"), - gr.outputs.Audio(type="file", label="Reference-Voice 2"), - gr.outputs.Audio(type="file", label="Reference-Voice 3"), - gr.outputs.Audio(type="numpy", label="Customized Audio")], - layout="vertical", - title="Speech Customization", - thumbnail="Utility/toucan.png", - theme="default", - allow_flagging="never", - allow_screenshot=False, - description="In this demo, an audio is split automatically into individual sentences. Then each of the sentences is re-synthesized into speech with the exact same prosody, but with a voice that you can choose. This allows customizing any existing read speech while retaining as much from the original reading as possible. 
Unfortunately, we cannot show you the reference audio and the reference voices ahead of time, so they will be displayed together with the resulting cloned speech.", - article=article) -iface.launch(enable_queue=True) diff --git a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/model_param_init.py b/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/model_param_init.py deleted file mode 100644 index b995c0bfb1194746187692e2ab1c2a6dbaaaec6c..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/model_param_init.py +++ /dev/null @@ -1,69 +0,0 @@ -import json -import os -import pathlib - -default_param = {} -default_param["bins"] = 768 -default_param["unstable_bins"] = 9 # training only -default_param["reduction_bins"] = 762 # training only -default_param["sr"] = 44100 -default_param["pre_filter_start"] = 757 -default_param["pre_filter_stop"] = 768 -default_param["band"] = {} - - -default_param["band"][1] = { - "sr": 11025, - "hl": 128, - "n_fft": 960, - "crop_start": 0, - "crop_stop": 245, - "lpf_start": 61, # inference only - "res_type": "polyphase", -} - -default_param["band"][2] = { - "sr": 44100, - "hl": 512, - "n_fft": 1536, - "crop_start": 24, - "crop_stop": 547, - "hpf_start": 81, # inference only - "res_type": "sinc_best", -} - - -def int_keys(d): - r = {} - for k, v in d: - if k.isdigit(): - k = int(k) - r[k] = v - return r - - -class ModelParameters(object): - def __init__(self, config_path=""): - if ".pth" == pathlib.Path(config_path).suffix: - import zipfile - - with zipfile.ZipFile(config_path, "r") as zip: - self.param = json.loads( - zip.read("param.json"), object_pairs_hook=int_keys - ) - elif ".json" == pathlib.Path(config_path).suffix: - with open(config_path, "r") as f: - self.param = json.loads(f.read(), object_pairs_hook=int_keys) - else: - self.param = default_param - - for k in [ - "mid_side", - "mid_side_b", - "mid_side_b2", - "stereo_w", - "stereo_n", - "reverse", - ]: - if not k in self.param: - self.param[k] = False diff --git a/spaces/GT4SD/paccmann_rl/model_cards/description.md b/spaces/GT4SD/paccmann_rl/model_cards/description.md deleted file mode 100644 index 3f9274435cfecaf27564c2ea5ca65065ff78de2d..0000000000000000000000000000000000000000 --- a/spaces/GT4SD/paccmann_rl/model_cards/description.md +++ /dev/null @@ -1,9 +0,0 @@ -logo - -[PaccMannRL](https://github.com/PaccMann/paccmann_rl) is a language-based molecular generative model that can be conditioned (primed) on protein targets or gene expression profiles and produces molecules with high affinity toward the context vector. This model has been developed at IBM Research and is distributed by the **GT4SD** (Generative Toolkit for Scientific Discovery) team. For details please see the two publications: -- [Born et al., (2021), *iScience*](https://www.cell.com/iscience/fulltext/S2589-0042(21)00237-6) for the model conditionable on gene expression profiles. -- [Born et al., (2021), *Machine Learning: Science & Technology*](https://iopscience.iop.org/article/10.1088/2632-2153/abe808/meta) for the model conditionable on protein targets. - - -For **examples** and **documentation** of the model parameters, please see below. -Moreover, we provide a **model card** ([Mitchell et al. (2019)](https://dl.acm.org/doi/abs/10.1145/3287560.3287596?casa_token=XD4eHiE2cRUAAAAA:NL11gMa1hGPOUKTAbtXnbVQBDBbjxwcjGECF_i-WC_3g1aBgU1Hbz_f2b4kI_m1in-w__1ztGeHnwHs)) at the bottom of this page. 
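The description above names the two conditioning modalities but not how the generator is invoked. As a rough, non-authoritative sketch of the usual GT4SD calling pattern: the import path, class names, and the placeholder protein sequence below are assumptions recalled from the GT4SD README, not taken from this repository, and should be verified against the current documentation.

```python
# Hedged sketch of protein-conditioned generation with PaccMannRL via GT4SD.
# Module path and class names are assumed from the GT4SD README and may differ.
from gt4sd.algorithms.conditional_generation.paccmann_rl.core import (
    PaccMannRL,
    PaccMannRLProteinBasedGenerator,
)

# Placeholder amino-acid sequence used as the conditioning context.
target_protein = "MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTTKTYFPHF"

configuration = PaccMannRLProteinBasedGenerator()
algorithm = PaccMannRL(configuration=configuration, target=target_protein)

# Sample a few candidate SMILES strings biased toward affinity for the target.
molecules = list(algorithm.sample(5))
print(molecules)
```

Gene-expression-conditioned generation follows the same pattern with an omics-based generator configuration in place of the protein-based one; the exact class name should likewise be checked against the GT4SD API reference.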
diff --git a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/gaussian_diffusion.py b/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/gaussian_diffusion.py deleted file mode 100644 index 1c0f97783e7a336390324516f2ba8e89d1dcfaf1..0000000000000000000000000000000000000000 --- a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/gaussian_diffusion.py +++ /dev/null @@ -1,639 +0,0 @@ -""" -Simplified from https://github.com/openai/guided-diffusion/blob/main/guided_diffusion/gaussian_diffusion.py. -""" - -import math - -import numpy as np -import torch as th - - -def _warmup_beta(beta_start, beta_end, num_diffusion_timesteps, warmup_frac): - betas = beta_end * np.ones(num_diffusion_timesteps, dtype=np.float64) - warmup_time = int(num_diffusion_timesteps * warmup_frac) - betas[:warmup_time] = np.linspace(beta_start, beta_end, warmup_time, dtype=np.float64) - return betas - - -def get_beta_schedule(beta_schedule, *, beta_start, beta_end, num_diffusion_timesteps): - """ - This is the deprecated API for creating beta schedules. - - See get_named_beta_schedule() for the new library of schedules. - """ - if beta_schedule == "quad": - betas = ( - np.linspace( - beta_start ** 0.5, - beta_end ** 0.5, - num_diffusion_timesteps, - dtype=np.float64, - ) - ** 2 - ) - elif beta_schedule == "linear": - betas = np.linspace(beta_start, beta_end, num_diffusion_timesteps, dtype=np.float64) - elif beta_schedule == "warmup10": - betas = _warmup_beta(beta_start, beta_end, num_diffusion_timesteps, 0.1) - elif beta_schedule == "warmup50": - betas = _warmup_beta(beta_start, beta_end, num_diffusion_timesteps, 0.5) - elif beta_schedule == "const": - betas = beta_end * np.ones(num_diffusion_timesteps, dtype=np.float64) - elif beta_schedule == "jsd": # 1/T, 1/(T-1), 1/(T-2), ..., 1 - betas = 1.0 / np.linspace( - num_diffusion_timesteps, 1, num_diffusion_timesteps, dtype=np.float64 - ) - else: - raise NotImplementedError(beta_schedule) - assert betas.shape == (num_diffusion_timesteps,) - return betas - - -def get_named_beta_schedule(schedule_name, num_diffusion_timesteps): - """ - Get a pre-defined beta schedule for the given name. - - The beta schedule library consists of beta schedules which remain similar - in the limit of num_diffusion_timesteps. - Beta schedules may be added, but should not be removed or changed once - they are committed to maintain backwards compatibility. - """ - if schedule_name == "linear": - # Linear schedule from Ho et al, extended to work for any number of - # diffusion steps. - scale = 1000 / num_diffusion_timesteps - return get_beta_schedule( - "linear", - beta_start=scale * 0.0001, - beta_end=scale * 0.02, - num_diffusion_timesteps=num_diffusion_timesteps, - ) - elif schedule_name == "squaredcos_cap_v2": - return betas_for_alpha_bar( - num_diffusion_timesteps, - lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2, - ) - else: - raise NotImplementedError(f"unknown beta schedule: {schedule_name}") - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. 
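    Concretely, each entry is computed as
    beta_i = min(1 - alpha_bar((i + 1) / T) / alpha_bar(i / T), max_beta)
    with T = num_diffusion_timesteps, so the returned schedule always has exactly T entries.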
- """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -class GaussianDiffusion: - """ - Utilities for training and sampling diffusion models. - - Original ported from this codebase: - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py#L42 - - :param betas: a 1-D numpy array of betas for each diffusion timestep, - starting at T and going to 1. - """ - - def __init__( - self, - *, - betas, - ): - # Use float64 for accuracy. - betas = np.array(betas, dtype=np.float64) - self.betas = betas - assert len(betas.shape) == 1, "betas must be 1-D" - assert (betas > 0).all() and (betas <= 1).all() - - self.num_timesteps = int(betas.shape[0]) - - alphas = 1.0 - betas - self.alphas_cumprod = np.cumprod(alphas, axis=0) - self.alphas_cumprod_prev = np.append(1.0, self.alphas_cumprod[:-1]) - self.alphas_cumprod_next = np.append(self.alphas_cumprod[1:], 0.0) - assert self.alphas_cumprod_prev.shape == (self.num_timesteps,) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.sqrt_alphas_cumprod = np.sqrt(self.alphas_cumprod) - self.sqrt_one_minus_alphas_cumprod = np.sqrt(1.0 - self.alphas_cumprod) - self.log_one_minus_alphas_cumprod = np.log(1.0 - self.alphas_cumprod) - self.sqrt_recip_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod) - self.sqrt_recipm1_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod - 1) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - self.posterior_variance = ( - betas * (1.0 - self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) - ) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.posterior_log_variance_clipped = np.log( - np.append(self.posterior_variance[1], self.posterior_variance[1:]) - ) - self.posterior_mean_coef1 = ( - betas * np.sqrt(self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) - ) - self.posterior_mean_coef2 = ( - (1.0 - self.alphas_cumprod_prev) * np.sqrt(alphas) / (1.0 - self.alphas_cumprod) - ) - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. - """ - mean = _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - variance = _extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = _extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def q_sample(self, x_start, t, noise=None): - """ - Diffuse the data for a given number of diffusion steps. - - In other words, sample from q(x_t | x_0). - - :param x_start: the initial data batch. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :param noise: if specified, the split-out normal noise. - :return: A noisy version of x_start. 
- """ - if noise is None: - noise = th.randn_like(x_start) - assert noise.shape == x_start.shape - return ( - _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - + _extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise - ) - - def q_posterior_mean_variance(self, x_start, x_t, t): - """ - Compute the mean and variance of the diffusion posterior: - - q(x_{t-1} | x_t, x_0) - - """ - assert x_start.shape == x_t.shape - posterior_mean = ( - _extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start - + _extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = _extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = _extract_into_tensor( - self.posterior_log_variance_clipped, t, x_t.shape - ) - assert ( - posterior_mean.shape[0] - == posterior_variance.shape[0] - == posterior_log_variance_clipped.shape[0] - == x_start.shape[0] - ) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None): - """ - Apply the model to get p(x_{t-1} | x_t), as well as a prediction of - the initial x, x_0. - - :param model: the model, which takes a signal and a batch of timesteps - as input. - :param x: the [N x C x ...] tensor at time t. - :param t: a 1-D Tensor of timesteps. - :param clip_denoised: if True, clip the denoised signal into [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. Applies before - clip_denoised. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :return: a dict with the following keys: - - 'mean': the model mean output. - - 'variance': the model variance output. - - 'log_variance': the log of 'variance'. - - 'pred_xstart': the prediction for x_0. - """ - if model_kwargs is None: - model_kwargs = {} - - B, C = x.shape[:2] - assert t.shape == (B,) - model_output = model(x, t, **model_kwargs) - if isinstance(model_output, tuple): - model_output, extra = model_output - else: - extra = None - - assert model_output.shape == (B, C * 2, *x.shape[2:]) - model_output, model_var_values = th.split(model_output, C, dim=1) - min_log = _extract_into_tensor(self.posterior_log_variance_clipped, t, x.shape) - max_log = _extract_into_tensor(np.log(self.betas), t, x.shape) - # The model_var_values is [-1, 1] for [min_var, max_var]. 
- frac = (model_var_values + 1) / 2 - model_log_variance = frac * max_log + (1 - frac) * min_log - model_variance = th.exp(model_log_variance) - - def process_xstart(x): - if denoised_fn is not None: - x = denoised_fn(x) - if clip_denoised: - return x.clamp(-1, 1) - return x - - pred_xstart = process_xstart(self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output)) - model_mean, _, _ = self.q_posterior_mean_variance(x_start=pred_xstart, x_t=x, t=t) - - assert model_mean.shape == model_log_variance.shape == pred_xstart.shape == x.shape - return { - "mean": model_mean, - "variance": model_variance, - "log_variance": model_log_variance, - "pred_xstart": pred_xstart, - "extra": extra, - } - - def _predict_xstart_from_eps(self, x_t, t, eps): - assert x_t.shape == eps.shape - return ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * eps - ) - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart - ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def condition_mean(self, cond_fn, p_mean_var, x, t, model_kwargs=None): - """ - Compute the mean for the previous step, given a function cond_fn that - computes the gradient of a conditional log probability with respect to - x. In particular, cond_fn computes grad(log(p(y|x))), and we want to - condition on y. - - This uses the conditioning strategy from Sohl-Dickstein et al. (2015). - """ - gradient = cond_fn(x, t, **model_kwargs) - new_mean = p_mean_var["mean"].float() + p_mean_var["variance"] * gradient.float() - return new_mean - - def condition_score(self, cond_fn, p_mean_var, x, t, model_kwargs=None): - """ - Compute what the p_mean_variance output would have been, should the - model's score function be conditioned by cond_fn. - - See condition_mean() for details on cond_fn. - - Unlike condition_mean(), this instead uses the conditioning strategy - from Song et al (2020). - """ - alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape) - - eps = self._predict_eps_from_xstart(x, t, p_mean_var["pred_xstart"]) - eps = eps - (1 - alpha_bar).sqrt() * cond_fn(x, t, **model_kwargs) - - out = p_mean_var.copy() - out["pred_xstart"] = self._predict_xstart_from_eps(x, t, eps) - out["mean"], _, _ = self.q_posterior_mean_variance(x_start=out["pred_xstart"], x_t=x, t=t) - return out - - def p_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - ): - """ - Sample x_{t-1} from the model at the given timestep. - - :param model: the model to sample from. - :param x: the current tensor at x_{t-1}. - :param t: the value of t, starting at 0 for the first diffusion step. - :param clip_denoised: if True, clip the x_start prediction to [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. - :param cond_fn: if not None, this is a gradient function that acts - similarly to the model. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :return: a dict containing the following keys: - - 'sample': a random sample from the model. - - 'pred_xstart': a prediction of x_0. 
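        The sample is formed as mean + exp(0.5 * log_variance) * noise, with the noise
        term masked to zero at t == 0 so that the final denoising step is deterministic.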
- """ - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - noise = th.randn_like(x) - nonzero_mask = ( - (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) - ) # no noise when t == 0 - if cond_fn is not None: - out["mean"] = self.condition_mean(cond_fn, out, x, t, model_kwargs=model_kwargs) - sample = out["mean"] + nonzero_mask * th.exp(0.5 * out["log_variance"]) * noise - return {"sample": sample, "pred_xstart": out["pred_xstart"]} - - def p_sample_loop( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - ): - """ - Generate samples from the model. - - :param model: the model module. - :param shape: the shape of the samples, (N, C, H, W). - :param noise: if specified, the noise from the encoder to sample. - Should be of the same shape as `shape`. - :param clip_denoised: if True, clip x_start predictions to [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. - :param cond_fn: if not None, this is a gradient function that acts - similarly to the model. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :param device: if specified, the device to create the samples on. - If not specified, use a model parameter's device. - :param progress: if True, show a tqdm progress bar. - :return: a non-differentiable batch of samples. - """ - final = None - for sample in self.p_sample_loop_progressive( - model, - shape, - noise=noise, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - device=device, - progress=progress, - ): - final = sample - return final["sample"] - - def p_sample_loop_progressive( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - ): - """ - Generate samples from the model and yield intermediate samples from - each timestep of diffusion. - - Arguments are the same as p_sample_loop(). - Returns a generator over dicts, where each dict is the return value of - p_sample(). - """ - if device is None: - device = next(model.parameters()).device - assert isinstance(shape, (tuple, list)) - if noise is not None: - img = noise - else: - img = th.randn(*shape, device=device) - indices = list(range(self.num_timesteps))[::-1] - - if progress: - # Lazy import so that we don't depend on tqdm. - from tqdm.auto import tqdm - - indices = tqdm(indices) - - for i in indices: - t = th.tensor([i] * shape[0], device=device) - with th.no_grad(): - out = self.p_sample( - model, - img, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - ) - yield out - img = out["sample"] - - def ddim_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - eta=0.0, - ): - """ - Sample x_{t-1} from the model using DDIM. - - Same usage as p_sample(). - """ - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - if cond_fn is not None: - out = self.condition_score(cond_fn, out, x, t, model_kwargs=model_kwargs) - - # Usually our model outputs epsilon, but we re-derive it - # in case we used x_start or x_prev prediction. 
- eps = self._predict_eps_from_xstart(x, t, out["pred_xstart"]) - - alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape) - alpha_bar_prev = _extract_into_tensor(self.alphas_cumprod_prev, t, x.shape) - sigma = ( - eta - * th.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar)) - * th.sqrt(1 - alpha_bar / alpha_bar_prev) - ) - # Equation 12. - noise = th.randn_like(x) - mean_pred = ( - out["pred_xstart"] * th.sqrt(alpha_bar_prev) - + th.sqrt(1 - alpha_bar_prev - sigma ** 2) * eps - ) - nonzero_mask = ( - (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) - ) # no noise when t == 0 - sample = mean_pred + nonzero_mask * sigma * noise - return {"sample": sample, "pred_xstart": out["pred_xstart"]} - - def ddim_reverse_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - eta=0.0, - ): - """ - Sample x_{t+1} from the model using DDIM reverse ODE. - """ - assert eta == 0.0, "Reverse ODE only for deterministic path" - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - if cond_fn is not None: - out = self.condition_score(cond_fn, out, x, t, model_kwargs=model_kwargs) - # Usually our model outputs epsilon, but we re-derive it - # in case we used x_start or x_prev prediction. - eps = ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x.shape) * x - - out["pred_xstart"] - ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x.shape) - alpha_bar_next = _extract_into_tensor(self.alphas_cumprod_next, t, x.shape) - - # Equation 12. reversed - mean_pred = out["pred_xstart"] * th.sqrt(alpha_bar_next) + th.sqrt(1 - alpha_bar_next) * eps - - return {"sample": mean_pred, "pred_xstart": out["pred_xstart"]} - - def ddim_sample_loop( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - eta=0.0, - ): - """ - Generate samples from the model using DDIM. - - Same usage as p_sample_loop(). - """ - final = None - for sample in self.ddim_sample_loop_progressive( - model, - shape, - noise=noise, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - device=device, - progress=progress, - eta=eta, - ): - final = sample - return final["sample"] - - def ddim_sample_loop_progressive( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - eta=0.0, - ): - """ - Use DDIM to sample from the model and yield intermediate samples from - each timestep of DDIM. - - Same usage as p_sample_loop_progressive(). - """ - if device is None: - device = next(model.parameters()).device - assert isinstance(shape, (tuple, list)) - if noise is not None: - img = noise - else: - img = th.randn(*shape, device=device) - indices = list(range(self.num_timesteps))[::-1] - - if progress: - # Lazy import so that we don't depend on tqdm. - from tqdm.auto import tqdm - - indices = tqdm(indices) - - for i in indices: - t = th.tensor([i] * shape[0], device=device) - with th.no_grad(): - out = self.ddim_sample( - model, - img, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - eta=eta, - ) - yield out - img = out["sample"] - - -def _extract_into_tensor(arr, timesteps, broadcast_shape): - """ - Extract values from a 1-D numpy array for a batch of indices. 
- - :param arr: the 1-D numpy array. - :param timesteps: a tensor of indices into the array to extract. - :param broadcast_shape: a larger shape of K dimensions with the batch - dimension equal to the length of timesteps. - :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims. - """ - res = th.from_numpy(arr).to(device=timesteps.device)[timesteps].float() - while len(res.shape) < len(broadcast_shape): - res = res[..., None] - return res + th.zeros(broadcast_shape, device=timesteps.device) diff --git a/spaces/Godrose0728/Aisound02/text/cantonese.py b/spaces/Godrose0728/Aisound02/text/cantonese.py deleted file mode 100644 index 32eae72ef7eb43d493da6d6f75dd46176d0e8808..0000000000000000000000000000000000000000 --- a/spaces/Godrose0728/Aisound02/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('chinese_dialect_lexicons/jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/Gradio-Blocks/anime-colorization/scripts/cascaded_pixel_guide_sample.py b/spaces/Gradio-Blocks/anime-colorization/scripts/cascaded_pixel_guide_sample.py deleted file mode 100644 index 889d6f685da5f6829d2d78f6585a5c061338c218..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/anime-colorization/scripts/cascaded_pixel_guide_sample.py +++ /dev/null @@ -1,148 +0,0 @@ -""" -Generate a large batch of samples from a super resolution model, given a batch -of samples from a regular model from image_sample.py. 
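In this cascaded variant both stages run back to back: the pixel-guide base model samples
at image_size conditioned on the guide image, that output is fed as `low_res` conditioning
to the pixel-guide super-resolution model, and guide / super-resolved / target rows are
tiled into sample/000000.png, sample/000001.png, and so on.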
-""" - -import argparse -import os - -import blobfile as bf -import numpy as np -import torch as th -import torch.distributed as dist - -from torchvision import utils -from pixel_guide_diffusion import dist_util, logger -from pixel_guide_diffusion.image_datasets import load_data -from pixel_guide_diffusion.script_util import ( - pg_model_and_diffusion_defaults, - pg_create_model_and_diffusion, - pgsr_model_and_diffusion_defaults, - pgsr_create_model_and_diffusion, - args_to_dict, - add_dict_to_argparser, -) - - -def main(): - args = create_argparser().parse_args() - - dist_util.setup_dist() - logger.configure() - - logger.log("creating model...") - model, diffusion = pg_create_model_and_diffusion( - **args_to_dict(args, pg_model_and_diffusion_defaults().keys()) - ) - model.load_state_dict( - dist_util.load_state_dict(args.model_path, map_location="cpu") - ) - model.to(dist_util.dev()) - model.eval() - - logger.log("creating model2...") - args.num_channels = args.num_channels2 - args.use_attention = args.use_attention2 - model2, diffusion2 = pgsr_create_model_and_diffusion( - **args_to_dict(args, pgsr_model_and_diffusion_defaults().keys()) - ) - model2.load_state_dict( - dist_util.load_state_dict(args.model_path2, map_location="cpu") - ) - model2.to(dist_util.dev()) - model2.eval() - - logger.log("creating data loader...") - data = load_data( - data_dir=args.data_dir, - batch_size=args.batch_size, - image_size=args.large_size, - class_cond=args.class_cond, - guide_dir=args.guide_dir, - guide_size=args.guide_size, - deterministic=True, - ) - - if args.seed > -1: - th.manual_seed(args.seed) - - logger.log("creating samples...") - os.makedirs('sample', exist_ok=True) - i = 0 - while i * args.batch_size < args.num_samples: - if dist.get_rank() == 0: - target, model_kwargs = next(data) - target = target.to(dist_util.dev()) - model_kwargs = {k: v.to(dist_util.dev()) for k, v in model_kwargs.items()} - - with th.no_grad(): - sample_fn = ( - diffusion.p_sample_loop if not args.use_ddim else diffusion.ddim_sample_loop - ) - sample = sample_fn( - model, - (args.batch_size, 3, args.image_size, args.image_size), - clip_denoised=args.clip_denoised, - model_kwargs=model_kwargs, - ) - - model_kwargs["low_res"] = sample - sample_fn2 = ( - diffusion2.p_sample_loop if not args.use_ddim else diffusion2.ddim_sample_loop - ) - sample2 = sample_fn2( - model2, - (args.batch_size, 3, args.large_size, args.large_size), - clip_denoised=args.clip_denoised, - model_kwargs=model_kwargs, - ) - - guide = model_kwargs["guide"] - h, w = guide.shape[2:] - guide = guide.clamp(-1,1).repeat(1,3,1,1) - sample = th.nn.functional.interpolate(sample.clamp(-1,1), size=(h, w)) - sample2 = th.nn.functional.interpolate(sample2.clamp(-1,1), size=(h, w)) - target = th.nn.functional.interpolate(target.clamp(-1,1), size=(h, w)) - - # images = th.cat([guide, sample, sample2, target], 0) - images = th.cat([guide, sample2, target], 0) - utils.save_image( - images, - f"sample/{str(i).zfill(6)}.png", - nrow=args.batch_size, - normalize=True, - range=(-1, 1), - ) - - i += 1 - logger.log(f"created {i * args.batch_size} samples") - - logger.log("sampling complete") - - -def create_argparser(): - defaults = dict( - data_dir="", - guide_dir="", - clip_denoised=True, - num_samples=100, - batch_size=4, - use_ddim=False, - base_samples="", - model_path="", - seed=-1, - ) - defaults.update(pg_model_and_diffusion_defaults()) - defaults.update(pgsr_model_and_diffusion_defaults()) - defaults.update(dict( - num_channels2=128, - use_attention2=True, - 
model_path2="", - )) - parser = argparse.ArgumentParser() - add_dict_to_argparser(parser, defaults) - return parser - - -if __name__ == "__main__": - main() diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py deleted file mode 100644 index cb340022ea27f563b8c4a570cf89b5f09e6434cd..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py' -model = dict( - backbone=dict( - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/rpn/rpn_r101_caffe_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/rpn/rpn_r101_caffe_fpn_1x_coco.py deleted file mode 100644 index e616fdf46ef82fb1de0519541d20156e789f03ec..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/rpn/rpn_r101_caffe_fpn_1x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './rpn_r50_caffe_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://detectron2/resnet101_caffe', - backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/uniformer.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/uniformer.py deleted file mode 100644 index 5705a6dd7019f51bc04e4a2c7ff42021821dbd49..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/uniformer.py +++ /dev/null @@ -1,422 +0,0 @@ -# -------------------------------------------------------- -# UniFormer -# Copyright (c) 2022 SenseTime X-Lab -# Licensed under The MIT License [see LICENSE for details] -# Written by Kunchang Li -# -------------------------------------------------------- - -from collections import OrderedDict -import math - -from functools import partial -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from mmcv_custom import load_checkpoint -from mmdet.utils import get_root_logger -from ..builder import BACKBONES - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class CMlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Conv2d(in_features, hidden_features, 1) - self.act = act_layer() - self.fc2 = nn.Conv2d(hidden_features, out_features, 1) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - 
return x - - -class CBlock(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = nn.BatchNorm2d(dim) - self.conv1 = nn.Conv2d(dim, dim, 1) - self.conv2 = nn.Conv2d(dim, dim, 1) - self.attn = nn.Conv2d(dim, dim, 5, padding=2, groups=dim) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = nn.BatchNorm2d(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = CMlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - x = x + self.drop_path(self.conv2(self.attn(self.conv1(self.norm1(x))))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x - - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SABlock(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - B, N, H, W = x.shape - x = x.flatten(2).transpose(1, 2) - x = x + self.drop_path(self.attn(self.norm1(x))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - x = x.transpose(1, 2).reshape(B, N, H, W) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class SABlock_Windows(nn.Module): - def __init__(self, dim, num_heads, window_size=14, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.window_size=window_size - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - x = x.permute(0, 2, 3, 1) - B, H, W, C = x.shape - shortcut = x - x = self.norm1(x) - - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - x_windows = window_partition(x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - x = x.permute(0, 3, 1, 2).reshape(B, C, H, W) - return x - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0]) - self.img_size = img_size - self.patch_size = patch_size - self.num_patches = num_patches - self.norm = nn.LayerNorm(embed_dim) - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - B, _, H, W = x.shape - x = self.proj(x) - B, _, H, W = x.shape - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous() - return x - - -@BACKBONES.register_module() -class UniFormer(nn.Module): - """ Vision Transformer - A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` - - https://arxiv.org/abs/2010.11929 - """ - def __init__(self, layers=[3, 4, 8, 3], img_size=224, in_chans=3, num_classes=80, embed_dim=[64, 128, 320, 512], - head_dim=64, mlp_ratio=4., qkv_bias=True, qk_scale=None, representation_size=None, - drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=partial(nn.LayerNorm, eps=1e-6), - pretrained_path=None, use_checkpoint=False, checkpoint_num=[0, 0, 0, 0], - windows=False, hybrid=False, window_size=14): - """ - Args: - layer (list): number of block in each layer - img_size (int, tuple): input image size - in_chans (int): number of input channels - num_classes (int): number of classes for classification head - embed_dim (int): embedding dimension - head_dim (int): dimension of attention heads - mlp_ratio (int): ratio of mlp hidden dim to embedding dim - qkv_bias (bool): enable bias for qkv if True - qk_scale (float): override default qk scale of head_dim ** -0.5 if set - representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set - drop_rate (float): dropout rate - attn_drop_rate (float): attention dropout rate - drop_path_rate (float): stochastic depth rate - norm_layer (nn.Module): normalization layer - pretrained_path (str): path of pretrained model - use_checkpoint (bool): whether use checkpoint - checkpoint_num (list): 
index for using checkpoint in every stage - windows (bool): whether use window MHRA - hybrid (bool): whether use hybrid MHRA - window_size (int): size of window (>14) - """ - super().__init__() - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.checkpoint_num = checkpoint_num - self.windows = windows - print(f'Use Checkpoint: {self.use_checkpoint}') - print(f'Checkpoint Number: {self.checkpoint_num}') - self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models - norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6) - - self.patch_embed1 = PatchEmbed( - img_size=img_size, patch_size=4, in_chans=in_chans, embed_dim=embed_dim[0]) - self.patch_embed2 = PatchEmbed( - img_size=img_size // 4, patch_size=2, in_chans=embed_dim[0], embed_dim=embed_dim[1]) - self.patch_embed3 = PatchEmbed( - img_size=img_size // 8, patch_size=2, in_chans=embed_dim[1], embed_dim=embed_dim[2]) - self.patch_embed4 = PatchEmbed( - img_size=img_size // 16, patch_size=2, in_chans=embed_dim[2], embed_dim=embed_dim[3]) - - self.pos_drop = nn.Dropout(p=drop_rate) - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(layers))] # stochastic depth decay rule - num_heads = [dim // head_dim for dim in embed_dim] - self.blocks1 = nn.ModuleList([ - CBlock( - dim=embed_dim[0], num_heads=num_heads[0], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer) - for i in range(layers[0])]) - self.norm1=norm_layer(embed_dim[0]) - self.blocks2 = nn.ModuleList([ - CBlock( - dim=embed_dim[1], num_heads=num_heads[1], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]], norm_layer=norm_layer) - for i in range(layers[1])]) - self.norm2 = norm_layer(embed_dim[1]) - if self.windows: - print('Use local window for all blocks in stage3') - self.blocks3 = nn.ModuleList([ - SABlock_Windows( - dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer) - for i in range(layers[2])]) - elif hybrid: - print('Use hybrid window for blocks in stage3') - block3 = [] - for i in range(layers[2]): - if (i + 1) % 4 == 0: - block3.append(SABlock( - dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)) - else: - block3.append(SABlock_Windows( - dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)) - self.blocks3 = nn.ModuleList(block3) - else: - print('Use global window for all blocks in stage3') - self.blocks3 = nn.ModuleList([ - SABlock( - dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer) - for i in range(layers[2])]) - self.norm3 = norm_layer(embed_dim[2]) - self.blocks4 = nn.ModuleList([ - SABlock( - dim=embed_dim[3], num_heads=num_heads[3], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, 
drop_path=dpr[i+layers[0]+layers[1]+layers[2]], norm_layer=norm_layer) - for i in range(layers[3])]) - self.norm4 = norm_layer(embed_dim[3]) - - # Representation layer - if representation_size: - self.num_features = representation_size - self.pre_logits = nn.Sequential(OrderedDict([ - ('fc', nn.Linear(embed_dim, representation_size)), - ('act', nn.Tanh()) - ])) - else: - self.pre_logits = nn.Identity() - - self.apply(self._init_weights) - self.init_weights(pretrained=pretrained_path) - - def init_weights(self, pretrained): - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, map_location='cpu', strict=False, logger=logger) - print(f'Load pretrained model from {pretrained}') - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'pos_embed', 'cls_token'} - - def get_classifier(self): - return self.head - - def reset_classifier(self, num_classes, global_pool=''): - self.num_classes = num_classes - self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - def forward_features(self, x): - out = [] - x = self.patch_embed1(x) - x = self.pos_drop(x) - for i, blk in enumerate(self.blocks1): - if self.use_checkpoint and i < self.checkpoint_num[0]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm1(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed2(x) - for i, blk in enumerate(self.blocks2): - if self.use_checkpoint and i < self.checkpoint_num[1]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm2(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed3(x) - for i, blk in enumerate(self.blocks3): - if self.use_checkpoint and i < self.checkpoint_num[2]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm3(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed4(x) - for i, blk in enumerate(self.blocks4): - if self.use_checkpoint and i < self.checkpoint_num[3]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm4(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - return tuple(out) - - def forward(self, x): - x = self.forward_features(x) - return x diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/rpn_head.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/rpn_head.py deleted file mode 100644 index a888cb8c188ca6fe63045b6230266553fbe8c996..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/rpn_head.py +++ /dev/null @@ -1,236 +0,0 @@ -import copy -import warnings - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv import ConfigDict -from mmcv.cnn import normal_init -from mmcv.ops import batched_nms - -from ..builder import HEADS -from .anchor_head import AnchorHead -from .rpn_test_mixin import RPNTestMixin - - -@HEADS.register_module() -class RPNHead(RPNTestMixin, AnchorHead): - """RPN head. - - Args: - in_channels (int): Number of channels in the input feature map. 
- """ # noqa: W605 - - def __init__(self, in_channels, **kwargs): - super(RPNHead, self).__init__(1, in_channels, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.rpn_conv = nn.Conv2d( - self.in_channels, self.feat_channels, 3, padding=1) - self.rpn_cls = nn.Conv2d(self.feat_channels, - self.num_anchors * self.cls_out_channels, 1) - self.rpn_reg = nn.Conv2d(self.feat_channels, self.num_anchors * 4, 1) - - def init_weights(self): - """Initialize weights of the head.""" - normal_init(self.rpn_conv, std=0.01) - normal_init(self.rpn_cls, std=0.01) - normal_init(self.rpn_reg, std=0.01) - - def forward_single(self, x): - """Forward feature map of a single scale level.""" - x = self.rpn_conv(x) - x = F.relu(x, inplace=True) - rpn_cls_score = self.rpn_cls(x) - rpn_bbox_pred = self.rpn_reg(x) - return rpn_cls_score, rpn_bbox_pred - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - losses = super(RPNHead, self).loss( - cls_scores, - bbox_preds, - gt_bboxes, - None, - img_metas, - gt_bboxes_ignore=gt_bboxes_ignore) - return dict( - loss_rpn_cls=losses['loss_cls'], loss_rpn_bbox=losses['loss_bbox']) - - def _get_bboxes(self, - cls_scores, - bbox_preds, - mlvl_anchors, - img_shapes, - scale_factors, - cfg, - rescale=False): - """Transform outputs for a single batch item into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - mlvl_anchors (list[Tensor]): Box reference for each scale level - with shape (num_total_anchors, 4). - img_shapes (list[tuple[int]]): Shape of the input image, - (height, width, 3). - scale_factors (list[ndarray]): Scale factor of the image arange as - (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where the first 4 columns - are bounding box positions (tl_x, tl_y, br_x, br_y) and the - 5-th column is a score between 0 and 1. The second item is a - (n,) tensor where each item is the predicted class labelof the - corresponding box. 
- """ - cfg = self.test_cfg if cfg is None else cfg - cfg = copy.deepcopy(cfg) - # bboxes from different level should be independent during NMS, - # level_ids are used as labels for batched NMS to separate them - level_ids = [] - mlvl_scores = [] - mlvl_bbox_preds = [] - mlvl_valid_anchors = [] - batch_size = cls_scores[0].shape[0] - nms_pre_tensor = torch.tensor( - cfg.nms_pre, device=cls_scores[0].device, dtype=torch.long) - for idx in range(len(cls_scores)): - rpn_cls_score = cls_scores[idx] - rpn_bbox_pred = bbox_preds[idx] - assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:] - rpn_cls_score = rpn_cls_score.permute(0, 2, 3, 1) - if self.use_sigmoid_cls: - rpn_cls_score = rpn_cls_score.reshape(batch_size, -1) - scores = rpn_cls_score.sigmoid() - else: - rpn_cls_score = rpn_cls_score.reshape(batch_size, -1, 2) - # We set FG labels to [0, num_class-1] and BG label to - # num_class in RPN head since mmdet v2.5, which is unified to - # be consistent with other head since mmdet v2.0. In mmdet v2.0 - # to v2.4 we keep BG label as 0 and FG label as 1 in rpn head. - scores = rpn_cls_score.softmax(-1)[..., 0] - rpn_bbox_pred = rpn_bbox_pred.permute(0, 2, 3, 1).reshape( - batch_size, -1, 4) - anchors = mlvl_anchors[idx] - anchors = anchors.expand_as(rpn_bbox_pred) - if nms_pre_tensor > 0: - # sort is faster than topk - # _, topk_inds = scores.topk(cfg.nms_pre) - # keep topk op for dynamic k in onnx model - if torch.onnx.is_in_onnx_export(): - # sort op will be converted to TopK in onnx - # and k<=3480 in TensorRT - scores_shape = torch._shape_as_tensor(scores) - nms_pre = torch.where(scores_shape[1] < nms_pre_tensor, - scores_shape[1], nms_pre_tensor) - _, topk_inds = scores.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds) - scores = scores[batch_inds, topk_inds] - rpn_bbox_pred = rpn_bbox_pred[batch_inds, topk_inds, :] - anchors = anchors[batch_inds, topk_inds, :] - - elif scores.shape[-1] > cfg.nms_pre: - ranked_scores, rank_inds = scores.sort(descending=True) - topk_inds = rank_inds[:, :cfg.nms_pre] - scores = ranked_scores[:, :cfg.nms_pre] - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds) - rpn_bbox_pred = rpn_bbox_pred[batch_inds, topk_inds, :] - anchors = anchors[batch_inds, topk_inds, :] - - mlvl_scores.append(scores) - mlvl_bbox_preds.append(rpn_bbox_pred) - mlvl_valid_anchors.append(anchors) - level_ids.append( - scores.new_full(( - batch_size, - scores.size(1), - ), - idx, - dtype=torch.long)) - - batch_mlvl_scores = torch.cat(mlvl_scores, dim=1) - batch_mlvl_anchors = torch.cat(mlvl_valid_anchors, dim=1) - batch_mlvl_rpn_bbox_pred = torch.cat(mlvl_bbox_preds, dim=1) - batch_mlvl_proposals = self.bbox_coder.decode( - batch_mlvl_anchors, batch_mlvl_rpn_bbox_pred, max_shape=img_shapes) - batch_mlvl_ids = torch.cat(level_ids, dim=1) - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You ' \ - f'set max_num and ' \ - f'max_per_img at the same time, but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - 'Please 
delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set' \ - f' iou_threshold in nms and ' \ - f'nms_thr at the same time, but get' \ - f' {cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the nms_thr ' \ - f'which will be deprecated.' - - result_list = [] - for (mlvl_proposals, mlvl_scores, - mlvl_ids) in zip(batch_mlvl_proposals, batch_mlvl_scores, - batch_mlvl_ids): - # Skip nonzero op while exporting to ONNX - if cfg.min_bbox_size > 0 and (not torch.onnx.is_in_onnx_export()): - w = mlvl_proposals[:, 2] - mlvl_proposals[:, 0] - h = mlvl_proposals[:, 3] - mlvl_proposals[:, 1] - valid_ind = torch.nonzero( - (w >= cfg.min_bbox_size) - & (h >= cfg.min_bbox_size), - as_tuple=False).squeeze() - if valid_ind.sum().item() != len(mlvl_proposals): - mlvl_proposals = mlvl_proposals[valid_ind, :] - mlvl_scores = mlvl_scores[valid_ind] - mlvl_ids = mlvl_ids[valid_ind] - - dets, keep = batched_nms(mlvl_proposals, mlvl_scores, mlvl_ids, - cfg.nms) - result_list.append(dets[:cfg.max_per_img]) - return result_list diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_480x480_80k_pascal_context_59.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_480x480_80k_pascal_context_59.py deleted file mode 100644 index d2af575df7719ebbca7553647de8c531d1a10fee..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_480x480_80k_pascal_context_59.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = [ - '../_base_/models/deeplabv3plus_r50-d8.py', - '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_80k.py' -] -model = dict( - decode_head=dict(num_classes=59), - auxiliary_head=dict(num_classes=59), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/optim/__init__.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/optim/__init__.py deleted file mode 100644 index f48c17dfafa9a2be46a91ed1fb64f54c5572a730..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/optim/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Optimization stuff. In particular, optimizers (DAdaptAdam), schedulers -and Exponential Moving Average. -""" - -# flake8: noqa -from .cosine_lr_scheduler import CosineLRScheduler -from .dadam import DAdaptAdam -from .inverse_sqrt_lr_scheduler import InverseSquareRootLRScheduler -from .linear_warmup_lr_scheduler import LinearWarmupLRScheduler -from .polynomial_decay_lr_scheduler import PolynomialDecayLRScheduler -from .ema import ModuleDictEMA diff --git a/spaces/GuujiYae/Grand-Narukami-Shrine/public/info.html b/spaces/GuujiYae/Grand-Narukami-Shrine/public/info.html deleted file mode 100644 index 4685871a325baaca8474e727b54acb0e010b2abb..0000000000000000000000000000000000000000 --- a/spaces/GuujiYae/Grand-Narukami-Shrine/public/info.html +++ /dev/null @@ -1,580 +0,0 @@ - - - - - - - - {title} Proxy - - - - - - - - - - - - -
[markup stripped during extraction: public/info.html was a status-page template for the "{title}" proxy. Recoverable content: greeting ("Welcome to the Grand Narukami Shrine! May your journey be filled with fortune, and may the gods guide your path."); uptime counter (days / hours / min / sec); wait times ({turbo:estimated}, {gpt4:estimated}, {claude:estimated}); endpoints (:{endpoint:openai}, :{endpoint:openai2}, :{endpoint:claude}); proompt counters (total / start / now) and token counters (total / start); key counts (OpenAI, OpenAI Org, Claude); per-model panels for Turbo, GPT 4, GPT 4 Turbo and Claude (usage, active, trial, revoked, over quota, pozzed, queue, wait time); config summary (build, gatekeeper, rate limit, logging, max output/context tokens, allowed model families, reject message); and a leaderboard table with a Name column.]
- - - - diff --git a/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/task_datasets.py b/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/task_datasets.py deleted file mode 100644 index a8fe7bcf732c61725853df92d9422f207d55f785..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/task_datasets.py +++ /dev/null @@ -1,206 +0,0 @@ -# coding=utf8 -from torch.utils.data import Dataset, DataLoader -from tqdm import tqdm -from transformers import AutoTokenizer -import json -import torch -import pytorch_lightning as pl -import os - - -class AbstractCollator: - """ - collector for summary task - """ - - def __init__(self, tokenizer, max_enc_length, max_dec_length, prompt): - self.tokenizer = tokenizer - self.max_enc_length = max_enc_length - self.max_dec_length = max_dec_length - self.prompt = prompt - - def __call__(self, samples): - - labels = [] - attn_mask = [] - # decoder_attn_mask = [] - source_inputs = [] - for sample in samples: - encode_dict = self.tokenizer.encode_plus( - self.prompt + sample['text'], - max_length=self.max_enc_length, - padding='max_length', - truncation=True, - return_tensors='pt') - decode_dict = self.tokenizer.encode_plus( - sample['summary'], - max_length=self.max_dec_length, - padding='max_length', - truncation=True, - return_tensors='pt') - source_inputs.append(encode_dict['input_ids'].squeeze()) - labels.append(decode_dict['input_ids'].squeeze()) - attn_mask.append(encode_dict['attention_mask'].squeeze()) - # decoder_attn_mask.append(decode_dict['attention_mask'].squeeze()) - # labels = torch.tensor(decode_dict['input']) - - source_inputs = torch.stack(source_inputs) - labels = torch.stack(labels) - attn_mask = torch.stack(attn_mask) - # decoder_attn_mask = torch.stack(decoder_attn_mask) - # decode_input_idxs = shift_tokens_right(labels, self.tokenizer.pad_token_id, self.tokenizer.pad_token_id) - end_token_index = torch.where(labels == self.tokenizer.eos_token_id)[1] - for idx, end_idx in enumerate(end_token_index): - labels[idx][end_idx + 1:] = -100 - - return { - "input_ids": source_inputs, - "attention_mask": attn_mask, - "labels": labels, - "text": [sample['text'] for sample in samples], - "summary": [sample['summary'] for sample in samples] - } - - -class LCSTSDataset(Dataset): - ''' - Dataset Used for LCSTS summary task. 
- ''' - - def __init__(self, data_path, args): - super().__init__() - self.tokenizer = AutoTokenizer.from_pretrained( - args.pretrained_model_path, use_fast=False) - self.data = self.load_data(data_path) - self.prompt = args.prompt - self.max_enc_length = args.max_enc_length - self.max_dec_length = args.max_dec_length - - def __len__(self): - return len(self.data) - - def __getitem__(self, index): - return self.encode(self.data[index]) - - def load_data(self, data_path): - with open(data_path, "r", encoding='utf8') as f: - lines = f.readlines() - samples = [] - for line in tqdm(lines): - obj = json.loads(line) - source = obj['text'] - target = obj['summary'] - samples.append({ - "text": source, - "summary": target - }) - return samples - - def cal_data(self, data_path): - with open(data_path, "r", encoding='utf8') as f: - lines = f.readlines() - samples = [] - enc_sizes = [] - dec_sizes = [] - for line in tqdm(lines): - obj = json.loads(line.strip()) - source = obj['text'] - target = obj['summary'] - enc_input_ids = self.tokenizer.encode(source) - target = self.tokenizer.encode(target) - enc_sizes.append(len(enc_input_ids)) - dec_sizes.append(len(target)-1) - samples.append({ - "enc_input_ids": enc_input_ids, - "dec_input_ids": target[:-1], - "label_ids": target[1:] - }) - max_enc_len = max(enc_sizes) - max_dec_len = max(dec_sizes) - import numpy as np - # mean of len(enc_input_ids): 74.68041911345998 - # mean of len(dec_input_ids): 14.02265483791283 - # max of len(enc_input_ids): 132 - # max of len(dec_input_ids): 31 - print('mean of len(enc_input_ids):', np.mean(enc_sizes), - 'mean of len(dec_input_ids):', np.mean(dec_sizes), - 'max of len(enc_input_ids):', max_enc_len, - 'max of len(dec_input_ids):', max_dec_len) - return samples - - def encode(self, item): - encode_dict = self.tokenizer.encode_plus( - self.prompt + item['text'], - max_length=self.max_enc_length, - padding='max_length', - truncation=True, - return_tensors='pt') - decode_dict = self.tokenizer.encode_plus( - item['summary'], - max_length=self.max_dec_length, - padding='max_length', - truncation=True) - - target = decode_dict['input_ids'] - # print('encode_dict shape:', encode_dict['input_ids'].shape) - labels = torch.tensor(target) - labels[target == self.tokenizer.pad_token_id] = -100 - return { - "input_ids": encode_dict['input_ids'].squeeze(), - "attention_mask": encode_dict['attention_mask'].squeeze(), - "labels": labels.squeeze(), - "text": item['text'], - "summary": item['summary'] - } - - -class LCSTSDataModel(pl.LightningDataModule): - @staticmethod - def add_data_specific_args(parent_args): - parser = parent_args.add_argument_group('LCSTSDataModel') - parser.add_argument( - '--data_dir', default='/cognitive_comp/ganruyi/data_datasets_LCSTS_LCSTS/', type=str) - parser.add_argument('--num_workers', default=8, type=int) - parser.add_argument('--train_data', default='train.jsonl', type=str) - parser.add_argument('--valid_data', default='valid.jsonl', type=str) - parser.add_argument('--test_data', default='test_public.jsonl', type=str) - parser.add_argument('--train_batchsize', default=128, type=int) - parser.add_argument('--valid_batchsize', default=128, type=int) - parser.add_argument('--max_enc_length', default=128, type=int) - parser.add_argument('--max_dec_length', default=30, type=int) - parser.add_argument('--prompt', default='summarize:', type=str) - return parent_args - - def __init__(self, args): - super().__init__() - self.args = args - self.train_batchsize = args.train_batchsize - self.valid_batchsize = 
args.valid_batchsize - if not args.do_eval_only: - self.train_data = LCSTSDataset(os.path.join( - args.data_dir, args.train_data), args) - self.valid_data = LCSTSDataset(os.path.join( - args.data_dir, args.valid_data), args) - self.test_data = LCSTSDataset(os.path.join( - args.data_dir, args.test_data), args) - - def train_dataloader(self): - return DataLoader(self.train_data, - shuffle=True, - batch_size=self.train_batchsize, - pin_memory=False, - num_workers=self.args.num_workers) - - def val_dataloader(self): - return DataLoader(self.valid_data, - shuffle=False, - batch_size=self.valid_batchsize, - pin_memory=False, - num_workers=self.args.num_workers) - - def predict_dataloader(self): - return DataLoader(self.test_data, - shuffle=False, - batch_size=self.valid_batchsize, - pin_memory=False, - num_workers=self.args.num_workers) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/feature_transforms/specaugment.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/feature_transforms/specaugment.py deleted file mode 100644 index ce5802b41a903ea8f3e3e8a169d5048b4e908f99..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/feature_transforms/specaugment.py +++ /dev/null @@ -1,131 +0,0 @@ -import math -import numbers -from typing import Optional - -import numpy as np -from fairseq.data.audio.feature_transforms import ( - AudioFeatureTransform, - register_audio_feature_transform, -) - - -@register_audio_feature_transform("specaugment") -class SpecAugmentTransform(AudioFeatureTransform): - """SpecAugment (https://arxiv.org/abs/1904.08779)""" - - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - return SpecAugmentTransform( - _config.get("time_warp_W", 0), - _config.get("freq_mask_N", 0), - _config.get("freq_mask_F", 0), - _config.get("time_mask_N", 0), - _config.get("time_mask_T", 0), - _config.get("time_mask_p", 0.0), - _config.get("mask_value", None), - ) - - def __init__( - self, - time_warp_w: int = 0, - freq_mask_n: int = 0, - freq_mask_f: int = 0, - time_mask_n: int = 0, - time_mask_t: int = 0, - time_mask_p: float = 0.0, - mask_value: Optional[float] = 0.0, - ): - # Sanity checks - assert mask_value is None or isinstance( - mask_value, numbers.Number - ), f"mask_value (type: {type(mask_value)}) must be None or a number" - if freq_mask_n > 0: - assert freq_mask_f > 0, ( - f"freq_mask_F ({freq_mask_f}) " - f"must be larger than 0 when doing freq masking." - ) - if time_mask_n > 0: - assert time_mask_t > 0, ( - f"time_mask_T ({time_mask_t}) must be larger than 0 when " - f"doing time masking." - ) - - self.time_warp_w = time_warp_w - self.freq_mask_n = freq_mask_n - self.freq_mask_f = freq_mask_f - self.time_mask_n = time_mask_n - self.time_mask_t = time_mask_t - self.time_mask_p = time_mask_p - self.mask_value = mask_value - - def __repr__(self): - return ( - self.__class__.__name__ - + "(" - + ", ".join( - [ - f"time_warp_w={self.time_warp_w}", - f"freq_mask_n={self.freq_mask_n}", - f"freq_mask_f={self.freq_mask_f}", - f"time_mask_n={self.time_mask_n}", - f"time_mask_t={self.time_mask_t}", - f"time_mask_p={self.time_mask_p}", - ] - ) - + ")" - ) - - def __call__(self, spectrogram): - assert len(spectrogram.shape) == 2, "spectrogram must be a 2-D tensor." - - distorted = spectrogram.copy() # make a copy of input spectrogram. - num_frames = spectrogram.shape[0] # or 'tau' in the paper. 
- num_freqs = spectrogram.shape[1] # or 'miu' in the paper. - mask_value = self.mask_value - - if mask_value is None: # if no value was specified, use local mean. - mask_value = spectrogram.mean() - - if num_frames == 0: - return spectrogram - - if num_freqs < self.freq_mask_f: - return spectrogram - - if self.time_warp_w > 0: - if 2 * self.time_warp_w < num_frames: - import cv2 - - w0 = np.random.randint(self.time_warp_w, num_frames - self.time_warp_w) - w = np.random.randint(-self.time_warp_w + 1, self.time_warp_w) - upper, lower = distorted[:w0, :], distorted[w0:, :] - upper = cv2.resize( - upper, dsize=(num_freqs, w0 + w), interpolation=cv2.INTER_LINEAR - ) - lower = cv2.resize( - lower, - dsize=(num_freqs, num_frames - w0 - w), - interpolation=cv2.INTER_LINEAR, - ) - distorted = np.concatenate((upper, lower), axis=0) - - for _i in range(self.freq_mask_n): - f = np.random.randint(0, self.freq_mask_f) - f0 = np.random.randint(0, num_freqs - f) - if f != 0: - distorted[:, f0 : f0 + f] = mask_value - - max_time_mask_t = min( - self.time_mask_t, math.floor(num_frames * self.time_mask_p) - ) - if max_time_mask_t < 1: - return distorted - - for _i in range(self.time_mask_n): - t = np.random.randint(0, max_time_mask_t) - t0 = np.random.randint(0, num_frames - t) - if t != 0: - distorted[t0 : t0 + t, :] = mask_value - - return distorted diff --git a/spaces/HarryLee/eCommerceImageCaptioning/utils/BPE/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/utils/BPE/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/setup.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/setup.py deleted file mode 100644 index 8b132dc2faeab6c863c6d5ecf04863b2191afdcb..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/setup.py +++ /dev/null @@ -1,48 +0,0 @@ -import setuptools -from pkg_resources import parse_requirements -import pathlib -import os - -def write_version_py(): - with open(os.path.join("indicnlp", "version.txt")) as f: - version = f.read().strip() - - # write version info to fairseq/version.py - with open(os.path.join("indicnlp", "version.py"), "w") as f: - f.write('__version__ = "{}"\n'.format(version)) - return version - -with open("README.md", "r") as fh: - long_description = fh.read() - -version=write_version_py() - -setuptools.setup( - name="indic_nlp_library", # Replace with your own username - version=version, - author="Anoop Kunchukuttan", - author_email="anoop.kunchukuttan@gmail.com", - description="The goal of the Indic NLP Library is to build Python based libraries for common"\ - ' text processing and Natural Language Processing in Indian languages.', - long_description=long_description, - long_description_content_type="text/markdown", - url="https://github.com/anoopkunchukuttan/indic_nlp_library", - # project_urls={ - # "Bug Tracker": "https://bugs.example.com/HelloWorld/", - # "Documentation": "https://docs.example.com/HelloWorld/", - # "Source Code": "https://code.example.com/HelloWorld/", - # }, - packages=setuptools.find_packages(), - license='MIT', - classifiers=[ - "Programming Language :: Python :: 3", - "License :: OSI Approved :: MIT License", - "Operating System :: OS Independent", - ], - python_requires='>=3.5', - download_url='https://github.com/anoopkunchukuttan/indic_nlp_library/archive/master.zip', - install_requires=[ - str(requirement) for 
requirement - in parse_requirements(pathlib.Path('requirements.txt').open()) - ] -) diff --git a/spaces/HemanthSai7/IntelligentQuestionGenerator/src/Pipeline/TextSummarization.py b/spaces/HemanthSai7/IntelligentQuestionGenerator/src/Pipeline/TextSummarization.py deleted file mode 100644 index 0b25e20c4fc60beda0744622f337f6c61c36942a..0000000000000000000000000000000000000000 --- a/spaces/HemanthSai7/IntelligentQuestionGenerator/src/Pipeline/TextSummarization.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -from transformers import T5Tokenizer, T5ForConditionalGeneration -import random -import numpy as np -from nltk.tokenize import sent_tokenize - -class T5_Base: - def __init__(self,path,device,model_max_length): - self.model=T5ForConditionalGeneration.from_pretrained(path) - self.tokenizer=T5Tokenizer.from_pretrained(path,model_max_length=model_max_length) - self.device=torch.device(device) - - def set_seed(seed): - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - - def preprocess(self,data): - preprocess_text=data.strip().replace('\n',' ') - return preprocess_text - - def post_process(self,data): - final="" - for sent in sent_tokenize(data): - sent=sent.capitalize() - final+=sent+" "+sent - return final - - def getSummary(self,data): - data=self.preprocess(data) - t5_prepared_Data="summarize: "+data - tokenized_text=self.tokenizer.encode_plus(t5_prepared_Data,max_length=512,pad_to_max_length=False,truncation=True,return_tensors='pt').to(self.device) - input_ids,attention_mask=tokenized_text['input_ids'],tokenized_text['attention_mask'] - summary_ids=self.model.generate(input_ids=input_ids, - attention_mask=attention_mask, - early_stopping=True, - num_beams=3, - num_return_sequences=1, - no_repeat_ngram_size=2, - min_length = 75, - max_length=300) - - output=[self.tokenizer.decode(ids,skip_special_tokens=True) for ids in summary_ids] - summary=output[0] - summary=self.post_process(summary) - summary=summary.strip() - return summary - - diff --git a/spaces/Hskbqn/DeepDanbooru_string/app.py b/spaces/Hskbqn/DeepDanbooru_string/app.py deleted file mode 100644 index 49019837c9207cc68cb37be0342f3bc44fd0decb..0000000000000000000000000000000000000000 --- a/spaces/Hskbqn/DeepDanbooru_string/app.py +++ /dev/null @@ -1,185 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import argparse -import functools -import os -import html -import pathlib -import tarfile - -import deepdanbooru as dd -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image -import tensorflow as tf -import piexif -import piexif.helper - -TITLE = 'DeepDanbooru String' - -TOKEN = os.environ['TOKEN'] -MODEL_REPO = 'CikeyQI/DeepDanbooru_string' -MODEL_FILENAME = 'model-resnet_custom_v3.h5' -LABEL_FILENAME = 'tags.txt' - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--score-slider-step', type=float, default=0.05) - parser.add_argument('--score-threshold', type=float, default=0.5) - parser.add_argument('--theme', type=str, default='dark-grass') - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - return parser.parse_args() - - -def load_sample_image_paths() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not 
image_dir.exists(): - dataset_repo = 'hysts/sample-images-TADNE' - path = huggingface_hub.hf_hub_download(dataset_repo, - 'images.tar.gz', - repo_type='dataset', - use_auth_token=TOKEN) - with tarfile.open(path) as f: - f.extractall() - return sorted(image_dir.glob('*')) - - -def load_model() -> tf.keras.Model: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - MODEL_FILENAME, - use_auth_token=TOKEN) - model = tf.keras.models.load_model(path) - return model - - -def load_labels() -> list[str]: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - LABEL_FILENAME, - use_auth_token=TOKEN) - with open(path) as f: - labels = [line.strip() for line in f.readlines()] - return labels - -def plaintext_to_html(text): - text = "
<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "</p>
" - return text - -def predict(image: PIL.Image.Image, score_threshold: float, - model: tf.keras.Model, labels: list[str]) -> dict[str, float]: - rawimage = image - _, height, width, _ = model.input_shape - image = np.asarray(image) - image = tf.image.resize(image, - size=(height, width), - method=tf.image.ResizeMethod.AREA, - preserve_aspect_ratio=True) - image = image.numpy() - image = dd.image.transform_and_pad_image(image, width, height) - image = image / 255. - probs = model.predict(image[None, ...])[0] - probs = probs.astype(float) - res = dict() - for prob, label in zip(probs.tolist(), labels): - if prob < score_threshold: - continue - res[label] = prob - b = dict(sorted(res.items(),key=lambda item:item[1], reverse=True)) - a = ', '.join(list(b.keys())).replace('_',' ').replace('(','\(').replace(')','\)') - c = ', '.join(list(b.keys())) - - items = rawimage.info - geninfo = '' - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'') - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode('utf8', errors="ignore") - - items['exif comment'] = exif_comment - geninfo = exif_comment - - for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif', - 'loop', 'background', 'timestamp', 'duration']: - items.pop(field, None) - - geninfo = items.get('parameters', geninfo) - - info = f""" -
<p><h4>PNG Info</h4></p>
-""" - for key, text in items.items(): - info += f""" -
<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
-""".strip()+"\n" - - if len(info) == 0: - message = "Nothing found in the image." - info = f"
<div><p>{message}</p></div>
" - - return (a,c,res,info) - - -def main(): - args = parse_args() - model = load_model() - labels = load_labels() - - func = functools.partial(predict, model=model, labels=labels) - func = functools.update_wrapper(func, predict) - - gr.Interface( - func, - [ - gr.inputs.Image(type='pil', label='Input'), - gr.inputs.Slider(0, - 1, - step=args.score_slider_step, - default=args.score_threshold, - label='Score Threshold'), - ], - [ - gr.outputs.Textbox(label='Output (string)'), - gr.outputs.Textbox(label='Output (raw string)'), - gr.outputs.Label(label='Output (label)'), - gr.outputs.HTML() - ], - examples=[ - ['miku.jpg',0.5], - ['miku2.jpg',0.5] - ], - title=TITLE, - description=''' -Demo for [KichangKim/DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) with "ready to copy" prompt and a prompt analyzer. - -Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru) - -PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - ''', - theme=args.theme, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/HugoDzz/spaceship_drift/build/_app/immutable/chunks/singletons.afdbe156.js b/spaces/HugoDzz/spaceship_drift/build/_app/immutable/chunks/singletons.afdbe156.js deleted file mode 100644 index 241064a4f39c769fcdb94998c7afddaea114a920..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/spaceship_drift/build/_app/immutable/chunks/singletons.afdbe156.js +++ /dev/null @@ -1 +0,0 @@ -import{H as d,s as m}from"./index.0d3f7c7a.js";const u=[];function p(e,t=d){let n;const o=new Set;function a(s){if(m(e,s)&&(e=s,n)){const c=!u.length;for(const i of o)i[1](),u.push(i,e);if(c){for(let i=0;i{o.delete(i),o.size===0&&n&&(n(),n=null)}}return{set:a,update:l,subscribe:r}}var g;const x=((g=globalThis.__sveltekit_1h0x2cx)==null?void 0:g.base)??"";var k;const E=((k=globalThis.__sveltekit_1h0x2cx)==null?void 0:k.assets)??x,S="1684138881271",T="sveltekit:snapshot",y="sveltekit:scroll",I="sveltekit:index",_={tap:1,hover:2,viewport:3,eager:4,off:-1};function O(e){let t=e.baseURI;if(!t){const n=e.getElementsByTagName("base");t=n.length?n[0].href:e.URL}return t}function U(){return{x:pageXOffset,y:pageYOffset}}function f(e,t){return e.getAttribute(`data-sveltekit-${t}`)}const b={..._,"":_.hover};function v(e){let t=e.assignedSlot??e.parentNode;return(t==null?void 0:t.nodeType)===11&&(t=t.host),t}function L(e,t){for(;e&&e!==t;){if(e.nodeName.toUpperCase()==="A"&&e.hasAttribute("href"))return e;e=v(e)}}function N(e,t){let n;try{n=new URL(e instanceof SVGAElement?e.href.baseVal:e.href,document.baseURI)}catch{}const o=e instanceof SVGAElement?e.target.baseVal:e.target,a=!n||!!o||A(n,t)||(e.getAttribute("rel")||"").split(/\s+/).includes("external"),l=(n==null?void 0:n.origin)===location.origin&&e.hasAttribute("download");return{url:n,external:a,target:o,download:l}}function P(e){let 
t=null,n=null,o=null,a=null,l=null,r=null,s=e;for(;s&&s!==document.documentElement;)o===null&&(o=f(s,"preload-code")),a===null&&(a=f(s,"preload-data")),t===null&&(t=f(s,"keepfocus")),n===null&&(n=f(s,"noscroll")),l===null&&(l=f(s,"reload")),r===null&&(r=f(s,"replacestate")),s=v(s);return{preload_code:b[o??"off"],preload_data:b[a??"off"],keep_focus:t==="off"?!1:t===""?!0:null,noscroll:n==="off"?!1:n===""?!0:null,reload:l==="off"?!1:l===""?!0:null,replace_state:r==="off"?!1:r===""?!0:null}}function h(e){const t=p(e);let n=!0;function o(){n=!0,t.update(r=>r)}function a(r){n=!1,t.set(r)}function l(r){let s;return t.subscribe(c=>{(s===void 0||n&&c!==s)&&r(s=c)})}return{notify:o,set:a,subscribe:l}}function w(){const{set:e,subscribe:t}=p(!1);let n;async function o(){clearTimeout(n);try{const a=await fetch(`${E}/_app/version.json`,{headers:{pragma:"no-cache","cache-control":"no-cache"}});if(!a.ok)return!1;const r=(await a.json()).version!==S;return r&&(e(!0),clearTimeout(n)),r}catch{return!1}}return{subscribe:t,check:o}}function A(e,t){return e.origin!==location.origin||!e.pathname.startsWith(t)}function V(e){e.client}const Y={url:h({}),page:h({}),navigating:p(null),updated:w()};export{I,_ as P,y as S,T as a,N as b,P as c,U as d,x as e,L as f,O as g,V as h,A as i,Y as s}; diff --git a/spaces/Hugorowan/BardJukebox/app.py b/spaces/Hugorowan/BardJukebox/app.py deleted file mode 100644 index acbe951b2c8714653de780d004ef756b69f80fd7..0000000000000000000000000000000000000000 --- a/spaces/Hugorowan/BardJukebox/app.py +++ /dev/null @@ -1,49 +0,0 @@ -import gradio as gr -import os - -# Import the model - -model = jukebox.make_vqvae(MODELS['5B_LYRICS'], device="cpu") - - -# Generate music -def generate_music(temperature=1.0, top_k=10, beam_width=5): - z = torch.randn(1, 1024) - audio = model.sample(z, temperature=temperature, top_k=top_k, beam_width=beam_width) - return audio - -# Input audio -def input_audio(): - audio_file = input("Enter the path to the audio file: ") - audio_data = librosa.load(audio_file) - return audio_data - -# Generate music from the input audio -def generate_music_from_audio(audio_data): - z = model.encode(audio_data) - audio = model.decode(z) - return audio - -# Save the music -def save_music(audio, filename): - librosa.output(filename, audio, sr=44100) - -# Play the music -def play_music(audio): - Audio(audio) - -# Create the Gradio interface -app = gr.Interface( - generate_music, - inputs=[gr.inputs.Slider(label="Temperature", min=0.0, max=1.0, step=0.1), - gr.inputs.Slider(label="Top K", min=1, max=10, step=1), - gr.inputs.Slider(label="Beam Width", min=1, max=10, step=1)], - outputs=gr.outputs.Audio(), - title="OpenAI Jukebox", - description="Generate music using OpenAI Jukebox", - allow_screenshot=True, - clear_output=True -) - -# Run the app -app.launch() \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/subword_nmt_bpe.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/subword_nmt_bpe.py deleted file mode 100644 index 5d724d2730a5895ca55af2998c2ced471625b516..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/subword_nmt_bpe.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class SubwordNMTBPEConfig(FairseqDataclass): - bpe_codes: str = field(default="???", metadata={"help": "path to subword NMT BPE"}) - bpe_separator: str = field(default="@@", metadata={"help": "BPE separator"}) - - -@register_bpe("subword_nmt", dataclass=SubwordNMTBPEConfig) -class SubwordNMTBPE(object): - def __init__(self, cfg): - if cfg.bpe_codes is None: - raise ValueError("--bpe-codes is required for --bpe=subword_nmt") - codes = file_utils.cached_path(cfg.bpe_codes) - try: - from subword_nmt import apply_bpe - - bpe_parser = apply_bpe.create_parser() - bpe_args = bpe_parser.parse_args( - [ - "--codes", - codes, - "--separator", - cfg.bpe_separator, - ] - ) - self.bpe = apply_bpe.BPE( - bpe_args.codes, - bpe_args.merges, - bpe_args.separator, - None, - bpe_args.glossaries, - ) - self.bpe_symbol = bpe_args.separator + " " - except ImportError: - raise ImportError( - "Please install subword_nmt with: pip install subword-nmt" - ) - - def encode(self, x: str) -> str: - return self.bpe.process_line(x) - - def decode(self, x: str) -> str: - return (x + " ").replace(self.bpe_symbol, "").rstrip() diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/hpo.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/hpo.py deleted file mode 100644 index 7dd5c92e8de170222b3cd3eae858f4f3cfddaff6..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/hpo.py +++ /dev/null @@ -1,118 +0,0 @@ -import argparse -import json -import logging -import os -import sys -from pathlib import Path - -import comet_ml - -logger = logging.getLogger(__name__) - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[3] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -from train import train -from utils.callbacks import Callbacks -from utils.general import increment_path -from utils.torch_utils import select_device - -# Project Configuration -config = comet_ml.config.get_config() -COMET_PROJECT_NAME = config.get_string(os.getenv("COMET_PROJECT_NAME"), "comet.project_name", default="yolov5") - - -def get_args(known=False): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='initial weights path') - parser.add_argument('--cfg', type=str, default='', help='model.yaml path') - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path') - parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path') - parser.add_argument('--epochs', type=int, default=300, help='total training epochs') - parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch') - parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)') - parser.add_argument('--rect', action='store_true', help='rectangular training') - parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') - parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') - parser.add_argument('--noval', action='store_true', help='only validate final epoch') - parser.add_argument('--noautoanchor', 
action='store_true', help='disable AutoAnchor') - parser.add_argument('--noplots', action='store_true', help='save no plot files') - parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations') - parser.add_argument('--bucket', type=str, default='', help='gsutil bucket') - parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"') - parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%') - parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class') - parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer') - parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode') - parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') - parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name') - parser.add_argument('--name', default='exp', help='save to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--quad', action='store_true', help='quad dataloader') - parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler') - parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon') - parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)') - parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2') - parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)') - parser.add_argument('--seed', type=int, default=0, help='Global training seed') - parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify') - - # Weights & Biases arguments - parser.add_argument('--entity', default=None, help='W&B: Entity') - parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='W&B: Upload data, "val" option') - parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval') - parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use') - - # Comet Arguments - parser.add_argument("--comet_optimizer_config", type=str, help="Comet: Path to a Comet Optimizer Config File.") - parser.add_argument("--comet_optimizer_id", type=str, help="Comet: ID of the Comet Optimizer sweep.") - parser.add_argument("--comet_optimizer_objective", type=str, help="Comet: Set to 'minimize' or 'maximize'.") - parser.add_argument("--comet_optimizer_metric", type=str, help="Comet: Metric to Optimize.") - parser.add_argument("--comet_optimizer_workers", - type=int, - default=1, - help="Comet: Number of Parallel Workers to use with the Comet Optimizer.") - - return parser.parse_known_args()[0] if known else parser.parse_args() - - -def run(parameters, opt): - hyp_dict = {k: v for k, v in parameters.items() if k not in ["epochs", "batch_size"]} - - opt.save_dir = 
str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve)) - opt.batch_size = parameters.get("batch_size") - opt.epochs = parameters.get("epochs") - - device = select_device(opt.device, batch_size=opt.batch_size) - train(hyp_dict, opt, device, callbacks=Callbacks()) - - -if __name__ == "__main__": - opt = get_args(known=True) - - opt.weights = str(opt.weights) - opt.cfg = str(opt.cfg) - opt.data = str(opt.data) - opt.project = str(opt.project) - - optimizer_id = os.getenv("COMET_OPTIMIZER_ID") - if optimizer_id is None: - with open(opt.comet_optimizer_config) as f: - optimizer_config = json.load(f) - optimizer = comet_ml.Optimizer(optimizer_config) - else: - optimizer = comet_ml.Optimizer(optimizer_id) - - opt.comet_optimizer_id = optimizer.id - status = optimizer.status() - - opt.comet_optimizer_objective = status["spec"]["objective"] - opt.comet_optimizer_metric = status["spec"]["metric"] - - logger.info("COMET INFO: Starting Hyperparameter Sweep") - for parameter in optimizer.get_parameters(): - run(parameter["parameters"], opt) diff --git a/spaces/Illumotion/Koboldcpp/examples/server/index.html.hpp b/spaces/Illumotion/Koboldcpp/examples/server/index.html.hpp deleted file mode 100644 index f302329299f43445ef3dc1ecab64ed02a8e44489..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/server/index.html.hpp +++ /dev/null @@ -1,2338 +0,0 @@ -unsigned char index_html[] = { - 0x3c, 0x68, 0x74, 0x6d, 0x6c, 0x3e, 0x0a, 0x0a, 0x3c, 0x68, 0x65, 0x61, - 0x64, 0x3e, 0x0a, 0x20, 0x20, 0x3c, 0x6d, 0x65, 0x74, 0x61, 0x20, 0x63, - 0x68, 0x61, 0x72, 0x73, 0x65, 0x74, 0x3d, 0x22, 0x55, 0x54, 0x46, 0x2d, - 0x38, 0x22, 0x3e, 0x0a, 0x20, 0x20, 0x3c, 0x6d, 0x65, 0x74, 0x61, 0x20, - 0x6e, 0x61, 0x6d, 0x65, 0x3d, 0x22, 0x76, 0x69, 0x65, 0x77, 0x70, 0x6f, - 0x72, 0x74, 0x22, 0x20, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x3d, - 0x22, 0x77, 0x69, 0x64, 0x74, 0x68, 0x3d, 0x64, 0x65, 0x76, 0x69, 0x63, - 0x65, 0x2d, 0x77, 0x69, 0x64, 0x74, 0x68, 0x2c, 0x20, 0x69, 0x6e, 0x69, - 0x74, 0x69, 0x61, 0x6c, 0x2d, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x3d, 0x31, - 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x69, 0x6d, 0x75, 0x6d, 0x2d, 0x73, 0x63, - 0x61, 0x6c, 0x65, 0x3d, 0x31, 0x22, 0x20, 0x2f, 0x3e, 0x0a, 0x20, 0x20, - 0x3c, 0x6d, 0x65, 0x74, 0x61, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3d, 0x22, - 0x63, 0x6f, 0x6c, 0x6f, 0x72, 0x2d, 0x73, 0x63, 0x68, 0x65, 0x6d, 0x65, - 0x22, 0x20, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x3d, 0x22, 0x6c, - 0x69, 0x67, 0x68, 0x74, 0x20, 0x64, 0x61, 0x72, 0x6b, 0x22, 0x3e, 0x0a, - 0x20, 0x20, 0x3c, 0x74, 0x69, 0x74, 0x6c, 0x65, 0x3e, 0x6c, 0x6c, 0x61, - 0x6d, 0x61, 0x2e, 0x63, 0x70, 0x70, 0x20, 0x2d, 0x20, 0x63, 0x68, 0x61, - 0x74, 0x3c, 0x2f, 0x74, 0x69, 0x74, 0x6c, 0x65, 0x3e, 0x0a, 0x0a, 0x20, - 0x20, 0x3c, 0x73, 0x74, 0x79, 0x6c, 0x65, 0x3e, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x62, 0x6f, 0x64, 0x79, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x66, 0x6f, 0x6e, 0x74, 0x2d, 0x66, 0x61, 0x6d, 0x69, 0x6c, - 0x79, 0x3a, 0x20, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x2d, 0x75, 0x69, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x66, 0x6f, 0x6e, 0x74, - 0x2d, 0x73, 0x69, 0x7a, 0x65, 0x3a, 0x20, 0x39, 0x30, 0x25, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x23, - 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x20, 0x7b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6d, 0x61, 0x72, 0x67, 0x69, 0x6e, - 0x3a, 0x20, 0x30, 0x65, 0x6d, 0x20, 0x61, 0x75, 0x74, 0x6f, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 
0x20, 0x20, 0x64, 0x69, 0x73, 0x70, 0x6c, 0x61, - 0x79, 0x3a, 0x20, 0x66, 0x6c, 0x65, 0x78, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x66, 0x6c, 0x65, 0x78, 0x2d, 0x64, 0x69, 0x72, 0x65, - 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x63, 0x6f, 0x6c, 0x75, 0x6d, - 0x6e, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6a, 0x75, 0x73, - 0x74, 0x69, 0x66, 0x79, 0x2d, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, - 0x3a, 0x20, 0x73, 0x70, 0x61, 0x63, 0x65, 0x2d, 0x62, 0x65, 0x74, 0x77, - 0x65, 0x65, 0x6e, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x68, - 0x65, 0x69, 0x67, 0x68, 0x74, 0x3a, 0x20, 0x31, 0x30, 0x30, 0x25, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x6d, 0x61, 0x69, 0x6e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x6d, 0x61, 0x72, 0x67, 0x69, 0x6e, 0x3a, 0x20, 0x33, 0x70, 0x78, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x64, 0x69, 0x73, 0x70, - 0x6c, 0x61, 0x79, 0x3a, 0x20, 0x66, 0x6c, 0x65, 0x78, 0x3b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x66, 0x6c, 0x65, 0x78, 0x2d, 0x64, 0x69, - 0x72, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x63, 0x6f, 0x6c, - 0x75, 0x6d, 0x6e, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6a, - 0x75, 0x73, 0x74, 0x69, 0x66, 0x79, 0x2d, 0x63, 0x6f, 0x6e, 0x74, 0x65, - 0x6e, 0x74, 0x3a, 0x20, 0x73, 0x70, 0x61, 0x63, 0x65, 0x2d, 0x62, 0x65, - 0x74, 0x77, 0x65, 0x65, 0x6e, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x67, 0x61, 0x70, 0x3a, 0x20, 0x31, 0x65, 0x6d, 0x3b, 0x0a, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x66, 0x6c, 0x65, 0x78, 0x2d, 0x67, - 0x72, 0x6f, 0x77, 0x3a, 0x20, 0x31, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x6f, 0x76, 0x65, 0x72, 0x66, 0x6c, 0x6f, 0x77, 0x2d, 0x79, - 0x3a, 0x20, 0x61, 0x75, 0x74, 0x6f, 0x3b, 0x0a, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x62, 0x6f, 0x72, 0x64, 0x65, 0x72, 0x3a, 0x20, 0x31, - 0x70, 0x78, 0x20, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x20, 0x23, 0x63, 0x63, - 0x63, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x62, 0x6f, 0x72, - 0x64, 0x65, 0x72, 0x2d, 0x72, 0x61, 0x64, 0x69, 0x75, 0x73, 0x3a, 0x20, - 0x35, 0x70, 0x78, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x70, - 0x61, 0x64, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x30, 0x2e, 0x35, 0x65, - 0x6d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x62, 0x6f, 0x64, 0x79, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x6d, 0x61, 0x78, 0x2d, 0x77, 0x69, 0x64, 0x74, 0x68, - 0x3a, 0x20, 0x36, 0x30, 0x30, 0x70, 0x78, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x6d, 0x69, 0x6e, 0x2d, 0x77, 0x69, 0x64, 0x74, 0x68, - 0x3a, 0x20, 0x33, 0x30, 0x30, 0x70, 0x78, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x6c, 0x69, 0x6e, 0x65, 0x2d, 0x68, 0x65, 0x69, 0x67, - 0x68, 0x74, 0x3a, 0x20, 0x31, 0x2e, 0x32, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x6d, 0x61, 0x72, 0x67, 0x69, 0x6e, 0x3a, 0x20, 0x30, - 0x20, 0x61, 0x75, 0x74, 0x6f, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x70, 0x61, 0x64, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x30, 0x20, - 0x30, 0x2e, 0x35, 0x65, 0x6d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, - 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x70, 0x20, 0x7b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x6f, 0x76, 0x65, 0x72, 0x66, 0x6c, 0x6f, 0x77, - 0x2d, 0x77, 0x72, 0x61, 0x70, 0x3a, 0x20, 0x62, 0x72, 0x65, 0x61, 0x6b, - 0x2d, 0x77, 0x6f, 0x72, 0x64, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x77, 0x6f, 0x72, 0x64, 0x2d, 0x77, 0x72, 0x61, 0x70, 0x3a, 0x20, - 0x62, 0x72, 0x65, 0x61, 
0x6b, 0x2d, 0x77, 0x6f, 0x72, 0x64, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x68, 0x79, 0x70, 0x68, 0x65, 0x6e, - 0x73, 0x3a, 0x20, 0x61, 0x75, 0x74, 0x6f, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x6d, 0x61, 0x72, 0x67, 0x69, 0x6e, 0x2d, 0x74, 0x6f, - 0x70, 0x3a, 0x20, 0x30, 0x2e, 0x35, 0x65, 0x6d, 0x3b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x6d, 0x61, 0x72, 0x67, 0x69, 0x6e, 0x2d, 0x62, - 0x6f, 0x74, 0x74, 0x6f, 0x6d, 0x3a, 0x20, 0x30, 0x2e, 0x35, 0x65, 0x6d, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x23, 0x77, 0x72, 0x69, 0x74, 0x65, 0x20, 0x66, 0x6f, 0x72, 0x6d, - 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6d, 0x61, 0x72, - 0x67, 0x69, 0x6e, 0x3a, 0x20, 0x31, 0x65, 0x6d, 0x20, 0x30, 0x20, 0x30, - 0x20, 0x30, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x64, 0x69, - 0x73, 0x70, 0x6c, 0x61, 0x79, 0x3a, 0x20, 0x66, 0x6c, 0x65, 0x78, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x66, 0x6c, 0x65, 0x78, 0x2d, - 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x63, - 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x67, 0x61, 0x70, 0x3a, 0x20, 0x30, 0x2e, 0x35, 0x65, 0x6d, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x61, 0x6c, 0x69, 0x67, 0x6e, - 0x2d, 0x69, 0x74, 0x65, 0x6d, 0x73, 0x3a, 0x20, 0x73, 0x74, 0x72, 0x65, - 0x74, 0x63, 0x68, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x2e, 0x72, 0x69, 0x67, 0x68, 0x74, 0x20, 0x7b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x64, 0x69, 0x73, 0x70, 0x6c, - 0x61, 0x79, 0x3a, 0x20, 0x66, 0x6c, 0x65, 0x78, 0x3b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x66, 0x6c, 0x65, 0x78, 0x2d, 0x64, 0x69, 0x72, - 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x72, 0x6f, 0x77, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x67, 0x61, 0x70, 0x3a, 0x20, - 0x30, 0x2e, 0x35, 0x65, 0x6d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x6a, 0x75, 0x73, 0x74, 0x69, 0x66, 0x79, 0x2d, 0x63, 0x6f, 0x6e, - 0x74, 0x65, 0x6e, 0x74, 0x3a, 0x20, 0x66, 0x6c, 0x65, 0x78, 0x2d, 0x65, - 0x6e, 0x64, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, 0x74, 0x20, - 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x62, 0x6f, 0x72, 0x64, - 0x65, 0x72, 0x3a, 0x20, 0x6e, 0x6f, 0x6e, 0x65, 0x3b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x70, 0x61, 0x64, 0x64, 0x69, 0x6e, 0x67, 0x3a, - 0x20, 0x30, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6d, 0x61, - 0x72, 0x67, 0x69, 0x6e, 0x3a, 0x20, 0x30, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x66, 0x69, 0x65, 0x6c, - 0x64, 0x73, 0x65, 0x74, 0x2e, 0x74, 0x77, 0x6f, 0x20, 0x7b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x64, 0x69, 0x73, 0x70, 0x6c, 0x61, 0x79, - 0x3a, 0x20, 0x67, 0x72, 0x69, 0x64, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x67, 0x72, 0x69, 0x64, 0x2d, 0x74, 0x65, 0x6d, 0x70, 0x6c, - 0x61, 0x74, 0x65, 0x3a, 0x20, 0x22, 0x61, 0x20, 0x61, 0x22, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x67, 0x61, 0x70, 0x3a, 0x20, 0x31, - 0x65, 0x6d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, 0x74, 0x2e, - 0x74, 0x68, 0x72, 0x65, 0x65, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x64, 0x69, 0x73, 0x70, 0x6c, 0x61, 0x79, 0x3a, 0x20, 0x67, - 0x72, 0x69, 0x64, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x67, - 0x72, 0x69, 0x64, 0x2d, 
0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x3a, 0x20, 0x22, 0x61, 0x20, 0x61, 0x20, 0x61, 0x22, 0x3b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x67, 0x61, 0x70, 0x3a, 0x20, 0x31, 0x65, - 0x6d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x20, 0x7b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x62, 0x6f, 0x72, 0x64, 0x65, 0x72, - 0x3a, 0x20, 0x31, 0x70, 0x78, 0x20, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x20, - 0x23, 0x61, 0x61, 0x61, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x62, 0x6f, 0x72, 0x64, 0x65, 0x72, 0x2d, 0x72, 0x61, 0x64, 0x69, 0x75, - 0x73, 0x3a, 0x20, 0x34, 0x70, 0x78, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x70, 0x61, 0x64, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x30, - 0x2e, 0x35, 0x65, 0x6d, 0x20, 0x30, 0x2e, 0x35, 0x65, 0x6d, 0x20, 0x30, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6d, 0x61, 0x72, 0x67, - 0x69, 0x6e, 0x2d, 0x74, 0x6f, 0x70, 0x3a, 0x20, 0x30, 0x2e, 0x35, 0x65, - 0x6d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x73, 0x75, 0x6d, 0x6d, 0x61, 0x72, 0x79, 0x20, 0x7b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x66, 0x6f, 0x6e, 0x74, 0x2d, 0x77, - 0x65, 0x69, 0x67, 0x68, 0x74, 0x3a, 0x20, 0x62, 0x6f, 0x6c, 0x64, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6d, 0x61, 0x72, 0x67, 0x69, - 0x6e, 0x3a, 0x20, 0x2d, 0x30, 0x2e, 0x35, 0x65, 0x6d, 0x20, 0x2d, 0x30, - 0x2e, 0x35, 0x65, 0x6d, 0x20, 0x30, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x70, 0x61, 0x64, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x30, - 0x2e, 0x35, 0x65, 0x6d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x75, 0x72, 0x73, 0x6f, 0x72, 0x3a, 0x20, 0x70, 0x6f, 0x69, 0x6e, - 0x74, 0x65, 0x72, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x5b, - 0x6f, 0x70, 0x65, 0x6e, 0x5d, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x70, 0x61, 0x64, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x30, - 0x2e, 0x35, 0x65, 0x6d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x2e, 0x70, 0x72, 0x6f, 0x62, 0x2d, 0x73, - 0x65, 0x74, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x70, - 0x61, 0x64, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x30, 0x2e, 0x33, 0x65, - 0x6d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x62, 0x6f, 0x72, - 0x64, 0x65, 0x72, 0x2d, 0x62, 0x6f, 0x74, 0x74, 0x6f, 0x6d, 0x3a, 0x20, - 0x31, 0x70, 0x78, 0x20, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x20, 0x23, 0x63, - 0x63, 0x63, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x2e, 0x70, 0x6f, 0x70, 0x6f, 0x76, 0x65, 0x72, 0x2d, - 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x20, 0x7b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, - 0x3a, 0x20, 0x61, 0x62, 0x73, 0x6f, 0x6c, 0x75, 0x74, 0x65, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x62, 0x61, 0x63, 0x6b, 0x67, 0x72, - 0x6f, 0x75, 0x6e, 0x64, 0x2d, 0x63, 0x6f, 0x6c, 0x6f, 0x72, 0x3a, 0x20, - 0x77, 0x68, 0x69, 0x74, 0x65, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x70, 0x61, 0x64, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x30, 0x2e, - 0x32, 0x65, 0x6d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x62, - 0x6f, 0x78, 0x2d, 0x73, 0x68, 0x61, 0x64, 0x6f, 0x77, 0x3a, 0x20, 0x30, - 0x20, 0x30, 0x20, 0x31, 0x30, 0x70, 0x78, 0x20, 0x72, 0x67, 0x62, 0x61, - 0x28, 0x30, 0x2c, 0x20, 0x30, 0x2c, 0x20, 0x30, 0x2c, 0x20, 0x30, 0x2e, - 0x31, 0x29, 0x3b, 0x0a, 
0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x74, 0x65, 0x78, 0x74, 0x61, 0x72, 0x65, 0x61, 0x20, - 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x70, 0x61, 0x64, 0x64, - 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x35, 0x70, 0x78, 0x3b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x66, 0x6c, 0x65, 0x78, 0x2d, 0x67, 0x72, 0x6f, - 0x77, 0x3a, 0x20, 0x31, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x77, 0x69, 0x64, 0x74, 0x68, 0x3a, 0x20, 0x31, 0x30, 0x30, 0x25, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x70, 0x72, 0x65, 0x20, 0x63, 0x6f, 0x64, 0x65, 0x20, 0x7b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x64, 0x69, 0x73, 0x70, 0x6c, 0x61, 0x79, - 0x3a, 0x20, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x62, 0x61, 0x63, 0x6b, 0x67, 0x72, 0x6f, 0x75, 0x6e, - 0x64, 0x2d, 0x63, 0x6f, 0x6c, 0x6f, 0x72, 0x3a, 0x20, 0x23, 0x32, 0x32, - 0x32, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6c, - 0x6f, 0x72, 0x3a, 0x20, 0x23, 0x64, 0x64, 0x64, 0x3b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x64, 0x65, - 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x66, 0x6f, 0x6e, - 0x74, 0x2d, 0x66, 0x61, 0x6d, 0x69, 0x6c, 0x79, 0x3a, 0x20, 0x6d, 0x6f, - 0x6e, 0x6f, 0x73, 0x70, 0x61, 0x63, 0x65, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x70, 0x61, 0x64, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, - 0x30, 0x2e, 0x31, 0x65, 0x6d, 0x20, 0x30, 0x2e, 0x33, 0x65, 0x6d, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x62, 0x6f, 0x72, 0x64, 0x65, - 0x72, 0x2d, 0x72, 0x61, 0x64, 0x69, 0x75, 0x73, 0x3a, 0x20, 0x33, 0x70, - 0x78, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, 0x74, 0x20, 0x6c, - 0x61, 0x62, 0x65, 0x6c, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x6d, 0x61, 0x72, 0x67, 0x69, 0x6e, 0x3a, 0x20, 0x30, 0x2e, 0x35, - 0x65, 0x6d, 0x20, 0x30, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x64, 0x69, 0x73, 0x70, 0x6c, 0x61, 0x79, 0x3a, 0x20, 0x62, 0x6c, 0x6f, - 0x63, 0x6b, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x68, 0x65, 0x61, 0x64, 0x65, 0x72, 0x2c, 0x20, 0x66, - 0x6f, 0x6f, 0x74, 0x65, 0x72, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x74, 0x65, 0x78, 0x74, 0x2d, 0x61, 0x6c, 0x69, 0x67, 0x6e, - 0x3a, 0x20, 0x63, 0x65, 0x6e, 0x74, 0x65, 0x72, 0x3b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x66, 0x6f, 0x6f, - 0x74, 0x65, 0x72, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x66, 0x6f, 0x6e, 0x74, 0x2d, 0x73, 0x69, 0x7a, 0x65, 0x3a, 0x20, 0x38, - 0x30, 0x25, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, - 0x6c, 0x6f, 0x72, 0x3a, 0x20, 0x23, 0x38, 0x38, 0x38, 0x3b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x40, - 0x6b, 0x65, 0x79, 0x66, 0x72, 0x61, 0x6d, 0x65, 0x73, 0x20, 0x6c, 0x6f, - 0x61, 0x64, 0x69, 0x6e, 0x67, 0x2d, 0x62, 0x67, 0x2d, 0x77, 0x69, 0x70, - 0x65, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x30, 0x25, - 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x62, - 0x61, 0x63, 0x6b, 0x67, 0x72, 0x6f, 0x75, 0x6e, 0x64, 0x2d, 0x70, 0x6f, - 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x30, 0x25, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x31, 0x30, 0x30, 0x25, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 
0x20, 0x62, 0x61, 0x63, 0x6b, 0x67, 0x72, 0x6f, - 0x75, 0x6e, 0x64, 0x2d, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, - 0x3a, 0x20, 0x31, 0x30, 0x30, 0x25, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x2e, 0x6c, 0x6f, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x20, - 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2d, 0x2d, 0x6c, 0x6f, - 0x61, 0x64, 0x69, 0x6e, 0x67, 0x2d, 0x63, 0x6f, 0x6c, 0x6f, 0x72, 0x2d, - 0x31, 0x3a, 0x20, 0x23, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x30, 0x30, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2d, 0x2d, 0x6c, 0x6f, - 0x61, 0x64, 0x69, 0x6e, 0x67, 0x2d, 0x63, 0x6f, 0x6c, 0x6f, 0x72, 0x2d, - 0x32, 0x3a, 0x20, 0x23, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x66, 0x66, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x62, 0x61, 0x63, 0x6b, - 0x67, 0x72, 0x6f, 0x75, 0x6e, 0x64, 0x2d, 0x73, 0x69, 0x7a, 0x65, 0x3a, - 0x20, 0x35, 0x30, 0x25, 0x20, 0x31, 0x30, 0x30, 0x25, 0x3b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x62, 0x61, 0x63, 0x6b, 0x67, 0x72, 0x6f, - 0x75, 0x6e, 0x64, 0x2d, 0x69, 0x6d, 0x61, 0x67, 0x65, 0x3a, 0x20, 0x6c, - 0x69, 0x6e, 0x65, 0x61, 0x72, 0x2d, 0x67, 0x72, 0x61, 0x64, 0x69, 0x65, - 0x6e, 0x74, 0x28, 0x39, 0x30, 0x64, 0x65, 0x67, 0x2c, 0x20, 0x76, 0x61, - 0x72, 0x28, 0x2d, 0x2d, 0x6c, 0x6f, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x2d, - 0x63, 0x6f, 0x6c, 0x6f, 0x72, 0x2d, 0x31, 0x29, 0x2c, 0x20, 0x76, 0x61, - 0x72, 0x28, 0x2d, 0x2d, 0x6c, 0x6f, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x2d, - 0x63, 0x6f, 0x6c, 0x6f, 0x72, 0x2d, 0x32, 0x29, 0x2c, 0x20, 0x76, 0x61, - 0x72, 0x28, 0x2d, 0x2d, 0x6c, 0x6f, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x2d, - 0x63, 0x6f, 0x6c, 0x6f, 0x72, 0x2d, 0x31, 0x29, 0x29, 0x3b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x61, 0x6e, 0x69, 0x6d, 0x61, 0x74, 0x69, - 0x6f, 0x6e, 0x3a, 0x20, 0x6c, 0x6f, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x2d, - 0x62, 0x67, 0x2d, 0x77, 0x69, 0x70, 0x65, 0x20, 0x32, 0x73, 0x20, 0x6c, - 0x69, 0x6e, 0x65, 0x61, 0x72, 0x20, 0x69, 0x6e, 0x66, 0x69, 0x6e, 0x69, - 0x74, 0x65, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x40, 0x6d, 0x65, 0x64, 0x69, 0x61, 0x20, 0x28, 0x70, - 0x72, 0x65, 0x66, 0x65, 0x72, 0x73, 0x2d, 0x63, 0x6f, 0x6c, 0x6f, 0x72, - 0x2d, 0x73, 0x63, 0x68, 0x65, 0x6d, 0x65, 0x3a, 0x20, 0x64, 0x61, 0x72, - 0x6b, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, - 0x6c, 0x6f, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x20, 0x7b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2d, 0x2d, 0x6c, 0x6f, 0x61, 0x64, - 0x69, 0x6e, 0x67, 0x2d, 0x63, 0x6f, 0x6c, 0x6f, 0x72, 0x2d, 0x31, 0x3a, - 0x20, 0x23, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x30, 0x30, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2d, 0x2d, 0x6c, 0x6f, - 0x61, 0x64, 0x69, 0x6e, 0x67, 0x2d, 0x63, 0x6f, 0x6c, 0x6f, 0x72, 0x2d, - 0x32, 0x3a, 0x20, 0x23, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x66, 0x66, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x2e, 0x70, 0x6f, 0x70, 0x6f, 0x76, 0x65, 0x72, - 0x2d, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x20, 0x7b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x62, 0x61, 0x63, 0x6b, 0x67, - 0x72, 0x6f, 0x75, 0x6e, 0x64, 0x2d, 0x63, 0x6f, 0x6c, 0x6f, 0x72, 0x3a, - 0x20, 0x62, 0x6c, 0x61, 0x63, 0x6b, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, - 0x3c, 0x2f, 0x73, 0x74, 0x79, 0x6c, 0x65, 0x3e, 0x0a, 0x0a, 0x20, 0x20, - 0x3c, 0x73, 0x63, 0x72, 
0x69, 0x70, 0x74, 0x20, 0x74, 0x79, 0x70, 0x65, - 0x3d, 0x22, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x22, 0x3e, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x69, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x20, 0x7b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x2c, 0x20, - 0x68, 0x2c, 0x20, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x6c, 0x2c, 0x20, 0x65, - 0x66, 0x66, 0x65, 0x63, 0x74, 0x2c, 0x20, 0x63, 0x6f, 0x6d, 0x70, 0x75, - 0x74, 0x65, 0x64, 0x2c, 0x20, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x72, 0x2c, - 0x20, 0x75, 0x73, 0x65, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x6c, 0x2c, 0x20, - 0x75, 0x73, 0x65, 0x45, 0x66, 0x66, 0x65, 0x63, 0x74, 0x2c, 0x20, 0x75, - 0x73, 0x65, 0x52, 0x65, 0x66, 0x2c, 0x20, 0x43, 0x6f, 0x6d, 0x70, 0x6f, - 0x6e, 0x65, 0x6e, 0x74, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x20, 0x66, - 0x72, 0x6f, 0x6d, 0x20, 0x27, 0x2f, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x2e, - 0x6a, 0x73, 0x27, 0x3b, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x69, 0x6d, - 0x70, 0x6f, 0x72, 0x74, 0x20, 0x7b, 0x20, 0x6c, 0x6c, 0x61, 0x6d, 0x61, - 0x20, 0x7d, 0x20, 0x66, 0x72, 0x6f, 0x6d, 0x20, 0x27, 0x2f, 0x63, 0x6f, - 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x6a, 0x73, 0x27, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x69, 0x6d, 0x70, 0x6f, 0x72, 0x74, - 0x20, 0x7b, 0x20, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x43, 0x6f, 0x6e, - 0x76, 0x65, 0x72, 0x74, 0x65, 0x72, 0x20, 0x7d, 0x20, 0x66, 0x72, 0x6f, - 0x6d, 0x20, 0x27, 0x2f, 0x6a, 0x73, 0x6f, 0x6e, 0x2d, 0x73, 0x63, 0x68, - 0x65, 0x6d, 0x61, 0x2d, 0x74, 0x6f, 0x2d, 0x67, 0x72, 0x61, 0x6d, 0x6d, - 0x61, 0x72, 0x2e, 0x6d, 0x6a, 0x73, 0x27, 0x3b, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x73, 0x65, 0x73, 0x73, - 0x69, 0x6f, 0x6e, 0x20, 0x3d, 0x20, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x6c, - 0x28, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x70, 0x72, 0x6f, - 0x6d, 0x70, 0x74, 0x3a, 0x20, 0x22, 0x54, 0x68, 0x69, 0x73, 0x20, 0x69, - 0x73, 0x20, 0x61, 0x20, 0x63, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x73, 0x61, - 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x62, 0x65, 0x74, 0x77, 0x65, 0x65, 0x6e, - 0x20, 0x55, 0x73, 0x65, 0x72, 0x20, 0x61, 0x6e, 0x64, 0x20, 0x4c, 0x6c, - 0x61, 0x6d, 0x61, 0x2c, 0x20, 0x61, 0x20, 0x66, 0x72, 0x69, 0x65, 0x6e, - 0x64, 0x6c, 0x79, 0x20, 0x63, 0x68, 0x61, 0x74, 0x62, 0x6f, 0x74, 0x2e, - 0x20, 0x4c, 0x6c, 0x61, 0x6d, 0x61, 0x20, 0x69, 0x73, 0x20, 0x68, 0x65, - 0x6c, 0x70, 0x66, 0x75, 0x6c, 0x2c, 0x20, 0x6b, 0x69, 0x6e, 0x64, 0x2c, - 0x20, 0x68, 0x6f, 0x6e, 0x65, 0x73, 0x74, 0x2c, 0x20, 0x67, 0x6f, 0x6f, - 0x64, 0x20, 0x61, 0x74, 0x20, 0x77, 0x72, 0x69, 0x74, 0x69, 0x6e, 0x67, - 0x2c, 0x20, 0x61, 0x6e, 0x64, 0x20, 0x6e, 0x65, 0x76, 0x65, 0x72, 0x20, - 0x66, 0x61, 0x69, 0x6c, 0x73, 0x20, 0x74, 0x6f, 0x20, 0x61, 0x6e, 0x73, - 0x77, 0x65, 0x72, 0x20, 0x61, 0x6e, 0x79, 0x20, 0x72, 0x65, 0x71, 0x75, - 0x65, 0x73, 0x74, 0x73, 0x20, 0x69, 0x6d, 0x6d, 0x65, 0x64, 0x69, 0x61, - 0x74, 0x65, 0x6c, 0x79, 0x20, 0x61, 0x6e, 0x64, 0x20, 0x77, 0x69, 0x74, - 0x68, 0x20, 0x70, 0x72, 0x65, 0x63, 0x69, 0x73, 0x69, 0x6f, 0x6e, 0x2e, - 0x22, 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x65, 0x6d, - 0x70, 0x6c, 0x61, 0x74, 0x65, 0x3a, 0x20, 0x22, 0x7b, 0x7b, 0x70, 0x72, - 0x6f, 0x6d, 0x70, 0x74, 0x7d, 0x7d, 0x5c, 0x6e, 0x5c, 0x6e, 0x7b, 0x7b, - 0x68, 0x69, 0x73, 0x74, 0x6f, 0x72, 0x79, 0x7d, 0x7d, 0x5c, 0x6e, 0x7b, - 0x7b, 0x63, 0x68, 0x61, 0x72, 0x7d, 0x7d, 0x3a, 0x22, 0x2c, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x68, 0x69, 0x73, 0x74, 0x6f, 0x72, 0x79, - 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x3a, 0x20, 0x22, 0x7b, - 0x7b, 0x6e, 0x61, 0x6d, 
0x65, 0x7d, 0x7d, 0x3a, 0x20, 0x7b, 0x7b, 0x6d, - 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x7d, 0x7d, 0x22, 0x2c, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x72, 0x61, 0x6e, 0x73, 0x63, 0x72, - 0x69, 0x70, 0x74, 0x3a, 0x20, 0x5b, 0x5d, 0x2c, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x74, 0x79, 0x70, 0x65, 0x3a, 0x20, 0x22, 0x63, 0x68, - 0x61, 0x74, 0x22, 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, - 0x68, 0x61, 0x72, 0x3a, 0x20, 0x22, 0x4c, 0x6c, 0x61, 0x6d, 0x61, 0x22, - 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x75, 0x73, 0x65, 0x72, - 0x3a, 0x20, 0x22, 0x55, 0x73, 0x65, 0x72, 0x22, 0x2c, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, - 0x6e, 0x73, 0x74, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x20, 0x3d, - 0x20, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x6c, 0x28, 0x7b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x6e, 0x5f, 0x70, 0x72, 0x65, 0x64, 0x69, 0x63, - 0x74, 0x3a, 0x20, 0x34, 0x30, 0x30, 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x65, 0x72, 0x61, 0x74, 0x75, 0x72, - 0x65, 0x3a, 0x20, 0x30, 0x2e, 0x37, 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x72, 0x65, 0x70, 0x65, 0x61, 0x74, 0x5f, 0x6c, 0x61, 0x73, - 0x74, 0x5f, 0x6e, 0x3a, 0x20, 0x32, 0x35, 0x36, 0x2c, 0x20, 0x2f, 0x2f, - 0x20, 0x30, 0x20, 0x3d, 0x20, 0x64, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, - 0x20, 0x70, 0x65, 0x6e, 0x61, 0x6c, 0x74, 0x79, 0x2c, 0x20, 0x2d, 0x31, - 0x20, 0x3d, 0x20, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x78, 0x74, 0x20, 0x73, - 0x69, 0x7a, 0x65, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, - 0x70, 0x65, 0x61, 0x74, 0x5f, 0x70, 0x65, 0x6e, 0x61, 0x6c, 0x74, 0x79, - 0x3a, 0x20, 0x31, 0x2e, 0x31, 0x38, 0x2c, 0x20, 0x2f, 0x2f, 0x20, 0x31, - 0x2e, 0x30, 0x20, 0x3d, 0x20, 0x64, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, - 0x64, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x6f, 0x70, 0x5f, - 0x6b, 0x3a, 0x20, 0x34, 0x30, 0x2c, 0x20, 0x2f, 0x2f, 0x20, 0x3c, 0x3d, - 0x20, 0x30, 0x20, 0x74, 0x6f, 0x20, 0x75, 0x73, 0x65, 0x20, 0x76, 0x6f, - 0x63, 0x61, 0x62, 0x20, 0x73, 0x69, 0x7a, 0x65, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x74, 0x6f, 0x70, 0x5f, 0x70, 0x3a, 0x20, 0x30, 0x2e, - 0x35, 0x2c, 0x20, 0x2f, 0x2f, 0x20, 0x31, 0x2e, 0x30, 0x20, 0x3d, 0x20, - 0x64, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x74, 0x66, 0x73, 0x5f, 0x7a, 0x3a, 0x20, 0x31, 0x2e, - 0x30, 0x2c, 0x20, 0x2f, 0x2f, 0x20, 0x31, 0x2e, 0x30, 0x20, 0x3d, 0x20, - 0x64, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x74, 0x79, 0x70, 0x69, 0x63, 0x61, 0x6c, 0x5f, 0x70, - 0x3a, 0x20, 0x31, 0x2e, 0x30, 0x2c, 0x20, 0x2f, 0x2f, 0x20, 0x31, 0x2e, - 0x30, 0x20, 0x3d, 0x20, 0x64, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, 0x64, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x70, 0x72, 0x65, 0x73, 0x65, - 0x6e, 0x63, 0x65, 0x5f, 0x70, 0x65, 0x6e, 0x61, 0x6c, 0x74, 0x79, 0x3a, - 0x20, 0x30, 0x2e, 0x30, 0x2c, 0x20, 0x2f, 0x2f, 0x20, 0x30, 0x2e, 0x30, - 0x20, 0x3d, 0x20, 0x64, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x66, 0x72, 0x65, 0x71, 0x75, 0x65, - 0x6e, 0x63, 0x79, 0x5f, 0x70, 0x65, 0x6e, 0x61, 0x6c, 0x74, 0x79, 0x3a, - 0x20, 0x30, 0x2e, 0x30, 0x2c, 0x20, 0x2f, 0x2f, 0x20, 0x30, 0x2e, 0x30, - 0x20, 0x3d, 0x20, 0x64, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6d, 0x69, 0x72, 0x6f, 0x73, 0x74, - 0x61, 0x74, 0x3a, 0x20, 0x30, 0x2c, 0x20, 0x2f, 0x2f, 0x20, 0x30, 0x2f, - 0x31, 0x2f, 0x32, 0x0a, 
0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6d, 0x69, - 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, 0x5f, 0x74, 0x61, 0x75, 0x3a, 0x20, - 0x35, 0x2c, 0x20, 0x2f, 0x2f, 0x20, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, - 0x20, 0x65, 0x6e, 0x74, 0x72, 0x6f, 0x70, 0x79, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x6d, 0x69, 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, 0x5f, - 0x65, 0x74, 0x61, 0x3a, 0x20, 0x30, 0x2e, 0x31, 0x2c, 0x20, 0x2f, 0x2f, - 0x20, 0x6c, 0x65, 0x61, 0x72, 0x6e, 0x69, 0x6e, 0x67, 0x20, 0x72, 0x61, - 0x74, 0x65, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x67, 0x72, 0x61, - 0x6d, 0x6d, 0x61, 0x72, 0x3a, 0x20, 0x27, 0x27, 0x2c, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x6e, 0x5f, 0x70, 0x72, 0x6f, 0x62, 0x73, 0x3a, - 0x20, 0x30, 0x2c, 0x20, 0x2f, 0x2f, 0x20, 0x6e, 0x6f, 0x20, 0x63, 0x6f, - 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x72, 0x6f, - 0x62, 0x61, 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, 0x73, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x7d, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x2f, - 0x2a, 0x20, 0x53, 0x54, 0x41, 0x52, 0x54, 0x3a, 0x20, 0x53, 0x75, 0x70, - 0x70, 0x6f, 0x72, 0x74, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x73, 0x74, 0x6f, - 0x72, 0x69, 0x6e, 0x67, 0x20, 0x70, 0x72, 0x6f, 0x6d, 0x70, 0x74, 0x20, - 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x20, 0x61, 0x6e, - 0x64, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x73, - 0x20, 0x69, 0x6e, 0x20, 0x62, 0x6f, 0x72, 0x77, 0x73, 0x65, 0x72, 0x20, - 0x4c, 0x6f, 0x63, 0x61, 0x6c, 0x53, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, - 0x20, 0x2a, 0x2f, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, - 0x73, 0x74, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x5f, 0x73, 0x74, 0x6f, - 0x72, 0x61, 0x67, 0x65, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, - 0x4b, 0x65, 0x79, 0x20, 0x3d, 0x20, 0x22, 0x6c, 0x6c, 0x61, 0x6d, 0x61, - 0x63, 0x70, 0x70, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x5f, 0x6c, - 0x6f, 0x63, 0x61, 0x6c, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, - 0x22, 0x3b, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x66, 0x75, 0x6e, 0x63, - 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x5f, 0x73, - 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x5f, 0x73, 0x65, 0x74, 0x44, 0x61, - 0x74, 0x61, 0x46, 0x72, 0x6f, 0x6d, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, - 0x28, 0x74, 0x61, 0x67, 0x2c, 0x20, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, - 0x74, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6c, - 0x6f, 0x63, 0x61, 0x6c, 0x53, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x2e, - 0x73, 0x65, 0x74, 0x49, 0x74, 0x65, 0x6d, 0x28, 0x6c, 0x6f, 0x63, 0x61, - 0x6c, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x5f, 0x73, 0x74, - 0x6f, 0x72, 0x61, 0x67, 0x65, 0x4b, 0x65, 0x79, 0x20, 0x2b, 0x20, 0x27, - 0x2f, 0x27, 0x20, 0x2b, 0x20, 0x74, 0x61, 0x67, 0x2c, 0x20, 0x4a, 0x53, - 0x4f, 0x4e, 0x2e, 0x73, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x69, 0x66, 0x79, - 0x28, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x29, 0x29, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x66, - 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x6c, 0x6f, 0x63, 0x61, - 0x6c, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x5f, 0x73, 0x65, - 0x74, 0x44, 0x61, 0x74, 0x61, 0x46, 0x72, 0x6f, 0x6d, 0x52, 0x61, 0x77, - 0x54, 0x65, 0x78, 0x74, 0x28, 0x74, 0x61, 0x67, 0x2c, 0x20, 0x63, 0x6f, - 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x53, 0x74, 0x6f, 0x72, - 0x61, 0x67, 0x65, 0x2e, 0x73, 0x65, 0x74, 0x49, 0x74, 0x65, 0x6d, 0x28, - 0x6c, 0x6f, 0x63, 0x61, 
0x6c, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, - 0x65, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x4b, 0x65, 0x79, - 0x20, 0x2b, 0x20, 0x27, 0x2f, 0x27, 0x20, 0x2b, 0x20, 0x74, 0x61, 0x67, - 0x2c, 0x20, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x29, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x66, - 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x6c, 0x6f, 0x63, 0x61, - 0x6c, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x5f, 0x67, 0x65, - 0x74, 0x44, 0x61, 0x74, 0x61, 0x41, 0x73, 0x4f, 0x62, 0x6a, 0x65, 0x63, - 0x74, 0x28, 0x74, 0x61, 0x67, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x69, 0x74, 0x65, - 0x6d, 0x20, 0x3d, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x53, 0x74, 0x6f, - 0x72, 0x61, 0x67, 0x65, 0x2e, 0x67, 0x65, 0x74, 0x49, 0x74, 0x65, 0x6d, - 0x28, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, - 0x67, 0x65, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x4b, 0x65, - 0x79, 0x20, 0x2b, 0x20, 0x27, 0x2f, 0x27, 0x20, 0x2b, 0x20, 0x74, 0x61, - 0x67, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x69, 0x66, - 0x20, 0x28, 0x21, 0x69, 0x74, 0x65, 0x6d, 0x29, 0x20, 0x7b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, - 0x6e, 0x20, 0x6e, 0x75, 0x6c, 0x6c, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x20, 0x65, 0x6c, 0x73, 0x65, 0x20, 0x7b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, - 0x6e, 0x20, 0x4a, 0x53, 0x4f, 0x4e, 0x2e, 0x70, 0x61, 0x72, 0x73, 0x65, - 0x28, 0x69, 0x74, 0x65, 0x6d, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x66, 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x20, - 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, - 0x65, 0x5f, 0x67, 0x65, 0x74, 0x44, 0x61, 0x74, 0x61, 0x41, 0x73, 0x52, - 0x61, 0x77, 0x54, 0x65, 0x78, 0x74, 0x28, 0x74, 0x61, 0x67, 0x29, 0x20, - 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, - 0x74, 0x20, 0x69, 0x74, 0x65, 0x6d, 0x20, 0x3d, 0x20, 0x6c, 0x6f, 0x63, - 0x61, 0x6c, 0x53, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x2e, 0x67, 0x65, - 0x74, 0x49, 0x74, 0x65, 0x6d, 0x28, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x5f, - 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x5f, 0x73, 0x74, 0x6f, 0x72, - 0x61, 0x67, 0x65, 0x4b, 0x65, 0x79, 0x20, 0x2b, 0x20, 0x27, 0x2f, 0x27, - 0x20, 0x2b, 0x20, 0x74, 0x61, 0x67, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, 0x21, 0x69, 0x74, 0x65, 0x6d, - 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x6e, 0x75, 0x6c, 0x6c, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x20, 0x65, 0x6c, 0x73, - 0x65, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x69, 0x74, 0x65, 0x6d, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x63, - 0x72, 0x65, 0x61, 0x74, 0x65, 0x20, 0x61, 0x20, 0x63, 0x6f, 0x6e, 0x74, - 0x61, 0x69, 0x6e, 0x65, 0x72, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x75, 0x73, - 0x65, 0x72, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, - 0x20, 0x61, 0x6e, 0x64, 0x20, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, - 0x73, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, - 0x20, 0x73, 0x61, 0x76, 
0x65, 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, - 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x20, 0x3d, 0x20, 0x73, 0x69, - 0x67, 0x6e, 0x61, 0x6c, 0x28, 0x7b, 0x7d, 0x29, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x73, 0x65, 0x6c, 0x65, 0x63, - 0x74, 0x65, 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, - 0x61, 0x74, 0x65, 0x20, 0x3d, 0x20, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x6c, - 0x28, 0x7b, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3a, 0x20, 0x27, 0x27, 0x2c, - 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x3a, 0x20, 0x7b, - 0x20, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x7b, 0x7d, - 0x2c, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x3a, 0x20, 0x7b, 0x7d, - 0x20, 0x7d, 0x20, 0x7d, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x2f, - 0x2f, 0x20, 0x6c, 0x65, 0x74, 0x27, 0x73, 0x20, 0x69, 0x6d, 0x70, 0x6f, - 0x72, 0x74, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x6c, 0x79, 0x20, 0x73, - 0x61, 0x76, 0x65, 0x64, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, - 0x65, 0x73, 0x20, 0x61, 0x6e, 0x64, 0x20, 0x73, 0x65, 0x74, 0x74, 0x69, - 0x6e, 0x67, 0x73, 0x20, 0x69, 0x66, 0x20, 0x74, 0x68, 0x65, 0x72, 0x65, - 0x20, 0x61, 0x72, 0x65, 0x20, 0x61, 0x6e, 0x79, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x2f, 0x2f, 0x20, 0x75, 0x73, 0x65, 0x72, 0x20, 0x74, 0x65, 0x6d, - 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x20, 0x61, 0x6e, 0x64, 0x20, 0x73, - 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x20, 0x61, 0x72, 0x65, 0x20, - 0x73, 0x74, 0x6f, 0x72, 0x65, 0x64, 0x20, 0x69, 0x6e, 0x20, 0x6f, 0x6e, - 0x65, 0x20, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x2f, 0x2f, 0x20, 0x69, 0x6e, 0x20, 0x66, 0x6f, 0x72, 0x6d, 0x20, - 0x6f, 0x66, 0x20, 0x7b, 0x20, 0x22, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, - 0x74, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x22, 0x3a, 0x20, 0x22, 0x74, 0x65, - 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x64, 0x61, 0x74, 0x61, 0x22, 0x20, - 0x7d, 0x20, 0x61, 0x6e, 0x64, 0x20, 0x7b, 0x20, 0x22, 0x73, 0x65, 0x74, - 0x74, 0x69, 0x6e, 0x67, 0x73, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, - 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x22, 0x3a, 0x22, 0x73, 0x65, 0x74, 0x74, - 0x69, 0x6e, 0x67, 0x73, 0x64, 0x61, 0x74, 0x61, 0x22, 0x20, 0x7d, 0x0a, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x65, - 0x2e, 0x6c, 0x6f, 0x67, 0x28, 0x27, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, - 0x69, 0x6e, 0x67, 0x20, 0x73, 0x61, 0x76, 0x65, 0x64, 0x20, 0x74, 0x65, - 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x27, 0x29, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x6c, 0x65, 0x74, 0x20, 0x69, 0x6d, 0x70, 0x6f, 0x72, - 0x74, 0x65, 0x64, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, - 0x20, 0x3d, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x5f, 0x73, 0x74, 0x6f, - 0x72, 0x61, 0x67, 0x65, 0x5f, 0x67, 0x65, 0x74, 0x44, 0x61, 0x74, 0x61, - 0x41, 0x73, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x28, 0x27, 0x75, 0x73, - 0x65, 0x72, 0x5f, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, - 0x27, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, - 0x69, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x54, 0x65, 0x6d, 0x70, - 0x6c, 0x61, 0x74, 0x65, 0x73, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x73, 0x61, 0x76, 0x65, 0x64, 0x20, - 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x20, 0x77, 0x65, - 0x72, 0x65, 0x20, 0x73, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x66, 0x75, - 0x6c, 0x79, 0x20, 0x69, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x2e, - 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, - 0x6f, 0x6c, 0x65, 0x2e, 
0x6c, 0x6f, 0x67, 0x28, 0x27, 0x50, 0x72, 0x6f, - 0x63, 0x65, 0x73, 0x73, 0x69, 0x6e, 0x67, 0x20, 0x73, 0x61, 0x76, 0x65, - 0x64, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x20, - 0x61, 0x6e, 0x64, 0x20, 0x75, 0x70, 0x64, 0x61, 0x74, 0x69, 0x6e, 0x67, - 0x20, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x20, 0x74, 0x65, 0x6d, - 0x70, 0x6c, 0x61, 0x74, 0x65, 0x27, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x65, - 0x2e, 0x6c, 0x6f, 0x67, 0x28, 0x69, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x65, - 0x64, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x29, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x73, 0x61, 0x76, 0x65, 0x64, - 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x69, 0x6d, - 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, - 0x74, 0x65, 0x73, 0x3b, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x2f, 0x2f, 0x6f, 0x76, 0x65, 0x72, 0x72, 0x69, 0x64, 0x65, 0x20, 0x64, - 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, - 0x61, 0x74, 0x65, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x73, 0x61, - 0x76, 0x65, 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, - 0x61, 0x74, 0x65, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x64, - 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x20, 0x3d, 0x20, 0x7b, 0x20, 0x73, - 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x73, 0x65, 0x73, 0x73, - 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2c, 0x20, 0x70, - 0x61, 0x72, 0x61, 0x6d, 0x73, 0x3a, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, - 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x7d, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x5f, 0x73, 0x74, - 0x6f, 0x72, 0x61, 0x67, 0x65, 0x5f, 0x73, 0x65, 0x74, 0x44, 0x61, 0x74, - 0x61, 0x46, 0x72, 0x6f, 0x6d, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x28, - 0x27, 0x75, 0x73, 0x65, 0x72, 0x5f, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, - 0x74, 0x65, 0x73, 0x27, 0x2c, 0x20, 0x73, 0x61, 0x76, 0x65, 0x64, 0x55, - 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, - 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x7d, 0x20, 0x65, 0x6c, 0x73, 0x65, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x6e, 0x6f, 0x20, 0x73, 0x61, 0x76, - 0x65, 0x64, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, - 0x20, 0x64, 0x65, 0x74, 0x65, 0x63, 0x74, 0x65, 0x64, 0x2e, 0x0a, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, - 0x65, 0x2e, 0x6c, 0x6f, 0x67, 0x28, 0x27, 0x49, 0x6e, 0x69, 0x74, 0x69, - 0x61, 0x6c, 0x69, 0x7a, 0x69, 0x6e, 0x67, 0x20, 0x4c, 0x6f, 0x63, 0x61, - 0x6c, 0x53, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x20, 0x61, 0x6e, 0x64, - 0x20, 0x73, 0x61, 0x76, 0x69, 0x6e, 0x67, 0x20, 0x64, 0x65, 0x66, 0x61, - 0x75, 0x6c, 0x74, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x27, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x73, 0x61, - 0x76, 0x65, 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, - 0x61, 0x74, 0x65, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x3d, - 0x20, 0x7b, 0x20, 0x22, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x22, - 0x3a, 0x20, 0x7b, 0x20, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x3a, - 0x20, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x2c, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x3a, 0x20, - 0x70, 0x61, 0x72, 0x61, 
0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, - 0x20, 0x7d, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6c, - 0x6f, 0x63, 0x61, 0x6c, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, - 0x5f, 0x73, 0x65, 0x74, 0x44, 0x61, 0x74, 0x61, 0x46, 0x72, 0x6f, 0x6d, - 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x28, 0x27, 0x75, 0x73, 0x65, 0x72, - 0x5f, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x27, 0x2c, - 0x20, 0x73, 0x61, 0x76, 0x65, 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, - 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x66, 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x75, - 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x52, - 0x65, 0x73, 0x65, 0x74, 0x54, 0x6f, 0x44, 0x65, 0x66, 0x61, 0x75, 0x6c, - 0x74, 0x28, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x65, 0x2e, 0x6c, 0x6f, 0x67, 0x28, - 0x27, 0x52, 0x65, 0x73, 0x65, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x74, 0x68, - 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x20, 0x74, 0x6f, 0x20, 0x64, - 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x65, 0x64, 0x55, - 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x2e, - 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x6e, 0x61, 0x6d, 0x65, 0x20, 0x3d, - 0x20, 0x27, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x27, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, - 0x65, 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, - 0x74, 0x65, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x64, 0x61, 0x74, - 0x61, 0x20, 0x3d, 0x20, 0x73, 0x61, 0x76, 0x65, 0x64, 0x55, 0x73, 0x65, - 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x2e, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x5b, 0x27, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, - 0x74, 0x27, 0x5d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x66, 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, - 0x20, 0x75, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, - 0x65, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x28, 0x74, 0x29, 0x20, 0x7b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, - 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x74, 0x2e, - 0x64, 0x61, 0x74, 0x61, 0x2e, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x70, 0x61, 0x72, 0x61, - 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x74, - 0x2e, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x66, 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x75, 0x73, - 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x52, 0x65, - 0x73, 0x65, 0x74, 0x54, 0x6f, 0x44, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, - 0x41, 0x6e, 0x64, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x28, 0x29, 0x20, 0x7b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x75, 0x73, 0x65, 0x72, 0x54, - 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x65, 0x74, - 0x54, 0x6f, 0x44, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x28, 0x29, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x75, 0x73, 0x65, 0x72, 0x54, 0x65, - 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x28, - 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x65, 0x64, 0x55, 0x73, 0x65, 0x72, - 0x54, 0x65, 0x6d, 0x70, 
0x6c, 0x61, 0x74, 0x65, 0x2e, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x66, 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x20, - 0x75, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x4c, 0x6f, 0x61, 0x64, 0x41, 0x6e, 0x64, 0x41, 0x70, 0x70, 0x6c, 0x79, - 0x41, 0x75, 0x74, 0x6f, 0x73, 0x61, 0x76, 0x65, 0x64, 0x28, 0x29, 0x20, - 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x67, - 0x65, 0x74, 0x20, 0x61, 0x75, 0x74, 0x6f, 0x73, 0x61, 0x76, 0x65, 0x64, - 0x20, 0x6c, 0x61, 0x73, 0x74, 0x20, 0x75, 0x73, 0x65, 0x64, 0x20, 0x74, - 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x6c, 0x65, 0x74, 0x20, 0x6c, 0x61, 0x73, 0x74, 0x55, 0x73, - 0x65, 0x64, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x20, 0x3d, - 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, - 0x67, 0x65, 0x5f, 0x67, 0x65, 0x74, 0x44, 0x61, 0x74, 0x61, 0x41, 0x73, - 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x28, 0x27, 0x75, 0x73, 0x65, 0x72, - 0x5f, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x5f, 0x6c, - 0x61, 0x73, 0x74, 0x27, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x69, 0x66, 0x20, 0x28, 0x6c, 0x61, 0x73, 0x74, 0x55, 0x73, 0x65, - 0x64, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x29, 0x20, 0x7b, - 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, - 0x6e, 0x73, 0x6f, 0x6c, 0x65, 0x2e, 0x6c, 0x6f, 0x67, 0x28, 0x27, 0x41, - 0x75, 0x74, 0x6f, 0x73, 0x61, 0x76, 0x65, 0x64, 0x20, 0x74, 0x65, 0x6d, - 0x70, 0x6c, 0x61, 0x74, 0x65, 0x20, 0x66, 0x6f, 0x75, 0x6e, 0x64, 0x2c, - 0x20, 0x72, 0x65, 0x73, 0x74, 0x6f, 0x72, 0x69, 0x6e, 0x67, 0x27, 0x29, - 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x73, 0x65, - 0x6c, 0x65, 0x63, 0x74, 0x65, 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, - 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, - 0x20, 0x3d, 0x20, 0x6c, 0x61, 0x73, 0x74, 0x55, 0x73, 0x65, 0x64, 0x54, - 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x65, 0x6c, - 0x73, 0x65, 0x20, 0x7b, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x65, 0x2e, 0x6c, 0x6f, - 0x67, 0x28, 0x27, 0x4e, 0x6f, 0x20, 0x61, 0x75, 0x74, 0x6f, 0x73, 0x61, - 0x76, 0x65, 0x64, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x20, 0x66, 0x6f, 0x75, 0x6e, 0x64, 0x2c, 0x20, 0x75, 0x73, 0x69, 0x6e, - 0x67, 0x20, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x20, 0x74, 0x65, - 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x6e, 0x6f, 0x20, 0x61, - 0x75, 0x74, 0x6f, 0x73, 0x61, 0x76, 0x65, 0x64, 0x20, 0x6c, 0x61, 0x73, - 0x74, 0x20, 0x75, 0x73, 0x65, 0x64, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, - 0x61, 0x74, 0x65, 0x20, 0x77, 0x61, 0x73, 0x20, 0x66, 0x6f, 0x75, 0x6e, - 0x64, 0x2c, 0x20, 0x73, 0x6f, 0x20, 0x6c, 0x6f, 0x61, 0x64, 0x20, 0x66, - 0x72, 0x6f, 0x6d, 0x20, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x2e, - 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x75, 0x73, - 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x52, 0x65, - 0x73, 0x65, 0x74, 0x54, 0x6f, 0x44, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, - 0x28, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, - 0x65, 0x2e, 0x6c, 0x6f, 
0x67, 0x28, 0x27, 0x41, 0x70, 0x70, 0x6c, 0x79, - 0x69, 0x6e, 0x67, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, - 0x61, 0x6e, 0x64, 0x20, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x20, 0x69, - 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x20, 0x64, 0x61, 0x74, 0x61, - 0x20, 0x66, 0x72, 0x6f, 0x6d, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, - 0x74, 0x65, 0x73, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x75, - 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x41, - 0x70, 0x70, 0x6c, 0x79, 0x28, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x65, - 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, - 0x65, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x29, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x63, 0x6f, - 0x6e, 0x73, 0x6f, 0x6c, 0x65, 0x2e, 0x6c, 0x6f, 0x67, 0x28, 0x73, 0x61, - 0x76, 0x65, 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, - 0x61, 0x74, 0x65, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x29, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, - 0x65, 0x2e, 0x6c, 0x6f, 0x67, 0x28, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, - 0x65, 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, - 0x74, 0x65, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x29, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x66, 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x20, - 0x75, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x41, 0x75, 0x74, 0x6f, 0x73, 0x61, 0x76, 0x65, 0x28, 0x29, 0x20, 0x7b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x6f, - 0x6c, 0x65, 0x2e, 0x6c, 0x6f, 0x67, 0x28, 0x27, 0x54, 0x65, 0x6d, 0x70, - 0x6c, 0x61, 0x74, 0x65, 0x20, 0x41, 0x75, 0x74, 0x6f, 0x73, 0x61, 0x76, - 0x65, 0x2e, 0x2e, 0x2e, 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x69, 0x66, 0x20, 0x28, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x65, - 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, - 0x65, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x6e, 0x61, 0x6d, 0x65, - 0x20, 0x3d, 0x3d, 0x20, 0x27, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, - 0x27, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x2f, 0x2f, 0x20, 0x77, 0x65, 0x20, 0x64, 0x6f, 0x6e, 0x27, 0x74, - 0x20, 0x77, 0x61, 0x6e, 0x74, 0x20, 0x74, 0x6f, 0x20, 0x73, 0x61, 0x76, - 0x65, 0x20, 0x6f, 0x76, 0x65, 0x72, 0x20, 0x64, 0x65, 0x66, 0x61, 0x75, - 0x6c, 0x74, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x2c, - 0x20, 0x73, 0x6f, 0x20, 0x6c, 0x65, 0x74, 0x27, 0x73, 0x20, 0x63, 0x72, - 0x65, 0x61, 0x74, 0x65, 0x20, 0x61, 0x20, 0x6e, 0x65, 0x77, 0x20, 0x6f, - 0x6e, 0x65, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6c, - 0x65, 0x74, 0x20, 0x6e, 0x65, 0x77, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, - 0x74, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x20, 0x3d, 0x20, 0x27, 0x55, 0x73, - 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x2d, 0x27, - 0x20, 0x2b, 0x20, 0x44, 0x61, 0x74, 0x65, 0x2e, 0x6e, 0x6f, 0x77, 0x28, - 0x29, 0x2e, 0x74, 0x6f, 0x53, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x28, 0x29, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6c, 0x65, 0x74, - 0x20, 0x6e, 0x65, 0x77, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x20, 0x3d, 0x20, 0x7b, 0x20, 0x27, 0x6e, 0x61, 0x6d, 0x65, 0x27, 0x3a, - 0x20, 0x6e, 0x65, 0x77, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x4e, 0x61, 0x6d, 0x65, 0x2c, 0x20, 0x27, 0x64, 0x61, 0x74, 0x61, 0x27, - 0x3a, 0x20, 0x7b, 0x20, 
0x27, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, - 0x27, 0x3a, 0x20, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x2c, 0x20, 0x27, 0x70, 0x61, 0x72, 0x61, 0x6d, - 0x73, 0x27, 0x3a, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x20, 0x7d, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, - 0x65, 0x2e, 0x6c, 0x6f, 0x67, 0x28, 0x27, 0x53, 0x61, 0x76, 0x69, 0x6e, - 0x67, 0x20, 0x61, 0x73, 0x20, 0x27, 0x20, 0x2b, 0x20, 0x6e, 0x65, 0x77, - 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x4e, 0x61, 0x6d, 0x65, - 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2f, - 0x2f, 0x20, 0x73, 0x61, 0x76, 0x65, 0x20, 0x69, 0x6e, 0x20, 0x74, 0x68, - 0x65, 0x20, 0x61, 0x75, 0x74, 0x6f, 0x73, 0x61, 0x76, 0x65, 0x20, 0x73, - 0x6c, 0x6f, 0x74, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x5f, 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, - 0x65, 0x5f, 0x73, 0x65, 0x74, 0x44, 0x61, 0x74, 0x61, 0x46, 0x72, 0x6f, - 0x6d, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x28, 0x27, 0x75, 0x73, 0x65, - 0x72, 0x5f, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x5f, - 0x6c, 0x61, 0x73, 0x74, 0x27, 0x2c, 0x20, 0x6e, 0x65, 0x77, 0x54, 0x65, - 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x61, 0x6e, 0x64, 0x20, - 0x6c, 0x6f, 0x61, 0x64, 0x20, 0x69, 0x74, 0x20, 0x62, 0x61, 0x63, 0x6b, - 0x20, 0x61, 0x6e, 0x64, 0x20, 0x61, 0x70, 0x70, 0x6c, 0x79, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x75, 0x73, 0x65, 0x72, 0x54, - 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x4c, 0x6f, 0x61, 0x64, 0x41, - 0x6e, 0x64, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x41, 0x75, 0x74, 0x6f, 0x73, - 0x61, 0x76, 0x65, 0x64, 0x28, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x7d, 0x20, 0x65, 0x6c, 0x73, 0x65, 0x20, 0x7b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x5f, - 0x73, 0x74, 0x6f, 0x72, 0x61, 0x67, 0x65, 0x5f, 0x73, 0x65, 0x74, 0x44, - 0x61, 0x74, 0x61, 0x46, 0x72, 0x6f, 0x6d, 0x4f, 0x62, 0x6a, 0x65, 0x63, - 0x74, 0x28, 0x27, 0x75, 0x73, 0x65, 0x72, 0x5f, 0x74, 0x65, 0x6d, 0x70, - 0x6c, 0x61, 0x74, 0x65, 0x73, 0x5f, 0x6c, 0x61, 0x73, 0x74, 0x27, 0x2c, - 0x20, 0x7b, 0x20, 0x27, 0x6e, 0x61, 0x6d, 0x65, 0x27, 0x3a, 0x20, 0x73, - 0x65, 0x6c, 0x65, 0x63, 0x74, 0x65, 0x64, 0x55, 0x73, 0x65, 0x72, 0x54, - 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x2e, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x2e, 0x6e, 0x61, 0x6d, 0x65, 0x2c, 0x20, 0x27, 0x64, 0x61, 0x74, - 0x61, 0x27, 0x3a, 0x20, 0x7b, 0x20, 0x27, 0x73, 0x65, 0x73, 0x73, 0x69, - 0x6f, 0x6e, 0x27, 0x3a, 0x20, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, - 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2c, 0x20, 0x27, 0x70, 0x61, 0x72, - 0x61, 0x6d, 0x73, 0x27, 0x3a, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, - 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x7d, 0x20, 0x7d, 0x29, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x6f, - 0x6c, 0x65, 0x2e, 0x6c, 0x6f, 0x67, 0x28, 0x27, 0x43, 0x68, 0x65, 0x63, - 0x6b, 0x69, 0x6e, 0x67, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x61, 0x75, 0x74, - 0x6f, 0x73, 0x61, 0x76, 0x65, 0x64, 0x20, 0x6c, 0x61, 0x73, 0x74, 0x20, - 0x75, 0x73, 0x65, 0x64, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, - 0x65, 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x75, 0x73, 0x65, 0x72, - 0x54, 0x65, 0x6d, 0x70, 
0x6c, 0x61, 0x74, 0x65, 0x4c, 0x6f, 0x61, 0x64, - 0x41, 0x6e, 0x64, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x41, 0x75, 0x74, 0x6f, - 0x73, 0x61, 0x76, 0x65, 0x64, 0x28, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x2f, 0x2a, 0x20, 0x45, 0x4e, 0x44, 0x3a, 0x20, 0x53, 0x75, 0x70, - 0x70, 0x6f, 0x72, 0x74, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x73, 0x74, 0x6f, - 0x72, 0x69, 0x6e, 0x67, 0x20, 0x70, 0x72, 0x6f, 0x6d, 0x70, 0x74, 0x20, - 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x73, 0x20, 0x61, 0x6e, - 0x64, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x73, - 0x20, 0x69, 0x6e, 0x20, 0x62, 0x72, 0x6f, 0x77, 0x73, 0x65, 0x72, 0x73, - 0x20, 0x4c, 0x6f, 0x63, 0x61, 0x6c, 0x53, 0x74, 0x6f, 0x72, 0x61, 0x67, - 0x65, 0x20, 0x2a, 0x2f, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, - 0x6e, 0x73, 0x74, 0x20, 0x6c, 0x6c, 0x61, 0x6d, 0x61, 0x53, 0x74, 0x61, - 0x74, 0x73, 0x20, 0x3d, 0x20, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x6c, 0x28, - 0x6e, 0x75, 0x6c, 0x6c, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, - 0x6e, 0x73, 0x74, 0x20, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, - 0x65, 0x72, 0x20, 0x3d, 0x20, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x6c, 0x28, - 0x6e, 0x75, 0x6c, 0x6c, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x2f, - 0x2f, 0x20, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x6c, 0x79, 0x20, - 0x67, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x61, - 0x20, 0x63, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x3f, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x67, - 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x3d, 0x20, - 0x63, 0x6f, 0x6d, 0x70, 0x75, 0x74, 0x65, 0x64, 0x28, 0x28, 0x29, 0x20, - 0x3d, 0x3e, 0x20, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65, - 0x72, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x21, 0x3d, 0x20, 0x6e, - 0x75, 0x6c, 0x6c, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, - 0x20, 0x68, 0x61, 0x73, 0x20, 0x74, 0x68, 0x65, 0x20, 0x75, 0x73, 0x65, - 0x72, 0x20, 0x73, 0x74, 0x61, 0x72, 0x74, 0x65, 0x64, 0x20, 0x61, 0x20, - 0x63, 0x68, 0x61, 0x74, 0x3f, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, - 0x6e, 0x73, 0x74, 0x20, 0x63, 0x68, 0x61, 0x74, 0x53, 0x74, 0x61, 0x72, - 0x74, 0x65, 0x64, 0x20, 0x3d, 0x20, 0x63, 0x6f, 0x6d, 0x70, 0x75, 0x74, - 0x65, 0x64, 0x28, 0x28, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x73, 0x65, 0x73, - 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x74, - 0x72, 0x61, 0x6e, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x2e, 0x6c, 0x65, - 0x6e, 0x67, 0x74, 0x68, 0x20, 0x3e, 0x20, 0x30, 0x29, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x74, 0x72, 0x61, - 0x6e, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x55, 0x70, 0x64, 0x61, 0x74, - 0x65, 0x20, 0x3d, 0x20, 0x28, 0x74, 0x72, 0x61, 0x6e, 0x73, 0x63, 0x72, - 0x69, 0x70, 0x74, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, - 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x7b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, 0x2e, 0x2e, 0x73, 0x65, 0x73, - 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2c, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x72, 0x61, 0x6e, - 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x73, 0x69, 0x6d, 0x70, 0x6c, 0x65, 0x20, - 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x20, 0x72, 0x65, 0x70, - 0x6c, 0x61, 0x63, 0x65, 
0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, - 0x73, 0x74, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x20, - 0x3d, 0x20, 0x28, 0x73, 0x74, 0x72, 0x2c, 0x20, 0x65, 0x78, 0x74, 0x72, - 0x61, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x29, 0x20, 0x3d, - 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6c, 0x65, - 0x74, 0x20, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x20, 0x3d, - 0x20, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x69, 0x66, - 0x20, 0x28, 0x65, 0x78, 0x74, 0x72, 0x61, 0x53, 0x65, 0x74, 0x74, 0x69, - 0x6e, 0x67, 0x73, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x20, - 0x3d, 0x20, 0x7b, 0x20, 0x2e, 0x2e, 0x2e, 0x73, 0x65, 0x74, 0x74, 0x69, - 0x6e, 0x67, 0x73, 0x2c, 0x20, 0x2e, 0x2e, 0x2e, 0x65, 0x78, 0x74, 0x72, - 0x61, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x20, 0x7d, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x53, 0x74, - 0x72, 0x69, 0x6e, 0x67, 0x28, 0x73, 0x74, 0x72, 0x29, 0x2e, 0x72, 0x65, - 0x70, 0x6c, 0x61, 0x63, 0x65, 0x41, 0x6c, 0x6c, 0x28, 0x2f, 0x5c, 0x7b, - 0x5c, 0x7b, 0x28, 0x2e, 0x2a, 0x3f, 0x29, 0x5c, 0x7d, 0x5c, 0x7d, 0x2f, - 0x67, 0x2c, 0x20, 0x28, 0x5f, 0x2c, 0x20, 0x6b, 0x65, 0x79, 0x29, 0x20, - 0x3d, 0x3e, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x28, - 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x5b, 0x6b, 0x65, 0x79, - 0x5d, 0x29, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x73, 0x65, 0x6e, 0x64, 0x20, - 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x20, 0x74, 0x6f, 0x20, 0x73, - 0x65, 0x72, 0x76, 0x65, 0x72, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, - 0x6e, 0x73, 0x74, 0x20, 0x63, 0x68, 0x61, 0x74, 0x20, 0x3d, 0x20, 0x61, - 0x73, 0x79, 0x6e, 0x63, 0x20, 0x28, 0x6d, 0x73, 0x67, 0x29, 0x20, 0x3d, - 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x69, 0x66, - 0x20, 0x28, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65, 0x72, - 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, - 0x65, 0x2e, 0x6c, 0x6f, 0x67, 0x28, 0x27, 0x61, 0x6c, 0x72, 0x65, 0x61, - 0x64, 0x79, 0x20, 0x72, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x2e, 0x2e, - 0x2e, 0x27, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, - 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2e, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x6e, 0x65, 0x77, 0x20, 0x41, 0x62, - 0x6f, 0x72, 0x74, 0x43, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65, - 0x72, 0x28, 0x29, 0x3b, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x74, 0x72, 0x61, 0x6e, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x55, 0x70, - 0x64, 0x61, 0x74, 0x65, 0x28, 0x5b, 0x2e, 0x2e, 0x2e, 0x73, 0x65, 0x73, - 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x74, - 0x72, 0x61, 0x6e, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x2c, 0x20, 0x5b, - 0x22, 0x7b, 0x7b, 0x75, 0x73, 0x65, 0x72, 0x7d, 0x7d, 0x22, 0x2c, 0x20, - 0x6d, 0x73, 0x67, 0x5d, 0x5d, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x70, 0x72, 0x6f, 0x6d, - 0x70, 0x74, 0x20, 0x3d, 
0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, - 0x65, 0x28, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x2e, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x2c, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x3a, 0x20, 0x6d, 0x73, 0x67, - 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x68, 0x69, - 0x73, 0x74, 0x6f, 0x72, 0x79, 0x3a, 0x20, 0x73, 0x65, 0x73, 0x73, 0x69, - 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x74, 0x72, 0x61, - 0x6e, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x2e, 0x66, 0x6c, 0x61, 0x74, - 0x4d, 0x61, 0x70, 0x28, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x28, 0x5b, 0x6e, 0x61, 0x6d, 0x65, 0x2c, 0x20, 0x64, - 0x61, 0x74, 0x61, 0x5d, 0x29, 0x20, 0x3d, 0x3e, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x65, 0x6d, - 0x70, 0x6c, 0x61, 0x74, 0x65, 0x28, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x73, 0x65, 0x73, - 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x68, - 0x69, 0x73, 0x74, 0x6f, 0x72, 0x79, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, - 0x74, 0x65, 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x6e, 0x61, 0x6d, 0x65, 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6d, 0x65, - 0x73, 0x73, 0x61, 0x67, 0x65, 0x3a, 0x20, 0x41, 0x72, 0x72, 0x61, 0x79, - 0x2e, 0x69, 0x73, 0x41, 0x72, 0x72, 0x61, 0x79, 0x28, 0x64, 0x61, 0x74, - 0x61, 0x29, 0x20, 0x3f, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x64, - 0x61, 0x74, 0x61, 0x2e, 0x6d, 0x61, 0x70, 0x28, 0x6d, 0x73, 0x67, 0x20, - 0x3d, 0x3e, 0x20, 0x6d, 0x73, 0x67, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x65, - 0x6e, 0x74, 0x29, 0x2e, 0x6a, 0x6f, 0x69, 0x6e, 0x28, 0x27, 0x27, 0x29, - 0x2e, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x28, 0x2f, 0x5e, 0x5c, - 0x73, 0x2f, 0x2c, 0x20, 0x27, 0x27, 0x29, 0x20, 0x3a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x64, 0x61, 0x74, 0x61, 0x2c, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x29, 0x2e, 0x6a, 0x6f, 0x69, 0x6e, 0x28, 0x22, 0x5c, 0x6e, 0x22, 0x29, - 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x29, 0x3b, 0x0a, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, - 0x20, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x4d, 0x65, 0x73, 0x73, - 0x61, 0x67, 0x65, 0x73, 0x20, 0x3d, 0x20, 0x5b, 0x5d, 0x3b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x68, - 0x69, 0x73, 0x74, 0x6f, 0x72, 0x79, 0x20, 0x3d, 0x20, 0x73, 0x65, 0x73, - 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x74, - 0x72, 0x61, 0x6e, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x6c, - 0x6c, 0x61, 0x6d, 0x61, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x20, 0x3d, - 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, - 0x2e, 0x2e, 0x70, 0x61, 
0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x73, 0x74, 0x6f, 0x70, 0x3a, 0x20, 0x5b, 0x22, 0x3c, 0x2f, 0x73, 0x3e, - 0x22, 0x2c, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x28, - 0x22, 0x7b, 0x7b, 0x63, 0x68, 0x61, 0x72, 0x7d, 0x7d, 0x3a, 0x22, 0x29, - 0x2c, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x28, 0x22, - 0x7b, 0x7b, 0x75, 0x73, 0x65, 0x72, 0x7d, 0x7d, 0x3a, 0x22, 0x29, 0x5d, - 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x61, 0x77, 0x61, - 0x69, 0x74, 0x20, 0x28, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x63, 0x68, - 0x75, 0x6e, 0x6b, 0x20, 0x6f, 0x66, 0x20, 0x6c, 0x6c, 0x61, 0x6d, 0x61, - 0x28, 0x70, 0x72, 0x6f, 0x6d, 0x70, 0x74, 0x2c, 0x20, 0x6c, 0x6c, 0x61, - 0x6d, 0x61, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2c, 0x20, 0x7b, 0x20, - 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x3a, 0x20, - 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2e, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x20, 0x7d, 0x29, 0x29, 0x20, 0x7b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, - 0x20, 0x64, 0x61, 0x74, 0x61, 0x20, 0x3d, 0x20, 0x63, 0x68, 0x75, 0x6e, - 0x6b, 0x2e, 0x64, 0x61, 0x74, 0x61, 0x3b, 0x0a, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, 0x64, 0x61, 0x74, - 0x61, 0x2e, 0x73, 0x74, 0x6f, 0x70, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x77, 0x68, 0x69, 0x6c, - 0x65, 0x20, 0x28, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x4d, - 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x73, 0x2e, 0x6c, 0x65, 0x6e, 0x67, - 0x74, 0x68, 0x20, 0x3e, 0x20, 0x30, 0x20, 0x26, 0x26, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x75, - 0x72, 0x72, 0x65, 0x6e, 0x74, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, - 0x73, 0x5b, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x4d, 0x65, 0x73, - 0x73, 0x61, 0x67, 0x65, 0x73, 0x2e, 0x6c, 0x65, 0x6e, 0x67, 0x74, 0x68, - 0x20, 0x2d, 0x20, 0x31, 0x5d, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, - 0x74, 0x2e, 0x6d, 0x61, 0x74, 0x63, 0x68, 0x28, 0x2f, 0x5c, 0x6e, 0x24, - 0x2f, 0x29, 0x20, 0x21, 0x3d, 0x20, 0x6e, 0x75, 0x6c, 0x6c, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x29, 0x20, 0x7b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x4d, 0x65, 0x73, 0x73, - 0x61, 0x67, 0x65, 0x73, 0x2e, 0x70, 0x6f, 0x70, 0x28, 0x29, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x72, - 0x61, 0x6e, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x55, 0x70, 0x64, 0x61, - 0x74, 0x65, 0x28, 0x5b, 0x2e, 0x2e, 0x2e, 0x68, 0x69, 0x73, 0x74, 0x6f, - 0x72, 0x79, 0x2c, 0x20, 0x5b, 0x22, 0x7b, 0x7b, 0x63, 0x68, 0x61, 0x72, - 0x7d, 0x7d, 0x22, 0x2c, 0x20, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, - 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x73, 0x5d, 0x5d, 0x29, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, - 0x6e, 0x73, 0x6f, 0x6c, 0x65, 0x2e, 0x6c, 0x6f, 0x67, 0x28, 0x22, 0x43, - 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x66, 0x69, - 0x6e, 0x69, 0x73, 0x68, 0x65, 0x64, 0x3a, 0x20, 0x27, 0x22, 0x2c, 0x20, - 0x63, 0x75, 0x72, 0x72, 
0x65, 0x6e, 0x74, 0x4d, 0x65, 0x73, 0x73, 0x61, - 0x67, 0x65, 0x73, 0x2e, 0x6d, 0x61, 0x70, 0x28, 0x6d, 0x73, 0x67, 0x20, - 0x3d, 0x3e, 0x20, 0x6d, 0x73, 0x67, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x65, - 0x6e, 0x74, 0x29, 0x2e, 0x6a, 0x6f, 0x69, 0x6e, 0x28, 0x27, 0x27, 0x29, - 0x2c, 0x20, 0x22, 0x27, 0x2c, 0x20, 0x73, 0x75, 0x6d, 0x6d, 0x61, 0x72, - 0x79, 0x3a, 0x20, 0x22, 0x2c, 0x20, 0x64, 0x61, 0x74, 0x61, 0x29, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x20, 0x65, - 0x6c, 0x73, 0x65, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x4d, - 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x73, 0x2e, 0x70, 0x75, 0x73, 0x68, - 0x28, 0x64, 0x61, 0x74, 0x61, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x72, 0x61, 0x6e, 0x73, 0x63, - 0x72, 0x69, 0x70, 0x74, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x28, 0x5b, - 0x2e, 0x2e, 0x2e, 0x68, 0x69, 0x73, 0x74, 0x6f, 0x72, 0x79, 0x2c, 0x20, - 0x5b, 0x22, 0x7b, 0x7b, 0x63, 0x68, 0x61, 0x72, 0x7d, 0x7d, 0x22, 0x2c, - 0x20, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x4d, 0x65, 0x73, 0x73, - 0x61, 0x67, 0x65, 0x73, 0x5d, 0x5d, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, 0x64, 0x61, 0x74, 0x61, 0x2e, - 0x74, 0x69, 0x6d, 0x69, 0x6e, 0x67, 0x73, 0x29, 0x20, 0x7b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6c, 0x6c, 0x61, - 0x6d, 0x61, 0x53, 0x74, 0x61, 0x74, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x20, 0x3d, 0x20, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x74, 0x69, 0x6d, - 0x69, 0x6e, 0x67, 0x73, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x74, 0x72, - 0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, - 0x3d, 0x20, 0x6e, 0x75, 0x6c, 0x6c, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x66, 0x75, 0x6e, 0x63, 0x74, - 0x69, 0x6f, 0x6e, 0x20, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x49, - 0x6e, 0x70, 0x75, 0x74, 0x28, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x6d, 0x65, 0x73, - 0x73, 0x61, 0x67, 0x65, 0x20, 0x3d, 0x20, 0x75, 0x73, 0x65, 0x53, 0x69, - 0x67, 0x6e, 0x61, 0x6c, 0x28, 0x22, 0x22, 0x29, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x73, 0x74, - 0x6f, 0x70, 0x20, 0x3d, 0x20, 0x28, 0x65, 0x29, 0x20, 0x3d, 0x3e, 0x20, - 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x65, 0x2e, - 0x70, 0x72, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x44, 0x65, 0x66, 0x61, 0x75, - 0x6c, 0x74, 0x28, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, - 0x6c, 0x6c, 0x65, 0x72, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x29, 0x20, - 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2e, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x61, 0x62, 0x6f, 0x72, 0x74, 0x28, 0x29, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2e, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x6e, 0x75, 0x6c, 0x6c, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 
0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x72, 0x65, 0x73, 0x65, - 0x74, 0x20, 0x3d, 0x20, 0x28, 0x65, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x73, 0x74, 0x6f, - 0x70, 0x28, 0x65, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x74, 0x72, 0x61, 0x6e, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, - 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x28, 0x5b, 0x5d, 0x29, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x73, 0x75, 0x62, - 0x6d, 0x69, 0x74, 0x20, 0x3d, 0x20, 0x28, 0x65, 0x29, 0x20, 0x3d, 0x3e, - 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x73, - 0x74, 0x6f, 0x70, 0x28, 0x65, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x63, 0x68, 0x61, 0x74, 0x28, 0x6d, 0x65, 0x73, - 0x73, 0x61, 0x67, 0x65, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x29, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6d, 0x65, 0x73, - 0x73, 0x61, 0x67, 0x65, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x3d, - 0x20, 0x22, 0x22, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, - 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, - 0x74, 0x20, 0x65, 0x6e, 0x74, 0x65, 0x72, 0x53, 0x75, 0x62, 0x6d, 0x69, - 0x74, 0x73, 0x20, 0x3d, 0x20, 0x28, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x29, - 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x2e, - 0x77, 0x68, 0x69, 0x63, 0x68, 0x20, 0x3d, 0x3d, 0x3d, 0x20, 0x31, 0x33, - 0x20, 0x26, 0x26, 0x20, 0x21, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x2e, 0x73, - 0x68, 0x69, 0x66, 0x74, 0x4b, 0x65, 0x79, 0x29, 0x20, 0x7b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x73, 0x75, 0x62, - 0x6d, 0x69, 0x74, 0x28, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x29, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, - 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x66, - 0x6f, 0x72, 0x6d, 0x20, 0x6f, 0x6e, 0x73, 0x75, 0x62, 0x6d, 0x69, 0x74, - 0x3d, 0x24, 0x7b, 0x73, 0x75, 0x62, 0x6d, 0x69, 0x74, 0x7d, 0x3e, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, - 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x74, 0x65, 0x78, 0x74, 0x61, 0x72, 0x65, - 0x61, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6c, 0x61, 0x73, 0x73, 0x4e, 0x61, - 0x6d, 0x65, 0x3d, 0x24, 0x7b, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, - 0x69, 0x6e, 0x67, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x3f, 0x20, - 0x22, 0x6c, 0x6f, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x22, 0x20, 0x3a, 0x20, - 0x6e, 0x75, 0x6c, 0x6c, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6f, 0x6e, 0x69, - 0x6e, 0x70, 0x75, 0x74, 0x3d, 0x24, 0x7b, 0x28, 0x65, 0x29, 0x20, 0x3d, - 0x3e, 0x20, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2e, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x65, 0x2e, 0x74, 0x61, 0x72, 0x67, - 0x65, 0x74, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x7d, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x6f, 0x6e, 0x6b, 
0x65, 0x79, 0x70, 0x72, 0x65, 0x73, 0x73, 0x3d, - 0x24, 0x7b, 0x65, 0x6e, 0x74, 0x65, 0x72, 0x53, 0x75, 0x62, 0x6d, 0x69, - 0x74, 0x73, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x70, 0x6c, 0x61, 0x63, 0x65, - 0x68, 0x6f, 0x6c, 0x64, 0x65, 0x72, 0x3d, 0x22, 0x53, 0x61, 0x79, 0x20, - 0x73, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x69, 0x6e, 0x67, 0x2e, 0x2e, 0x2e, - 0x22, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x6f, 0x77, 0x73, 0x3d, 0x32, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x74, 0x79, 0x70, 0x65, 0x3d, 0x22, 0x74, 0x65, 0x78, - 0x74, 0x22, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3d, - 0x22, 0x24, 0x7b, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x7d, 0x22, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x2f, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x3c, 0x2f, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, 0x69, 0x76, 0x20, - 0x63, 0x6c, 0x61, 0x73, 0x73, 0x3d, 0x22, 0x72, 0x69, 0x67, 0x68, 0x74, - 0x22, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x62, 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x20, 0x74, - 0x79, 0x70, 0x65, 0x3d, 0x22, 0x73, 0x75, 0x62, 0x6d, 0x69, 0x74, 0x22, - 0x20, 0x64, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x3d, 0x24, 0x7b, - 0x67, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6e, 0x67, 0x2e, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x7d, 0x3e, 0x53, 0x65, 0x6e, 0x64, 0x3c, 0x2f, - 0x62, 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x62, 0x75, 0x74, - 0x74, 0x6f, 0x6e, 0x20, 0x6f, 0x6e, 0x63, 0x6c, 0x69, 0x63, 0x6b, 0x3d, - 0x24, 0x7b, 0x73, 0x74, 0x6f, 0x70, 0x7d, 0x20, 0x64, 0x69, 0x73, 0x61, - 0x62, 0x6c, 0x65, 0x64, 0x3d, 0x24, 0x7b, 0x21, 0x67, 0x65, 0x6e, 0x65, - 0x72, 0x61, 0x74, 0x69, 0x6e, 0x67, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, - 0x7d, 0x3e, 0x53, 0x74, 0x6f, 0x70, 0x3c, 0x2f, 0x62, 0x75, 0x74, 0x74, - 0x6f, 0x6e, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x62, 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x20, - 0x6f, 0x6e, 0x63, 0x6c, 0x69, 0x63, 0x6b, 0x3d, 0x24, 0x7b, 0x72, 0x65, - 0x73, 0x65, 0x74, 0x7d, 0x3e, 0x52, 0x65, 0x73, 0x65, 0x74, 0x3c, 0x2f, - 0x62, 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x64, 0x69, 0x76, 0x3e, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x66, - 0x6f, 0x72, 0x6d, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x60, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x43, 0x68, 0x61, 0x74, 0x4c, 0x6f, - 0x67, 0x20, 0x3d, 0x20, 0x28, 0x70, 0x72, 0x6f, 0x70, 0x73, 0x29, 0x20, - 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, - 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, - 0x73, 0x20, 0x3d, 0x20, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, - 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x74, 0x72, 0x61, 0x6e, 0x73, 0x63, - 0x72, 0x69, 0x70, 0x74, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, - 0x6e, 0x65, 0x72, 0x20, 
0x3d, 0x20, 0x75, 0x73, 0x65, 0x52, 0x65, 0x66, - 0x28, 0x6e, 0x75, 0x6c, 0x6c, 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x75, 0x73, 0x65, 0x45, 0x66, 0x66, 0x65, 0x63, 0x74, 0x28, - 0x28, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x73, 0x63, 0x72, 0x6f, 0x6c, - 0x6c, 0x20, 0x74, 0x6f, 0x20, 0x62, 0x6f, 0x74, 0x74, 0x6f, 0x6d, 0x20, - 0x28, 0x69, 0x66, 0x20, 0x6e, 0x65, 0x65, 0x64, 0x65, 0x64, 0x29, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, - 0x74, 0x20, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x3d, 0x20, 0x63, - 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x2e, 0x63, 0x75, 0x72, - 0x72, 0x65, 0x6e, 0x74, 0x2e, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x45, - 0x6c, 0x65, 0x6d, 0x65, 0x6e, 0x74, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, 0x70, 0x61, 0x72, 0x65, - 0x6e, 0x74, 0x20, 0x26, 0x26, 0x20, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, - 0x2e, 0x73, 0x63, 0x72, 0x6f, 0x6c, 0x6c, 0x48, 0x65, 0x69, 0x67, 0x68, - 0x74, 0x20, 0x3c, 0x3d, 0x20, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x2e, - 0x73, 0x63, 0x72, 0x6f, 0x6c, 0x6c, 0x54, 0x6f, 0x70, 0x20, 0x2b, 0x20, - 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x2e, 0x6f, 0x66, 0x66, 0x73, 0x65, - 0x74, 0x48, 0x65, 0x69, 0x67, 0x68, 0x74, 0x20, 0x2b, 0x20, 0x33, 0x30, - 0x30, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x2e, 0x73, 0x63, - 0x72, 0x6f, 0x6c, 0x6c, 0x54, 0x6f, 0x28, 0x30, 0x2c, 0x20, 0x70, 0x61, - 0x72, 0x65, 0x6e, 0x74, 0x2e, 0x73, 0x63, 0x72, 0x6f, 0x6c, 0x6c, 0x48, - 0x65, 0x69, 0x67, 0x68, 0x74, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, - 0x2c, 0x20, 0x5b, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x73, 0x5d, - 0x29, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, - 0x73, 0x74, 0x20, 0x63, 0x68, 0x61, 0x74, 0x4c, 0x69, 0x6e, 0x65, 0x20, - 0x3d, 0x20, 0x28, 0x5b, 0x75, 0x73, 0x65, 0x72, 0x2c, 0x20, 0x64, 0x61, - 0x74, 0x61, 0x5d, 0x2c, 0x20, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x29, 0x20, - 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x6c, 0x65, 0x74, 0x20, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, - 0x73, 0x74, 0x20, 0x69, 0x73, 0x41, 0x72, 0x72, 0x61, 0x79, 0x4d, 0x65, - 0x73, 0x73, 0x61, 0x67, 0x65, 0x20, 0x3d, 0x20, 0x41, 0x72, 0x72, 0x61, - 0x79, 0x2e, 0x69, 0x73, 0x41, 0x72, 0x72, 0x61, 0x79, 0x28, 0x64, 0x61, - 0x74, 0x61, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x69, 0x66, 0x20, 0x28, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x6e, 0x5f, 0x70, 0x72, 0x6f, 0x62, 0x73, - 0x20, 0x3e, 0x20, 0x30, 0x20, 0x26, 0x26, 0x20, 0x69, 0x73, 0x41, 0x72, - 0x72, 0x61, 0x79, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x29, 0x20, - 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x20, 0x3d, 0x20, 0x68, 0x74, - 0x6d, 0x6c, 0x60, 0x3c, 0x24, 0x7b, 0x50, 0x72, 0x6f, 0x62, 0x61, 0x62, - 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, 0x73, 0x7d, 0x20, 0x64, 0x61, 0x74, - 0x61, 0x3d, 0x24, 0x7b, 0x64, 0x61, 0x74, 0x61, 0x7d, 0x20, 0x2f, 0x3e, - 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x20, - 0x65, 0x6c, 0x73, 0x65, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 
0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x74, - 0x65, 0x78, 0x74, 0x20, 0x3d, 0x20, 0x69, 0x73, 0x41, 0x72, 0x72, 0x61, - 0x79, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x20, 0x3f, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x64, - 0x61, 0x74, 0x61, 0x2e, 0x6d, 0x61, 0x70, 0x28, 0x6d, 0x73, 0x67, 0x20, - 0x3d, 0x3e, 0x20, 0x6d, 0x73, 0x67, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x65, - 0x6e, 0x74, 0x29, 0x2e, 0x6a, 0x6f, 0x69, 0x6e, 0x28, 0x27, 0x27, 0x29, - 0x2e, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x28, 0x2f, 0x5e, 0x5c, - 0x73, 0x2b, 0x2f, 0x2c, 0x20, 0x27, 0x27, 0x29, 0x20, 0x3a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x64, - 0x61, 0x74, 0x61, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x20, 0x3d, - 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x3c, 0x24, 0x7b, 0x4d, 0x61, 0x72, - 0x6b, 0x64, 0x6f, 0x77, 0x6e, 0x69, 0x73, 0x68, 0x7d, 0x20, 0x74, 0x65, - 0x78, 0x74, 0x3d, 0x24, 0x7b, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, - 0x65, 0x28, 0x74, 0x65, 0x78, 0x74, 0x29, 0x7d, 0x20, 0x2f, 0x3e, 0x60, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, - 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x3c, 0x70, 0x20, 0x6b, 0x65, - 0x79, 0x3d, 0x24, 0x7b, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x7d, 0x3e, 0x3c, - 0x73, 0x74, 0x72, 0x6f, 0x6e, 0x67, 0x3e, 0x24, 0x7b, 0x74, 0x65, 0x6d, - 0x70, 0x6c, 0x61, 0x74, 0x65, 0x28, 0x75, 0x73, 0x65, 0x72, 0x29, 0x7d, - 0x3a, 0x3c, 0x2f, 0x73, 0x74, 0x72, 0x6f, 0x6e, 0x67, 0x3e, 0x20, 0x24, - 0x7b, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x7d, 0x3c, 0x2f, 0x70, - 0x3e, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x3b, 0x0a, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, - 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x73, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, - 0x20, 0x69, 0x64, 0x3d, 0x22, 0x63, 0x68, 0x61, 0x74, 0x22, 0x20, 0x72, - 0x65, 0x66, 0x3d, 0x24, 0x7b, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, - 0x65, 0x72, 0x7d, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x24, 0x7b, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, - 0x73, 0x2e, 0x66, 0x6c, 0x61, 0x74, 0x4d, 0x61, 0x70, 0x28, 0x63, 0x68, - 0x61, 0x74, 0x4c, 0x69, 0x6e, 0x65, 0x29, 0x7d, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x73, 0x65, 0x63, 0x74, 0x69, - 0x6f, 0x6e, 0x3e, 0x60, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x3b, - 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, - 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x46, 0x6f, 0x72, 0x6d, 0x20, 0x3d, - 0x20, 0x28, 0x70, 0x72, 0x6f, 0x70, 0x73, 0x29, 0x20, 0x3d, 0x3e, 0x20, - 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, - 0x74, 0x20, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53, 0x65, 0x73, 0x73, - 0x69, 0x6f, 0x6e, 0x20, 0x3d, 0x20, 0x28, 0x65, 0x6c, 0x29, 0x20, 0x3d, - 0x3e, 0x20, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x7b, 0x20, 0x2e, 0x2e, 0x2e, 0x73, - 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, - 0x2c, 0x20, 0x5b, 0x65, 0x6c, 0x2e, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, - 0x2e, 0x6e, 0x61, 0x6d, 0x65, 0x5d, 0x3a, 0x20, 0x65, 0x6c, 0x2e, 0x74, - 0x61, 0x72, 0x67, 0x65, 0x74, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, - 0x7d, 0x0a, 0x20, 0x20, 
0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, - 0x74, 0x20, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x50, 0x61, 0x72, 0x61, - 0x6d, 0x73, 0x20, 0x3d, 0x20, 0x28, 0x65, 0x6c, 0x29, 0x20, 0x3d, 0x3e, - 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x20, 0x3d, 0x20, 0x7b, 0x20, 0x2e, 0x2e, 0x2e, 0x70, 0x61, 0x72, - 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2c, 0x20, 0x5b, - 0x65, 0x6c, 0x2e, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x2e, 0x6e, 0x61, - 0x6d, 0x65, 0x5d, 0x3a, 0x20, 0x65, 0x6c, 0x2e, 0x74, 0x61, 0x72, 0x67, - 0x65, 0x74, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x7d, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x75, - 0x70, 0x64, 0x61, 0x74, 0x65, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x46, - 0x6c, 0x6f, 0x61, 0x74, 0x20, 0x3d, 0x20, 0x28, 0x65, 0x6c, 0x29, 0x20, - 0x3d, 0x3e, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x7b, 0x20, 0x2e, 0x2e, 0x2e, 0x70, - 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2c, - 0x20, 0x5b, 0x65, 0x6c, 0x2e, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x2e, - 0x6e, 0x61, 0x6d, 0x65, 0x5d, 0x3a, 0x20, 0x70, 0x61, 0x72, 0x73, 0x65, - 0x46, 0x6c, 0x6f, 0x61, 0x74, 0x28, 0x65, 0x6c, 0x2e, 0x74, 0x61, 0x72, - 0x67, 0x65, 0x74, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x29, 0x20, 0x7d, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, - 0x20, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x50, 0x61, 0x72, 0x61, 0x6d, - 0x73, 0x49, 0x6e, 0x74, 0x20, 0x3d, 0x20, 0x28, 0x65, 0x6c, 0x29, 0x20, - 0x3d, 0x3e, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x7b, 0x20, 0x2e, 0x2e, 0x2e, 0x70, - 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2c, - 0x20, 0x5b, 0x65, 0x6c, 0x2e, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x2e, - 0x6e, 0x61, 0x6d, 0x65, 0x5d, 0x3a, 0x20, 0x4d, 0x61, 0x74, 0x68, 0x2e, - 0x66, 0x6c, 0x6f, 0x6f, 0x72, 0x28, 0x70, 0x61, 0x72, 0x73, 0x65, 0x46, - 0x6c, 0x6f, 0x61, 0x74, 0x28, 0x65, 0x6c, 0x2e, 0x74, 0x61, 0x72, 0x67, - 0x65, 0x74, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x29, 0x29, 0x20, 0x7d, - 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, - 0x74, 0x20, 0x67, 0x72, 0x61, 0x6d, 0x6d, 0x61, 0x72, 0x4a, 0x73, 0x6f, - 0x6e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x50, 0x72, 0x6f, 0x70, 0x4f, - 0x72, 0x64, 0x65, 0x72, 0x20, 0x3d, 0x20, 0x73, 0x69, 0x67, 0x6e, 0x61, - 0x6c, 0x28, 0x27, 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, - 0x47, 0x72, 0x61, 0x6d, 0x6d, 0x61, 0x72, 0x4a, 0x73, 0x6f, 0x6e, 0x53, - 0x63, 0x68, 0x65, 0x6d, 0x61, 0x50, 0x72, 0x6f, 0x70, 0x4f, 0x72, 0x64, - 0x65, 0x72, 0x20, 0x3d, 0x20, 0x28, 0x65, 0x6c, 0x29, 0x20, 0x3d, 0x3e, - 0x20, 0x67, 0x72, 0x61, 0x6d, 0x6d, 0x61, 0x72, 0x4a, 0x73, 0x6f, 0x6e, - 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x50, 0x72, 0x6f, 0x70, 0x4f, 0x72, - 0x64, 0x65, 0x72, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, - 0x65, 0x6c, 0x2e, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x2e, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, - 0x6e, 0x73, 0x74, 0x20, 0x63, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x4a, - 0x53, 0x4f, 0x4e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x47, 0x72, 0x61, - 0x6d, 0x6d, 0x61, 0x72, 0x20, 0x3d, 0x20, 0x28, 0x29, 0x20, 0x3d, 0x3e, - 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, - 0x72, 0x79, 0x20, 0x7b, 
0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x73, 0x63, 0x68, - 0x65, 0x6d, 0x61, 0x20, 0x3d, 0x20, 0x4a, 0x53, 0x4f, 0x4e, 0x2e, 0x70, - 0x61, 0x72, 0x73, 0x65, 0x28, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, - 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x67, 0x72, 0x61, 0x6d, 0x6d, 0x61, - 0x72, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x63, 0x6f, 0x6e, 0x76, 0x65, - 0x72, 0x74, 0x65, 0x72, 0x20, 0x3d, 0x20, 0x6e, 0x65, 0x77, 0x20, 0x53, - 0x63, 0x68, 0x65, 0x6d, 0x61, 0x43, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, - 0x65, 0x72, 0x28, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x67, 0x72, 0x61, 0x6d, 0x6d, 0x61, 0x72, 0x4a, - 0x73, 0x6f, 0x6e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x50, 0x72, 0x6f, - 0x70, 0x4f, 0x72, 0x64, 0x65, 0x72, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x2e, 0x73, 0x70, 0x6c, 0x69, 0x74, 0x28, 0x27, 0x2c, - 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, 0x72, 0x65, 0x64, 0x75, 0x63, 0x65, - 0x28, 0x28, 0x61, 0x63, 0x63, 0x2c, 0x20, 0x63, 0x75, 0x72, 0x2c, 0x20, - 0x69, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x28, 0x7b, 0x2e, 0x2e, 0x2e, 0x61, - 0x63, 0x63, 0x2c, 0x20, 0x5b, 0x63, 0x75, 0x72, 0x2e, 0x74, 0x72, 0x69, - 0x6d, 0x28, 0x29, 0x5d, 0x3a, 0x20, 0x69, 0x7d, 0x29, 0x2c, 0x20, 0x7b, - 0x7d, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x63, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x65, 0x72, 0x2e, 0x76, - 0x69, 0x73, 0x69, 0x74, 0x28, 0x73, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x2c, - 0x20, 0x27, 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, 0x2e, 0x2e, 0x70, - 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2c, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x67, 0x72, 0x61, 0x6d, 0x6d, 0x61, 0x72, 0x3a, 0x20, 0x63, 0x6f, - 0x6e, 0x76, 0x65, 0x72, 0x74, 0x65, 0x72, 0x2e, 0x66, 0x6f, 0x72, 0x6d, - 0x61, 0x74, 0x47, 0x72, 0x61, 0x6d, 0x6d, 0x61, 0x72, 0x28, 0x29, 0x2c, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x20, 0x63, - 0x61, 0x74, 0x63, 0x68, 0x20, 0x28, 0x65, 0x29, 0x20, 0x7b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x61, 0x6c, 0x65, - 0x72, 0x74, 0x28, 0x60, 0x43, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x20, - 0x66, 0x61, 0x69, 0x6c, 0x65, 0x64, 0x3a, 0x20, 0x24, 0x7b, 0x65, 0x2e, - 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x7d, 0x60, 0x29, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x46, 0x6c, 0x6f, 0x61, 0x74, 0x46, - 0x69, 0x65, 0x6c, 0x64, 0x20, 0x3d, 0x20, 0x28, 0x7b, 0x6c, 0x61, 0x62, - 0x65, 0x6c, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x2c, 0x20, 0x6d, 0x69, 0x6e, - 0x2c, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x2c, 0x20, 0x73, 0x74, 0x65, 0x70, - 0x2c, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x7d, 0x29, 0x20, 0x3d, 0x3e, - 0x20, 0x7b, 0x0a, 0x20, 
0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, - 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, - 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x20, 0x66, - 0x6f, 0x72, 0x3d, 0x22, 0x24, 0x7b, 0x6e, 0x61, 0x6d, 0x65, 0x7d, 0x22, - 0x3e, 0x24, 0x7b, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x7d, 0x3c, 0x2f, 0x6c, - 0x61, 0x62, 0x65, 0x6c, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x69, 0x6e, 0x70, 0x75, 0x74, - 0x20, 0x74, 0x79, 0x70, 0x65, 0x3d, 0x22, 0x72, 0x61, 0x6e, 0x67, 0x65, - 0x22, 0x20, 0x69, 0x64, 0x3d, 0x22, 0x24, 0x7b, 0x6e, 0x61, 0x6d, 0x65, - 0x7d, 0x22, 0x20, 0x6d, 0x69, 0x6e, 0x3d, 0x22, 0x24, 0x7b, 0x6d, 0x69, - 0x6e, 0x7d, 0x22, 0x20, 0x6d, 0x61, 0x78, 0x3d, 0x22, 0x24, 0x7b, 0x6d, - 0x61, 0x78, 0x7d, 0x22, 0x20, 0x73, 0x74, 0x65, 0x70, 0x3d, 0x22, 0x24, - 0x7b, 0x73, 0x74, 0x65, 0x70, 0x7d, 0x22, 0x20, 0x6e, 0x61, 0x6d, 0x65, - 0x3d, 0x22, 0x24, 0x7b, 0x6e, 0x61, 0x6d, 0x65, 0x7d, 0x22, 0x20, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x3d, 0x22, 0x24, 0x7b, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x7d, 0x22, 0x20, 0x6f, 0x6e, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x3d, - 0x24, 0x7b, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x50, 0x61, 0x72, 0x61, - 0x6d, 0x73, 0x46, 0x6c, 0x6f, 0x61, 0x74, 0x7d, 0x20, 0x2f, 0x3e, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x73, 0x70, 0x61, 0x6e, 0x3e, 0x24, 0x7b, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x7d, 0x3c, 0x2f, 0x73, 0x70, 0x61, 0x6e, 0x3e, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x64, 0x69, - 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x60, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x3b, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x49, - 0x6e, 0x74, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x20, 0x3d, 0x20, 0x28, 0x7b, - 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x2c, 0x20, - 0x6d, 0x69, 0x6e, 0x2c, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x2c, 0x20, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x7d, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, - 0x72, 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, 0x69, 0x76, 0x3e, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x3c, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x20, 0x66, 0x6f, 0x72, 0x3d, - 0x22, 0x24, 0x7b, 0x6e, 0x61, 0x6d, 0x65, 0x7d, 0x22, 0x3e, 0x24, 0x7b, - 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x7d, 0x3c, 0x2f, 0x6c, 0x61, 0x62, 0x65, - 0x6c, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x20, 0x74, 0x79, - 0x70, 0x65, 0x3d, 0x22, 0x72, 0x61, 0x6e, 0x67, 0x65, 0x22, 0x20, 0x69, - 0x64, 0x3d, 0x22, 0x24, 0x7b, 0x6e, 0x61, 0x6d, 0x65, 0x7d, 0x22, 0x20, - 0x6d, 0x69, 0x6e, 0x3d, 0x22, 0x24, 0x7b, 0x6d, 0x69, 0x6e, 0x7d, 0x22, - 0x20, 0x6d, 0x61, 0x78, 0x3d, 0x22, 0x24, 0x7b, 0x6d, 0x61, 0x78, 0x7d, - 0x22, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3d, 0x22, 0x24, 0x7b, 0x6e, 0x61, - 0x6d, 0x65, 0x7d, 0x22, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3d, 0x22, - 0x24, 0x7b, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x7d, 0x22, 0x20, 0x6f, 0x6e, - 0x69, 0x6e, 0x70, 0x75, 0x74, 0x3d, 0x24, 0x7b, 0x75, 0x70, 0x64, 0x61, - 0x74, 0x65, 0x50, 0x61, 
0x72, 0x61, 0x6d, 0x73, 0x49, 0x6e, 0x74, 0x7d, - 0x20, 0x2f, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x73, 0x70, 0x61, 0x6e, 0x3e, 0x24, 0x7b, - 0x76, 0x61, 0x6c, 0x75, 0x65, 0x7d, 0x3c, 0x2f, 0x73, 0x70, 0x61, 0x6e, - 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x2f, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, - 0x3b, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, - 0x73, 0x74, 0x20, 0x75, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, - 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x65, 0x74, 0x20, 0x3d, 0x20, 0x28, - 0x65, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x65, 0x2e, 0x70, 0x72, 0x65, 0x76, 0x65, 0x6e, - 0x74, 0x44, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x28, 0x29, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x75, 0x73, 0x65, 0x72, - 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x65, - 0x74, 0x54, 0x6f, 0x44, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x41, 0x6e, - 0x64, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x28, 0x29, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, - 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x65, 0x74, 0x42, - 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x20, 0x3d, 0x20, 0x28, 0x29, 0x20, 0x3d, - 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x69, 0x66, 0x20, 0x28, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x65, 0x64, - 0x55, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x6e, 0x61, 0x6d, 0x65, 0x20, - 0x3d, 0x3d, 0x20, 0x27, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x27, - 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x68, 0x74, 0x6d, - 0x6c, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x62, 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x20, 0x64, - 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x3e, 0x55, 0x73, 0x69, 0x6e, - 0x67, 0x20, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x20, 0x74, 0x65, - 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x3c, 0x2f, 0x62, 0x75, 0x74, 0x74, - 0x6f, 0x6e, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, - 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x62, - 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x20, 0x6f, 0x6e, 0x63, 0x6c, 0x69, 0x63, - 0x6b, 0x3d, 0x24, 0x7b, 0x75, 0x73, 0x65, 0x72, 0x54, 0x65, 0x6d, 0x70, - 0x6c, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x65, 0x74, 0x7d, 0x3e, 0x52, - 0x65, 0x73, 0x65, 0x74, 0x20, 0x61, 0x6c, 0x6c, 0x20, 0x74, 0x6f, 0x20, - 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x3c, 0x2f, 0x62, 0x75, 0x74, - 0x74, 0x6f, 0x6e, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x3b, 0x0a, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x75, 0x73, 0x65, 0x45, 0x66, - 0x66, 0x65, 0x63, 0x74, 0x28, 0x28, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, - 0x61, 0x75, 0x74, 0x6f, 
0x73, 0x61, 0x76, 0x65, 0x20, 0x74, 0x65, 0x6d, - 0x70, 0x6c, 0x61, 0x74, 0x65, 0x20, 0x6f, 0x6e, 0x20, 0x65, 0x76, 0x65, - 0x72, 0x79, 0x20, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x75, 0x73, 0x65, 0x72, 0x54, 0x65, - 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x41, 0x75, 0x74, 0x6f, 0x73, 0x61, - 0x76, 0x65, 0x28, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, - 0x2c, 0x20, 0x5b, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x2c, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, - 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x5d, 0x29, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x68, - 0x74, 0x6d, 0x6c, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x3c, 0x66, 0x6f, 0x72, 0x6d, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x66, 0x69, 0x65, 0x6c, 0x64, - 0x73, 0x65, 0x74, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x24, 0x7b, 0x55, 0x73, 0x65, 0x72, - 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x65, - 0x74, 0x42, 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x7d, 0x2f, 0x3e, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x66, - 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, 0x74, 0x3e, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x66, 0x69, 0x65, - 0x6c, 0x64, 0x73, 0x65, 0x74, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, 0x69, 0x76, 0x3e, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x20, 0x66, 0x6f, - 0x72, 0x3d, 0x22, 0x70, 0x72, 0x6f, 0x6d, 0x70, 0x74, 0x22, 0x3e, 0x50, - 0x72, 0x6f, 0x6d, 0x70, 0x74, 0x3c, 0x2f, 0x6c, 0x61, 0x62, 0x65, 0x6c, - 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x74, 0x65, 0x78, 0x74, 0x61, 0x72, 0x65, - 0x61, 0x20, 0x74, 0x79, 0x70, 0x65, 0x3d, 0x22, 0x74, 0x65, 0x78, 0x74, - 0x22, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3d, 0x22, 0x70, 0x72, 0x6f, 0x6d, - 0x70, 0x74, 0x22, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3d, 0x22, 0x24, - 0x7b, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x70, 0x74, 0x7d, 0x22, 0x20, - 0x72, 0x6f, 0x77, 0x73, 0x3d, 0x34, 0x20, 0x6f, 0x6e, 0x69, 0x6e, 0x70, - 0x75, 0x74, 0x3d, 0x24, 0x7b, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53, - 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x7d, 0x2f, 0x3e, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, - 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, - 0x74, 0x3e, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x3c, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, 0x74, 0x20, - 0x63, 0x6c, 0x61, 0x73, 0x73, 0x3d, 0x22, 0x74, 0x77, 0x6f, 0x22, 0x3e, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x3c, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x6c, 0x61, - 0x62, 0x65, 0x6c, 0x20, 0x66, 0x6f, 0x72, 0x3d, 0x22, 0x75, 0x73, 0x65, - 0x72, 0x22, 0x3e, 0x55, 0x73, 0x65, 0x72, 0x20, 0x6e, 0x61, 0x6d, 0x65, - 0x3c, 0x2f, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3e, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 
0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, - 0x69, 0x6e, 0x70, 0x75, 0x74, 0x20, 0x74, 0x79, 0x70, 0x65, 0x3d, 0x22, - 0x74, 0x65, 0x78, 0x74, 0x22, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3d, 0x22, - 0x75, 0x73, 0x65, 0x72, 0x22, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3d, - 0x22, 0x24, 0x7b, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x75, 0x73, 0x65, 0x72, 0x7d, 0x22, 0x20, - 0x6f, 0x6e, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x3d, 0x24, 0x7b, 0x75, 0x70, - 0x64, 0x61, 0x74, 0x65, 0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x7d, - 0x20, 0x2f, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x6c, 0x61, 0x62, - 0x65, 0x6c, 0x20, 0x66, 0x6f, 0x72, 0x3d, 0x22, 0x62, 0x6f, 0x74, 0x22, - 0x3e, 0x42, 0x6f, 0x74, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3c, 0x2f, 0x6c, - 0x61, 0x62, 0x65, 0x6c, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x69, 0x6e, 0x70, - 0x75, 0x74, 0x20, 0x74, 0x79, 0x70, 0x65, 0x3d, 0x22, 0x74, 0x65, 0x78, - 0x74, 0x22, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3d, 0x22, 0x63, 0x68, 0x61, - 0x72, 0x22, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3d, 0x22, 0x24, 0x7b, - 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x2e, 0x63, 0x68, 0x61, 0x72, 0x7d, 0x22, 0x20, 0x6f, 0x6e, 0x69, - 0x6e, 0x70, 0x75, 0x74, 0x3d, 0x24, 0x7b, 0x75, 0x70, 0x64, 0x61, 0x74, - 0x65, 0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x7d, 0x20, 0x2f, 0x3e, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x3c, 0x2f, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x66, 0x69, 0x65, 0x6c, - 0x64, 0x73, 0x65, 0x74, 0x3e, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, - 0x65, 0x74, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x20, 0x66, 0x6f, 0x72, 0x3d, 0x22, - 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x22, 0x3e, 0x50, 0x72, - 0x6f, 0x6d, 0x70, 0x74, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, - 0x65, 0x3c, 0x2f, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3e, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x74, 0x65, 0x78, 0x74, 0x61, 0x72, 0x65, 0x61, 0x20, 0x69, 0x64, - 0x3d, 0x22, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x22, 0x20, - 0x6e, 0x61, 0x6d, 0x65, 0x3d, 0x22, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, - 0x74, 0x65, 0x22, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3d, 0x22, 0x24, - 0x7b, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x2e, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x7d, - 0x22, 0x20, 0x72, 0x6f, 0x77, 0x73, 0x3d, 0x34, 0x20, 0x6f, 0x6e, 0x69, - 0x6e, 0x70, 0x75, 0x74, 0x3d, 0x24, 0x7b, 0x75, 0x70, 0x64, 0x61, 0x74, - 0x65, 0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x7d, 0x2f, 0x3e, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x2f, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 
0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, 0x69, 0x76, - 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x20, 0x66, - 0x6f, 0x72, 0x3d, 0x22, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x22, 0x3e, 0x43, 0x68, 0x61, 0x74, 0x20, 0x68, 0x69, 0x73, 0x74, 0x6f, - 0x72, 0x79, 0x20, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x3c, - 0x2f, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x74, - 0x65, 0x78, 0x74, 0x61, 0x72, 0x65, 0x61, 0x20, 0x69, 0x64, 0x3d, 0x22, - 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x22, 0x20, 0x6e, 0x61, - 0x6d, 0x65, 0x3d, 0x22, 0x68, 0x69, 0x73, 0x74, 0x6f, 0x72, 0x79, 0x54, - 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x22, 0x20, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x3d, 0x22, 0x24, 0x7b, 0x73, 0x65, 0x73, 0x73, 0x69, 0x6f, - 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x68, 0x69, 0x73, 0x74, - 0x6f, 0x72, 0x79, 0x54, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, 0x7d, - 0x22, 0x20, 0x72, 0x6f, 0x77, 0x73, 0x3d, 0x31, 0x20, 0x6f, 0x6e, 0x69, - 0x6e, 0x70, 0x75, 0x74, 0x3d, 0x24, 0x7b, 0x75, 0x70, 0x64, 0x61, 0x74, - 0x65, 0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x7d, 0x2f, 0x3e, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x2f, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, 0x69, 0x76, - 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x20, 0x66, - 0x6f, 0x72, 0x3d, 0x22, 0x74, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x74, 0x65, - 0x22, 0x3e, 0x47, 0x72, 0x61, 0x6d, 0x6d, 0x61, 0x72, 0x3c, 0x2f, 0x6c, - 0x61, 0x62, 0x65, 0x6c, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x74, 0x65, 0x78, - 0x74, 0x61, 0x72, 0x65, 0x61, 0x20, 0x69, 0x64, 0x3d, 0x22, 0x67, 0x72, - 0x61, 0x6d, 0x6d, 0x61, 0x72, 0x22, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3d, - 0x22, 0x67, 0x72, 0x61, 0x6d, 0x6d, 0x61, 0x72, 0x22, 0x20, 0x70, 0x6c, - 0x61, 0x63, 0x65, 0x68, 0x6f, 0x6c, 0x64, 0x65, 0x72, 0x3d, 0x22, 0x55, - 0x73, 0x65, 0x20, 0x67, 0x62, 0x6e, 0x66, 0x20, 0x6f, 0x72, 0x20, 0x4a, - 0x53, 0x4f, 0x4e, 0x20, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x2b, 0x63, - 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x22, 0x20, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x3d, 0x22, 0x24, 0x7b, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, - 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x67, 0x72, 0x61, 0x6d, 0x6d, 0x61, - 0x72, 0x7d, 0x22, 0x20, 0x72, 0x6f, 0x77, 0x73, 0x3d, 0x34, 0x20, 0x6f, - 0x6e, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x3d, 0x24, 0x7b, 0x75, 0x70, 0x64, - 0x61, 0x74, 0x65, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x7d, 0x2f, 0x3e, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x20, 0x74, 0x79, - 0x70, 0x65, 0x3d, 0x22, 0x74, 0x65, 0x78, 0x74, 0x22, 0x20, 0x6e, 0x61, - 0x6d, 0x65, 0x3d, 0x22, 0x70, 0x72, 0x6f, 0x70, 0x2d, 0x6f, 0x72, 0x64, - 0x65, 0x72, 0x22, 0x20, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x68, 0x6f, 0x6c, - 0x64, 0x65, 0x72, 0x3d, 0x22, 0x6f, 0x72, 0x64, 0x65, 0x72, 0x3a, 0x20, - 0x70, 0x72, 0x6f, 0x70, 0x31, 0x2c, 0x70, 0x72, 0x6f, 0x70, 0x32, 0x2c, - 0x70, 0x72, 0x6f, 0x70, 0x33, 0x22, 0x20, 0x6f, 0x6e, 0x69, 0x6e, 0x70, - 0x75, 0x74, 0x3d, 0x24, 0x7b, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x47, - 0x72, 0x61, 0x6d, 0x6d, 
0x61, 0x72, 0x4a, 0x73, 0x6f, 0x6e, 0x53, 0x63, - 0x68, 0x65, 0x6d, 0x61, 0x50, 0x72, 0x6f, 0x70, 0x4f, 0x72, 0x64, 0x65, - 0x72, 0x7d, 0x20, 0x2f, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x62, 0x75, 0x74, - 0x74, 0x6f, 0x6e, 0x20, 0x74, 0x79, 0x70, 0x65, 0x3d, 0x22, 0x62, 0x75, - 0x74, 0x74, 0x6f, 0x6e, 0x22, 0x20, 0x6f, 0x6e, 0x63, 0x6c, 0x69, 0x63, - 0x6b, 0x3d, 0x24, 0x7b, 0x63, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x4a, - 0x53, 0x4f, 0x4e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x47, 0x72, 0x61, - 0x6d, 0x6d, 0x61, 0x72, 0x7d, 0x3e, 0x43, 0x6f, 0x6e, 0x76, 0x65, 0x72, - 0x74, 0x20, 0x4a, 0x53, 0x4f, 0x4e, 0x20, 0x53, 0x63, 0x68, 0x65, 0x6d, - 0x61, 0x3c, 0x2f, 0x62, 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x3e, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, - 0x2f, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, - 0x65, 0x74, 0x3e, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, 0x74, - 0x20, 0x63, 0x6c, 0x61, 0x73, 0x73, 0x3d, 0x22, 0x74, 0x77, 0x6f, 0x22, - 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x24, 0x7b, 0x49, 0x6e, 0x74, 0x46, 0x69, 0x65, 0x6c, 0x64, - 0x28, 0x7b, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3a, 0x20, 0x22, 0x50, 0x72, - 0x65, 0x64, 0x69, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0x2c, 0x20, - 0x6d, 0x61, 0x78, 0x3a, 0x20, 0x32, 0x30, 0x34, 0x38, 0x2c, 0x20, 0x6d, - 0x69, 0x6e, 0x3a, 0x20, 0x2d, 0x31, 0x2c, 0x20, 0x6e, 0x61, 0x6d, 0x65, - 0x3a, 0x20, 0x22, 0x6e, 0x5f, 0x70, 0x72, 0x65, 0x64, 0x69, 0x63, 0x74, - 0x22, 0x2c, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x20, 0x70, 0x61, - 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x6e, - 0x5f, 0x70, 0x72, 0x65, 0x64, 0x69, 0x63, 0x74, 0x7d, 0x29, 0x7d, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x24, 0x7b, 0x46, 0x6c, 0x6f, 0x61, 0x74, 0x46, 0x69, 0x65, 0x6c, 0x64, - 0x28, 0x7b, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3a, 0x20, 0x22, 0x54, 0x65, - 0x6d, 0x70, 0x65, 0x72, 0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0x2c, 0x20, - 0x6d, 0x61, 0x78, 0x3a, 0x20, 0x31, 0x2e, 0x35, 0x2c, 0x20, 0x6d, 0x69, - 0x6e, 0x3a, 0x20, 0x30, 0x2e, 0x30, 0x2c, 0x20, 0x6e, 0x61, 0x6d, 0x65, - 0x3a, 0x20, 0x22, 0x74, 0x65, 0x6d, 0x70, 0x65, 0x72, 0x61, 0x74, 0x75, - 0x72, 0x65, 0x22, 0x2c, 0x20, 0x73, 0x74, 0x65, 0x70, 0x3a, 0x20, 0x30, - 0x2e, 0x30, 0x31, 0x2c, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x20, - 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, - 0x2e, 0x74, 0x65, 0x6d, 0x70, 0x65, 0x72, 0x61, 0x74, 0x75, 0x72, 0x65, - 0x7d, 0x29, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x24, 0x7b, 0x46, 0x6c, 0x6f, 0x61, 0x74, 0x46, - 0x69, 0x65, 0x6c, 0x64, 0x28, 0x7b, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3a, - 0x20, 0x22, 0x50, 0x65, 0x6e, 0x61, 0x6c, 0x69, 0x7a, 0x65, 0x20, 0x72, - 0x65, 0x70, 0x65, 0x61, 0x74, 0x20, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, - 0x63, 0x65, 0x22, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x3a, 0x20, 0x32, 0x2e, - 0x30, 0x2c, 0x20, 0x6d, 0x69, 0x6e, 0x3a, 0x20, 0x30, 0x2e, 0x30, 0x2c, - 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3a, 0x20, 0x22, 0x72, 0x65, 0x70, 0x65, - 0x61, 0x74, 0x5f, 0x70, 0x65, 0x6e, 0x61, 0x6c, 0x74, 0x79, 0x22, 0x2c, - 0x20, 0x73, 0x74, 0x65, 0x70, 0x3a, 0x20, 0x30, 0x2e, 0x30, 0x31, 0x2c, - 0x20, 0x76, 0x61, 0x6c, 
0x75, 0x65, 0x3a, 0x20, 0x70, 0x61, 0x72, 0x61, - 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x72, 0x65, 0x70, - 0x65, 0x61, 0x74, 0x5f, 0x70, 0x65, 0x6e, 0x61, 0x6c, 0x74, 0x79, 0x7d, - 0x29, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x24, 0x7b, 0x49, 0x6e, 0x74, 0x46, 0x69, 0x65, 0x6c, - 0x64, 0x28, 0x7b, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3a, 0x20, 0x22, 0x43, - 0x6f, 0x6e, 0x73, 0x69, 0x64, 0x65, 0x72, 0x20, 0x4e, 0x20, 0x74, 0x6f, - 0x6b, 0x65, 0x6e, 0x73, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x70, 0x65, 0x6e, - 0x61, 0x6c, 0x69, 0x7a, 0x65, 0x22, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x3a, - 0x20, 0x32, 0x30, 0x34, 0x38, 0x2c, 0x20, 0x6d, 0x69, 0x6e, 0x3a, 0x20, - 0x30, 0x2c, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3a, 0x20, 0x22, 0x72, 0x65, - 0x70, 0x65, 0x61, 0x74, 0x5f, 0x6c, 0x61, 0x73, 0x74, 0x5f, 0x6e, 0x22, - 0x2c, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x20, 0x70, 0x61, 0x72, - 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x72, 0x65, - 0x70, 0x65, 0x61, 0x74, 0x5f, 0x6c, 0x61, 0x73, 0x74, 0x5f, 0x6e, 0x7d, - 0x29, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x24, 0x7b, 0x49, 0x6e, 0x74, 0x46, 0x69, 0x65, 0x6c, - 0x64, 0x28, 0x7b, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3a, 0x20, 0x22, 0x54, - 0x6f, 0x70, 0x2d, 0x4b, 0x20, 0x73, 0x61, 0x6d, 0x70, 0x6c, 0x69, 0x6e, - 0x67, 0x22, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x3a, 0x20, 0x31, 0x30, 0x30, - 0x2c, 0x20, 0x6d, 0x69, 0x6e, 0x3a, 0x20, 0x2d, 0x31, 0x2c, 0x20, 0x6e, - 0x61, 0x6d, 0x65, 0x3a, 0x20, 0x22, 0x74, 0x6f, 0x70, 0x5f, 0x6b, 0x22, - 0x2c, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x20, 0x70, 0x61, 0x72, - 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x74, 0x6f, - 0x70, 0x5f, 0x6b, 0x7d, 0x29, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x24, 0x7b, 0x46, 0x6c, 0x6f, - 0x61, 0x74, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x28, 0x7b, 0x6c, 0x61, 0x62, - 0x65, 0x6c, 0x3a, 0x20, 0x22, 0x54, 0x6f, 0x70, 0x2d, 0x50, 0x20, 0x73, - 0x61, 0x6d, 0x70, 0x6c, 0x69, 0x6e, 0x67, 0x22, 0x2c, 0x20, 0x6d, 0x61, - 0x78, 0x3a, 0x20, 0x31, 0x2e, 0x30, 0x2c, 0x20, 0x6d, 0x69, 0x6e, 0x3a, - 0x20, 0x30, 0x2e, 0x30, 0x2c, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3a, 0x20, - 0x22, 0x74, 0x6f, 0x70, 0x5f, 0x70, 0x22, 0x2c, 0x20, 0x73, 0x74, 0x65, - 0x70, 0x3a, 0x20, 0x30, 0x2e, 0x30, 0x31, 0x2c, 0x20, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x3a, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x74, 0x6f, 0x70, 0x5f, 0x70, 0x7d, 0x29, - 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x2f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, 0x74, 0x3e, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, - 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x73, 0x75, 0x6d, - 0x6d, 0x61, 0x72, 0x79, 0x3e, 0x4d, 0x6f, 0x72, 0x65, 0x20, 0x6f, 0x70, - 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x3c, 0x2f, 0x73, 0x75, 0x6d, 0x6d, 0x61, - 0x72, 0x79, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, - 0x74, 0x20, 0x63, 0x6c, 0x61, 0x73, 0x73, 0x3d, 0x22, 0x74, 0x77, 0x6f, - 0x22, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x24, 0x7b, 0x46, 0x6c, 0x6f, 0x61, 0x74, - 0x46, 0x69, 0x65, 0x6c, 0x64, 0x28, 0x7b, 0x6c, 0x61, 0x62, 0x65, 0x6c, - 0x3a, 0x20, 0x22, 0x54, 
0x46, 0x53, 0x2d, 0x5a, 0x22, 0x2c, 0x20, 0x6d, - 0x61, 0x78, 0x3a, 0x20, 0x31, 0x2e, 0x30, 0x2c, 0x20, 0x6d, 0x69, 0x6e, - 0x3a, 0x20, 0x30, 0x2e, 0x30, 0x2c, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3a, - 0x20, 0x22, 0x74, 0x66, 0x73, 0x5f, 0x7a, 0x22, 0x2c, 0x20, 0x73, 0x74, - 0x65, 0x70, 0x3a, 0x20, 0x30, 0x2e, 0x30, 0x31, 0x2c, 0x20, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x3a, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, - 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x74, 0x66, 0x73, 0x5f, 0x7a, 0x7d, - 0x29, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x24, 0x7b, 0x46, 0x6c, 0x6f, 0x61, 0x74, - 0x46, 0x69, 0x65, 0x6c, 0x64, 0x28, 0x7b, 0x6c, 0x61, 0x62, 0x65, 0x6c, - 0x3a, 0x20, 0x22, 0x54, 0x79, 0x70, 0x69, 0x63, 0x61, 0x6c, 0x20, 0x50, - 0x22, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x3a, 0x20, 0x31, 0x2e, 0x30, 0x2c, - 0x20, 0x6d, 0x69, 0x6e, 0x3a, 0x20, 0x30, 0x2e, 0x30, 0x2c, 0x20, 0x6e, - 0x61, 0x6d, 0x65, 0x3a, 0x20, 0x22, 0x74, 0x79, 0x70, 0x69, 0x63, 0x61, - 0x6c, 0x5f, 0x70, 0x22, 0x2c, 0x20, 0x73, 0x74, 0x65, 0x70, 0x3a, 0x20, - 0x30, 0x2e, 0x30, 0x31, 0x2c, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, - 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x2e, 0x74, 0x79, 0x70, 0x69, 0x63, 0x61, 0x6c, 0x5f, 0x70, 0x7d, - 0x29, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x24, 0x7b, 0x46, 0x6c, 0x6f, 0x61, 0x74, - 0x46, 0x69, 0x65, 0x6c, 0x64, 0x28, 0x7b, 0x6c, 0x61, 0x62, 0x65, 0x6c, - 0x3a, 0x20, 0x22, 0x50, 0x72, 0x65, 0x73, 0x65, 0x6e, 0x63, 0x65, 0x20, - 0x70, 0x65, 0x6e, 0x61, 0x6c, 0x74, 0x79, 0x22, 0x2c, 0x20, 0x6d, 0x61, - 0x78, 0x3a, 0x20, 0x31, 0x2e, 0x30, 0x2c, 0x20, 0x6d, 0x69, 0x6e, 0x3a, - 0x20, 0x30, 0x2e, 0x30, 0x2c, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3a, 0x20, - 0x22, 0x70, 0x72, 0x65, 0x73, 0x65, 0x6e, 0x63, 0x65, 0x5f, 0x70, 0x65, - 0x6e, 0x61, 0x6c, 0x74, 0x79, 0x22, 0x2c, 0x20, 0x73, 0x74, 0x65, 0x70, - 0x3a, 0x20, 0x30, 0x2e, 0x30, 0x31, 0x2c, 0x20, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x3a, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x2e, 0x70, 0x72, 0x65, 0x73, 0x65, 0x6e, 0x63, 0x65, - 0x5f, 0x70, 0x65, 0x6e, 0x61, 0x6c, 0x74, 0x79, 0x7d, 0x29, 0x7d, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x24, 0x7b, 0x46, 0x6c, 0x6f, 0x61, 0x74, 0x46, 0x69, 0x65, - 0x6c, 0x64, 0x28, 0x7b, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3a, 0x20, 0x22, - 0x46, 0x72, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x63, 0x79, 0x20, 0x70, 0x65, - 0x6e, 0x61, 0x6c, 0x74, 0x79, 0x22, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x3a, - 0x20, 0x31, 0x2e, 0x30, 0x2c, 0x20, 0x6d, 0x69, 0x6e, 0x3a, 0x20, 0x30, - 0x2e, 0x30, 0x2c, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3a, 0x20, 0x22, 0x66, - 0x72, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x63, 0x79, 0x5f, 0x70, 0x65, 0x6e, - 0x61, 0x6c, 0x74, 0x79, 0x22, 0x2c, 0x20, 0x73, 0x74, 0x65, 0x70, 0x3a, - 0x20, 0x30, 0x2e, 0x30, 0x31, 0x2c, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, - 0x3a, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x2e, 0x66, 0x72, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x63, 0x79, - 0x5f, 0x70, 0x65, 0x6e, 0x61, 0x6c, 0x74, 0x79, 0x7d, 0x29, 0x7d, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x2f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, 0x74, 0x3e, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x68, 0x72, 0x20, 0x2f, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 
0x20, 0x20, 0x20, 0x3c, 0x66, 0x69, 0x65, 0x6c, - 0x64, 0x73, 0x65, 0x74, 0x20, 0x63, 0x6c, 0x61, 0x73, 0x73, 0x3d, 0x22, - 0x74, 0x68, 0x72, 0x65, 0x65, 0x22, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, - 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x6c, 0x61, 0x62, - 0x65, 0x6c, 0x3e, 0x3c, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x20, 0x74, 0x79, - 0x70, 0x65, 0x3d, 0x22, 0x72, 0x61, 0x64, 0x69, 0x6f, 0x22, 0x20, 0x6e, - 0x61, 0x6d, 0x65, 0x3d, 0x22, 0x6d, 0x69, 0x72, 0x6f, 0x73, 0x74, 0x61, - 0x74, 0x22, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3d, 0x22, 0x30, 0x22, - 0x20, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x65, 0x64, 0x3d, 0x24, 0x7b, 0x70, - 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, - 0x6d, 0x69, 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, 0x20, 0x3d, 0x3d, 0x20, - 0x30, 0x7d, 0x20, 0x6f, 0x6e, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x3d, 0x24, - 0x7b, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x50, 0x61, 0x72, 0x61, 0x6d, - 0x73, 0x49, 0x6e, 0x74, 0x7d, 0x20, 0x2f, 0x3e, 0x20, 0x6e, 0x6f, 0x20, - 0x4d, 0x69, 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, 0x3c, 0x2f, 0x6c, 0x61, - 0x62, 0x65, 0x6c, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x6c, 0x61, - 0x62, 0x65, 0x6c, 0x3e, 0x3c, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x20, 0x74, - 0x79, 0x70, 0x65, 0x3d, 0x22, 0x72, 0x61, 0x64, 0x69, 0x6f, 0x22, 0x20, - 0x6e, 0x61, 0x6d, 0x65, 0x3d, 0x22, 0x6d, 0x69, 0x72, 0x6f, 0x73, 0x74, - 0x61, 0x74, 0x22, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3d, 0x22, 0x31, - 0x22, 0x20, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x65, 0x64, 0x3d, 0x24, 0x7b, - 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, - 0x2e, 0x6d, 0x69, 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, 0x20, 0x3d, 0x3d, - 0x20, 0x31, 0x7d, 0x20, 0x6f, 0x6e, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x3d, - 0x24, 0x7b, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x50, 0x61, 0x72, 0x61, - 0x6d, 0x73, 0x49, 0x6e, 0x74, 0x7d, 0x20, 0x2f, 0x3e, 0x20, 0x4d, 0x69, - 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, 0x20, 0x76, 0x31, 0x3c, 0x2f, 0x6c, - 0x61, 0x62, 0x65, 0x6c, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x6c, - 0x61, 0x62, 0x65, 0x6c, 0x3e, 0x3c, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x20, - 0x74, 0x79, 0x70, 0x65, 0x3d, 0x22, 0x72, 0x61, 0x64, 0x69, 0x6f, 0x22, - 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3d, 0x22, 0x6d, 0x69, 0x72, 0x6f, 0x73, - 0x74, 0x61, 0x74, 0x22, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3d, 0x22, - 0x32, 0x22, 0x20, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x65, 0x64, 0x3d, 0x24, - 0x7b, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x2e, 0x6d, 0x69, 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, 0x20, 0x3d, - 0x3d, 0x20, 0x32, 0x7d, 0x20, 0x6f, 0x6e, 0x69, 0x6e, 0x70, 0x75, 0x74, - 0x3d, 0x24, 0x7b, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x50, 0x61, 0x72, - 0x61, 0x6d, 0x73, 0x49, 0x6e, 0x74, 0x7d, 0x20, 0x2f, 0x3e, 0x20, 0x4d, - 0x69, 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, 0x20, 0x76, 0x32, 0x3c, 0x2f, - 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x64, - 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x24, 0x7b, 0x46, 0x6c, 0x6f, 0x61, - 0x74, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x28, 0x7b, 0x6c, 0x61, 0x62, 0x65, - 0x6c, 0x3a, 0x20, 0x22, 
0x4d, 0x69, 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, - 0x20, 0x74, 0x61, 0x75, 0x22, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x3a, 0x20, - 0x31, 0x30, 0x2e, 0x30, 0x2c, 0x20, 0x6d, 0x69, 0x6e, 0x3a, 0x20, 0x30, - 0x2e, 0x30, 0x2c, 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3a, 0x20, 0x22, 0x6d, - 0x69, 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, 0x5f, 0x74, 0x61, 0x75, 0x22, - 0x2c, 0x20, 0x73, 0x74, 0x65, 0x70, 0x3a, 0x20, 0x30, 0x2e, 0x30, 0x31, - 0x2c, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x20, 0x70, 0x61, 0x72, - 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x6d, 0x69, - 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, 0x5f, 0x74, 0x61, 0x75, 0x7d, 0x29, - 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x24, 0x7b, 0x46, 0x6c, 0x6f, 0x61, 0x74, 0x46, - 0x69, 0x65, 0x6c, 0x64, 0x28, 0x7b, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3a, - 0x20, 0x22, 0x4d, 0x69, 0x72, 0x6f, 0x73, 0x74, 0x61, 0x74, 0x20, 0x65, - 0x74, 0x61, 0x22, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x3a, 0x20, 0x31, 0x2e, - 0x30, 0x2c, 0x20, 0x6d, 0x69, 0x6e, 0x3a, 0x20, 0x30, 0x2e, 0x30, 0x2c, - 0x20, 0x6e, 0x61, 0x6d, 0x65, 0x3a, 0x20, 0x22, 0x6d, 0x69, 0x72, 0x6f, - 0x73, 0x74, 0x61, 0x74, 0x5f, 0x65, 0x74, 0x61, 0x22, 0x2c, 0x20, 0x73, - 0x74, 0x65, 0x70, 0x3a, 0x20, 0x30, 0x2e, 0x30, 0x31, 0x2c, 0x20, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, - 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x6d, 0x69, 0x72, 0x6f, 0x73, - 0x74, 0x61, 0x74, 0x5f, 0x65, 0x74, 0x61, 0x7d, 0x29, 0x7d, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, - 0x2f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, 0x74, 0x3e, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, - 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, 0x74, 0x3e, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x24, 0x7b, 0x49, 0x6e, 0x74, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x28, 0x7b, - 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x3a, 0x20, 0x22, 0x53, 0x68, 0x6f, 0x77, - 0x20, 0x50, 0x72, 0x6f, 0x62, 0x61, 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, - 0x65, 0x73, 0x22, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x3a, 0x20, 0x31, 0x30, - 0x2c, 0x20, 0x6d, 0x69, 0x6e, 0x3a, 0x20, 0x30, 0x2c, 0x20, 0x6e, 0x61, - 0x6d, 0x65, 0x3a, 0x20, 0x22, 0x6e, 0x5f, 0x70, 0x72, 0x6f, 0x62, 0x73, - 0x22, 0x2c, 0x20, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x20, 0x70, 0x61, - 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x6e, - 0x5f, 0x70, 0x72, 0x6f, 0x62, 0x73, 0x7d, 0x29, 0x7d, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, - 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x65, 0x74, 0x3e, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x64, 0x65, - 0x74, 0x61, 0x69, 0x6c, 0x73, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x66, 0x6f, 0x72, 0x6d, 0x3e, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, - 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, - 0x70, 0x72, 0x6f, 0x62, 0x43, 0x6f, 0x6c, 0x6f, 0x72, 0x20, 0x3d, 0x20, - 0x28, 0x70, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x72, 0x20, 0x3d, - 0x20, 0x4d, 0x61, 0x74, 0x68, 0x2e, 0x66, 0x6c, 0x6f, 0x6f, 0x72, 0x28, - 0x31, 0x39, 0x32, 0x20, 0x2a, 0x20, 0x28, 0x31, 0x20, 0x2d, 0x20, 0x70, - 0x29, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, - 0x6e, 0x73, 0x74, 0x20, 
0x67, 0x20, 0x3d, 0x20, 0x4d, 0x61, 0x74, 0x68, - 0x2e, 0x66, 0x6c, 0x6f, 0x6f, 0x72, 0x28, 0x31, 0x39, 0x32, 0x20, 0x2a, - 0x20, 0x70, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, - 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x60, 0x72, 0x67, 0x62, 0x61, 0x28, - 0x24, 0x7b, 0x72, 0x7d, 0x2c, 0x24, 0x7b, 0x67, 0x7d, 0x2c, 0x30, 0x2c, - 0x30, 0x2e, 0x33, 0x29, 0x60, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, - 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, - 0x50, 0x72, 0x6f, 0x62, 0x61, 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, - 0x73, 0x20, 0x3d, 0x20, 0x28, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x29, - 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, - 0x73, 0x2e, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x6d, 0x61, 0x70, 0x28, 0x6d, - 0x73, 0x67, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x7b, 0x20, - 0x63, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, - 0x72, 0x6f, 0x62, 0x61, 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, 0x73, - 0x20, 0x7d, 0x20, 0x3d, 0x20, 0x6d, 0x73, 0x67, 0x3b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x21, 0x63, 0x6f, - 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x72, 0x6f, - 0x62, 0x61, 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, 0x73, 0x20, 0x7c, - 0x7c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, - 0x72, 0x6f, 0x62, 0x61, 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, 0x73, - 0x2e, 0x6c, 0x65, 0x6e, 0x67, 0x74, 0x68, 0x20, 0x3d, 0x3d, 0x3d, 0x20, - 0x30, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x29, 0x20, - 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x6d, 0x73, 0x67, 0x2e, 0x63, - 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, 0x63, 0x6f, 0x6d, 0x70, - 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x72, 0x6f, 0x62, 0x61, - 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, 0x73, 0x2e, 0x6c, 0x65, 0x6e, - 0x67, 0x74, 0x68, 0x20, 0x3e, 0x20, 0x31, 0x29, 0x20, 0x7b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, - 0x4e, 0x6f, 0x74, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x62, 0x79, 0x74, 0x65, - 0x20, 0x70, 0x61, 0x69, 0x72, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, 0x63, 0x6f, 0x6d, 0x70, - 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x72, 0x6f, 0x62, 0x61, - 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, 0x73, 0x5b, 0x30, 0x5d, 0x2e, - 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x2e, 0x73, 0x74, 0x61, 0x72, - 0x74, 0x73, 0x57, 0x69, 0x74, 0x68, 0x28, 0x27, 0x62, 0x79, 0x74, 0x65, - 0x3a, 0x20, 0x5c, 0x5c, 0x27, 0x29, 0x29, 0x20, 0x72, 0x65, 0x74, 0x75, - 0x72, 0x6e, 0x20, 0x6d, 0x73, 0x67, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x65, - 0x6e, 0x74, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x73, 0x70, 0x6c, 0x69, - 0x74, 0x44, 0x61, 0x74, 0x61, 0x20, 0x3d, 0x20, 0x63, 0x6f, 0x6d, 0x70, - 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x72, 0x6f, 0x62, 0x61, - 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, 0x73, 0x2e, 0x6d, 0x61, 0x70, - 0x28, 0x70, 0x72, 0x6f, 0x62, 0x20, 0x3d, 0x3e, 0x20, 0x28, 0x7b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 
0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x3a, 0x20, 0x70, 0x72, 0x6f, - 0x62, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x2c, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, - 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x72, - 0x6f, 0x62, 0x61, 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, 0x73, 0x3a, - 0x20, 0x5b, 0x70, 0x72, 0x6f, 0x62, 0x5d, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x29, 0x29, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, - 0x72, 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x3c, 0x24, 0x7b, 0x50, - 0x72, 0x6f, 0x62, 0x61, 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, 0x73, - 0x7d, 0x20, 0x64, 0x61, 0x74, 0x61, 0x3d, 0x24, 0x7b, 0x73, 0x70, 0x6c, - 0x69, 0x74, 0x44, 0x61, 0x74, 0x61, 0x7d, 0x20, 0x2f, 0x3e, 0x60, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, - 0x20, 0x7b, 0x20, 0x70, 0x72, 0x6f, 0x62, 0x73, 0x2c, 0x20, 0x63, 0x6f, - 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x20, 0x7d, 0x20, 0x3d, 0x20, 0x63, 0x6f, - 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x72, 0x6f, - 0x62, 0x61, 0x62, 0x69, 0x6c, 0x69, 0x74, 0x69, 0x65, 0x73, 0x5b, 0x30, - 0x5d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, - 0x6e, 0x73, 0x74, 0x20, 0x66, 0x6f, 0x75, 0x6e, 0x64, 0x20, 0x3d, 0x20, - 0x70, 0x72, 0x6f, 0x62, 0x73, 0x2e, 0x66, 0x69, 0x6e, 0x64, 0x28, 0x70, - 0x20, 0x3d, 0x3e, 0x20, 0x70, 0x2e, 0x74, 0x6f, 0x6b, 0x5f, 0x73, 0x74, - 0x72, 0x20, 0x3d, 0x3d, 0x3d, 0x20, 0x6d, 0x73, 0x67, 0x2e, 0x63, 0x6f, - 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x70, 0x43, 0x6f, - 0x6c, 0x6f, 0x72, 0x20, 0x3d, 0x20, 0x66, 0x6f, 0x75, 0x6e, 0x64, 0x20, - 0x3f, 0x20, 0x70, 0x72, 0x6f, 0x62, 0x43, 0x6f, 0x6c, 0x6f, 0x72, 0x28, - 0x66, 0x6f, 0x75, 0x6e, 0x64, 0x2e, 0x70, 0x72, 0x6f, 0x62, 0x29, 0x20, - 0x3a, 0x20, 0x27, 0x74, 0x72, 0x61, 0x6e, 0x73, 0x70, 0x61, 0x72, 0x65, - 0x6e, 0x74, 0x27, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x70, 0x6f, 0x70, 0x6f, 0x76, - 0x65, 0x72, 0x43, 0x68, 0x69, 0x6c, 0x64, 0x72, 0x65, 0x6e, 0x20, 0x3d, - 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, 0x69, 0x76, 0x20, 0x63, 0x6c, - 0x61, 0x73, 0x73, 0x3d, 0x22, 0x70, 0x72, 0x6f, 0x62, 0x2d, 0x73, 0x65, - 0x74, 0x22, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x24, 0x7b, 0x70, 0x72, 0x6f, 0x62, 0x73, 0x2e, - 0x6d, 0x61, 0x70, 0x28, 0x28, 0x70, 0x2c, 0x20, 0x69, 0x6e, 0x64, 0x65, - 0x78, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, - 0x74, 0x75, 0x72, 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x64, 0x69, 0x76, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x6b, 0x65, 0x79, 0x3d, 0x24, 0x7b, 0x69, 0x6e, 0x64, 0x65, - 0x78, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x69, 0x74, - 0x6c, 0x65, 0x3d, 0x24, 
0x7b, 0x60, 0x70, 0x72, 0x6f, 0x62, 0x3a, 0x20, - 0x24, 0x7b, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x62, 0x7d, 0x60, 0x7d, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x73, 0x74, 0x79, 0x6c, 0x65, 0x3d, - 0x24, 0x7b, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x70, 0x61, 0x64, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x27, 0x30, 0x2e, - 0x33, 0x65, 0x6d, 0x27, 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x62, 0x61, 0x63, 0x6b, 0x67, 0x72, 0x6f, 0x75, 0x6e, 0x64, - 0x43, 0x6f, 0x6c, 0x6f, 0x72, 0x3a, 0x20, 0x70, 0x2e, 0x74, 0x6f, 0x6b, - 0x5f, 0x73, 0x74, 0x72, 0x20, 0x3d, 0x3d, 0x3d, 0x20, 0x63, 0x6f, 0x6e, - 0x74, 0x65, 0x6e, 0x74, 0x20, 0x3f, 0x20, 0x70, 0x72, 0x6f, 0x62, 0x43, - 0x6f, 0x6c, 0x6f, 0x72, 0x28, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x62, 0x29, - 0x20, 0x3a, 0x20, 0x27, 0x74, 0x72, 0x61, 0x6e, 0x73, 0x70, 0x61, 0x72, - 0x65, 0x6e, 0x74, 0x27, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, - 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x3c, 0x73, 0x70, 0x61, 0x6e, 0x3e, 0x24, 0x7b, 0x70, 0x2e, - 0x74, 0x6f, 0x6b, 0x5f, 0x73, 0x74, 0x72, 0x7d, 0x3a, 0x20, 0x3c, 0x2f, - 0x73, 0x70, 0x61, 0x6e, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x73, 0x70, 0x61, 0x6e, 0x3e, 0x24, 0x7b, 0x4d, 0x61, 0x74, 0x68, - 0x2e, 0x66, 0x6c, 0x6f, 0x6f, 0x72, 0x28, 0x70, 0x2e, 0x70, 0x72, 0x6f, - 0x62, 0x20, 0x2a, 0x20, 0x31, 0x30, 0x30, 0x29, 0x7d, 0x25, 0x3c, 0x2f, - 0x73, 0x70, 0x61, 0x6e, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, - 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x60, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x29, 0x7d, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, - 0x2f, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x60, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, - 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x24, 0x7b, 0x50, 0x6f, 0x70, 0x6f, 0x76, 0x65, 0x72, 0x7d, 0x20, - 0x73, 0x74, 0x79, 0x6c, 0x65, 0x3d, 0x24, 0x7b, 0x7b, 0x20, 0x62, 0x61, - 0x63, 0x6b, 0x67, 0x72, 0x6f, 0x75, 0x6e, 0x64, 0x43, 0x6f, 0x6c, 0x6f, - 0x72, 0x3a, 0x20, 0x70, 0x43, 0x6f, 0x6c, 0x6f, 0x72, 0x20, 0x7d, 0x7d, - 0x20, 0x70, 0x6f, 0x70, 0x6f, 0x76, 0x65, 0x72, 0x43, 0x68, 0x69, 0x6c, - 0x64, 0x72, 0x65, 0x6e, 0x3d, 0x24, 0x7b, 0x70, 0x6f, 0x70, 0x6f, 0x76, - 0x65, 0x72, 0x43, 0x68, 0x69, 0x6c, 0x64, 0x72, 0x65, 0x6e, 0x7d, 0x3e, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x24, 0x7b, 0x6d, 0x73, 0x67, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x65, - 0x6e, 0x74, 0x2e, 0x6d, 0x61, 0x74, 0x63, 0x68, 0x28, 0x2f, 0x5c, 0x6e, - 0x2f, 0x67, 0x69, 0x6d, 0x29, 0x20, 0x3f, 0x20, 0x68, 0x74, 0x6d, 0x6c, - 0x60, 0x3c, 0x62, 0x72, 
0x20, 0x2f, 0x3e, 0x60, 0x20, 0x3a, 0x20, 0x6d, - 0x73, 0x67, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x7d, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, - 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x60, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x29, 0x3b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, - 0x70, 0x6f, 0x6f, 0x72, 0x20, 0x6d, 0x61, 0x6e, 0x73, 0x20, 0x6d, 0x61, - 0x72, 0x6b, 0x64, 0x6f, 0x77, 0x6e, 0x20, 0x72, 0x65, 0x70, 0x6c, 0x61, - 0x63, 0x65, 0x6d, 0x65, 0x6e, 0x74, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, - 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x4d, 0x61, 0x72, 0x6b, 0x64, 0x6f, 0x77, - 0x6e, 0x69, 0x73, 0x68, 0x20, 0x3d, 0x20, 0x28, 0x70, 0x61, 0x72, 0x61, - 0x6d, 0x73, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x6d, 0x64, 0x20, - 0x3d, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x73, 0x2e, 0x74, 0x65, 0x78, - 0x74, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, 0x72, - 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x28, 0x2f, 0x26, 0x2f, 0x67, 0x2c, - 0x20, 0x27, 0x26, 0x61, 0x6d, 0x70, 0x3b, 0x27, 0x29, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, 0x72, 0x65, 0x70, 0x6c, 0x61, - 0x63, 0x65, 0x28, 0x2f, 0x3c, 0x2f, 0x67, 0x2c, 0x20, 0x27, 0x26, 0x6c, - 0x74, 0x3b, 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x2e, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x28, 0x2f, 0x3e, - 0x2f, 0x67, 0x2c, 0x20, 0x27, 0x26, 0x67, 0x74, 0x3b, 0x27, 0x29, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, 0x72, 0x65, 0x70, - 0x6c, 0x61, 0x63, 0x65, 0x28, 0x2f, 0x5e, 0x23, 0x7b, 0x31, 0x2c, 0x36, - 0x7d, 0x20, 0x28, 0x2e, 0x2a, 0x29, 0x24, 0x2f, 0x67, 0x69, 0x6d, 0x2c, - 0x20, 0x27, 0x3c, 0x68, 0x33, 0x3e, 0x24, 0x31, 0x3c, 0x2f, 0x68, 0x33, - 0x3e, 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x2e, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x28, 0x2f, 0x5c, 0x2a, - 0x5c, 0x2a, 0x28, 0x2e, 0x2a, 0x3f, 0x29, 0x5c, 0x2a, 0x5c, 0x2a, 0x2f, - 0x67, 0x2c, 0x20, 0x27, 0x3c, 0x73, 0x74, 0x72, 0x6f, 0x6e, 0x67, 0x3e, - 0x24, 0x31, 0x3c, 0x2f, 0x73, 0x74, 0x72, 0x6f, 0x6e, 0x67, 0x3e, 0x27, - 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, 0x72, - 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x28, 0x2f, 0x5f, 0x5f, 0x28, 0x2e, - 0x2a, 0x3f, 0x29, 0x5f, 0x5f, 0x2f, 0x67, 0x2c, 0x20, 0x27, 0x3c, 0x73, - 0x74, 0x72, 0x6f, 0x6e, 0x67, 0x3e, 0x24, 0x31, 0x3c, 0x2f, 0x73, 0x74, - 0x72, 0x6f, 0x6e, 0x67, 0x3e, 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x2e, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, - 0x28, 0x2f, 0x5c, 0x2a, 0x28, 0x2e, 0x2a, 0x3f, 0x29, 0x5c, 0x2a, 0x2f, - 0x67, 0x2c, 0x20, 0x27, 0x3c, 0x65, 0x6d, 0x3e, 0x24, 0x31, 0x3c, 0x2f, - 0x65, 0x6d, 0x3e, 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x2e, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x28, 0x2f, - 0x5f, 0x28, 0x2e, 0x2a, 0x3f, 0x29, 0x5f, 0x2f, 0x67, 0x2c, 0x20, 0x27, - 0x3c, 0x65, 0x6d, 0x3e, 0x24, 0x31, 0x3c, 0x2f, 0x65, 0x6d, 0x3e, 0x27, - 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, 0x72, - 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x28, 0x2f, 0x60, 0x60, 0x60, 0x2e, - 0x2a, 0x3f, 0x5c, 0x6e, 0x28, 0x5b, 0x5c, 0x73, 0x5c, 0x53, 0x5d, 0x2a, - 0x3f, 0x29, 0x60, 0x60, 0x60, 0x2f, 0x67, 0x2c, 0x20, 0x27, 0x3c, 0x70, - 0x72, 0x65, 0x3e, 0x3c, 0x63, 0x6f, 0x64, 0x65, 0x3e, 0x24, 0x31, 0x3c, - 0x2f, 0x63, 0x6f, 0x64, 
0x65, 0x3e, 0x3c, 0x2f, 0x70, 0x72, 0x65, 0x3e, - 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, - 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x28, 0x2f, 0x60, 0x28, 0x2e, - 0x2a, 0x3f, 0x29, 0x60, 0x2f, 0x67, 0x2c, 0x20, 0x27, 0x3c, 0x63, 0x6f, - 0x64, 0x65, 0x3e, 0x24, 0x31, 0x3c, 0x2f, 0x63, 0x6f, 0x64, 0x65, 0x3e, - 0x27, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x2e, - 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x28, 0x2f, 0x5c, 0x6e, 0x2f, - 0x67, 0x69, 0x6d, 0x2c, 0x20, 0x27, 0x3c, 0x62, 0x72, 0x20, 0x2f, 0x3e, - 0x27, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, - 0x74, 0x75, 0x72, 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x3c, 0x73, - 0x70, 0x61, 0x6e, 0x20, 0x64, 0x61, 0x6e, 0x67, 0x65, 0x72, 0x6f, 0x75, - 0x73, 0x6c, 0x79, 0x53, 0x65, 0x74, 0x49, 0x6e, 0x6e, 0x65, 0x72, 0x48, - 0x54, 0x4d, 0x4c, 0x3d, 0x24, 0x7b, 0x7b, 0x20, 0x5f, 0x5f, 0x68, 0x74, - 0x6d, 0x6c, 0x3a, 0x20, 0x6d, 0x64, 0x20, 0x7d, 0x7d, 0x20, 0x2f, 0x3e, - 0x60, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x3b, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x4d, 0x6f, 0x64, - 0x65, 0x6c, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, - 0x49, 0x6e, 0x66, 0x6f, 0x20, 0x3d, 0x20, 0x28, 0x70, 0x61, 0x72, 0x61, - 0x6d, 0x73, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, 0x21, 0x6c, 0x6c, 0x61, 0x6d, - 0x61, 0x53, 0x74, 0x61, 0x74, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, - 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, - 0x3c, 0x73, 0x70, 0x61, 0x6e, 0x2f, 0x3e, 0x60, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, - 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x73, 0x70, 0x61, - 0x6e, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x24, 0x7b, 0x6c, 0x6c, 0x61, 0x6d, 0x61, 0x53, 0x74, 0x61, 0x74, - 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x70, 0x72, 0x65, 0x64, - 0x69, 0x63, 0x74, 0x65, 0x64, 0x5f, 0x70, 0x65, 0x72, 0x5f, 0x74, 0x6f, - 0x6b, 0x65, 0x6e, 0x5f, 0x6d, 0x73, 0x2e, 0x74, 0x6f, 0x46, 0x69, 0x78, - 0x65, 0x64, 0x28, 0x29, 0x7d, 0x6d, 0x73, 0x20, 0x70, 0x65, 0x72, 0x20, - 0x74, 0x6f, 0x6b, 0x65, 0x6e, 0x2c, 0x20, 0x24, 0x7b, 0x6c, 0x6c, 0x61, - 0x6d, 0x61, 0x53, 0x74, 0x61, 0x74, 0x73, 0x2e, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x2e, 0x70, 0x72, 0x65, 0x64, 0x69, 0x63, 0x74, 0x65, 0x64, 0x5f, - 0x70, 0x65, 0x72, 0x5f, 0x73, 0x65, 0x63, 0x6f, 0x6e, 0x64, 0x2e, 0x74, - 0x6f, 0x46, 0x69, 0x78, 0x65, 0x64, 0x28, 0x32, 0x29, 0x7d, 0x20, 0x74, - 0x6f, 0x6b, 0x65, 0x6e, 0x73, 0x20, 0x70, 0x65, 0x72, 0x20, 0x73, 0x65, - 0x63, 0x6f, 0x6e, 0x64, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x3c, 0x2f, 0x73, 0x70, 0x61, 0x6e, 0x3e, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x73, 0x69, 0x6d, 0x70, 0x6c, - 0x65, 0x20, 0x70, 0x6f, 0x70, 0x6f, 0x76, 0x65, 0x72, 0x20, 0x69, 0x6d, - 0x70, 0x6c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, - 0x20, 0x50, 0x6f, 0x70, 0x6f, 0x76, 0x65, 0x72, 0x20, 0x3d, 0x20, 0x28, - 0x70, 0x72, 0x6f, 0x70, 0x73, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, - 0x69, 0x73, 0x4f, 0x70, 
0x65, 0x6e, 0x20, 0x3d, 0x20, 0x75, 0x73, 0x65, - 0x53, 0x69, 0x67, 0x6e, 0x61, 0x6c, 0x28, 0x66, 0x61, 0x6c, 0x73, 0x65, - 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, - 0x73, 0x74, 0x20, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x20, - 0x3d, 0x20, 0x75, 0x73, 0x65, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x6c, 0x28, - 0x7b, 0x20, 0x74, 0x6f, 0x70, 0x3a, 0x20, 0x27, 0x30, 0x70, 0x78, 0x27, - 0x2c, 0x20, 0x6c, 0x65, 0x66, 0x74, 0x3a, 0x20, 0x27, 0x30, 0x70, 0x78, - 0x27, 0x20, 0x7d, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x62, 0x75, 0x74, 0x74, 0x6f, 0x6e, - 0x52, 0x65, 0x66, 0x20, 0x3d, 0x20, 0x75, 0x73, 0x65, 0x52, 0x65, 0x66, - 0x28, 0x6e, 0x75, 0x6c, 0x6c, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x70, 0x6f, 0x70, 0x6f, - 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, 0x20, 0x3d, 0x20, 0x75, 0x73, 0x65, - 0x52, 0x65, 0x66, 0x28, 0x6e, 0x75, 0x6c, 0x6c, 0x29, 0x3b, 0x0a, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, - 0x74, 0x6f, 0x67, 0x67, 0x6c, 0x65, 0x50, 0x6f, 0x70, 0x6f, 0x76, 0x65, - 0x72, 0x20, 0x3d, 0x20, 0x28, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, - 0x62, 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x52, 0x65, 0x66, 0x2e, 0x63, 0x75, - 0x72, 0x72, 0x65, 0x6e, 0x74, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x74, - 0x20, 0x72, 0x65, 0x63, 0x74, 0x20, 0x3d, 0x20, 0x62, 0x75, 0x74, 0x74, - 0x6f, 0x6e, 0x52, 0x65, 0x66, 0x2e, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, - 0x74, 0x2e, 0x67, 0x65, 0x74, 0x42, 0x6f, 0x75, 0x6e, 0x64, 0x69, 0x6e, - 0x67, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x63, 0x74, 0x28, - 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x6f, 0x70, 0x3a, - 0x20, 0x60, 0x24, 0x7b, 0x72, 0x65, 0x63, 0x74, 0x2e, 0x62, 0x6f, 0x74, - 0x74, 0x6f, 0x6d, 0x20, 0x2b, 0x20, 0x77, 0x69, 0x6e, 0x64, 0x6f, 0x77, - 0x2e, 0x73, 0x63, 0x72, 0x6f, 0x6c, 0x6c, 0x59, 0x7d, 0x70, 0x78, 0x60, - 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x6c, 0x65, 0x66, 0x74, 0x3a, 0x20, 0x60, 0x24, 0x7b, 0x72, - 0x65, 0x63, 0x74, 0x2e, 0x6c, 0x65, 0x66, 0x74, 0x20, 0x2b, 0x20, 0x77, - 0x69, 0x6e, 0x64, 0x6f, 0x77, 0x2e, 0x73, 0x63, 0x72, 0x6f, 0x6c, 0x6c, - 0x58, 0x7d, 0x70, 0x78, 0x60, 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x69, 0x73, 0x4f, 0x70, 0x65, 0x6e, 0x2e, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x20, 0x3d, 0x20, 0x21, 0x69, 0x73, 0x4f, 0x70, 0x65, 0x6e, - 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x3b, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6e, 0x73, 0x74, 0x20, 0x68, 0x61, 0x6e, 0x64, 0x6c, 0x65, - 0x43, 0x6c, 0x69, 0x63, 0x6b, 0x4f, 0x75, 0x74, 0x73, 0x69, 0x64, 0x65, - 0x20, 0x3d, 0x20, 0x28, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x29, 0x20, 0x3d, - 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x69, 0x66, 0x20, 0x28, 0x70, 0x6f, 0x70, 0x6f, 0x76, 0x65, 0x72, 0x52, - 0x65, 0x66, 0x2e, 0x63, 
0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x26, - 0x26, 0x20, 0x21, 0x70, 0x6f, 0x70, 0x6f, 0x76, 0x65, 0x72, 0x52, 0x65, - 0x66, 0x2e, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x2e, 0x63, 0x6f, - 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x73, 0x28, 0x65, 0x76, 0x65, 0x6e, 0x74, - 0x2e, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x29, 0x20, 0x26, 0x26, 0x20, - 0x21, 0x62, 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x52, 0x65, 0x66, 0x2e, 0x63, - 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, - 0x69, 0x6e, 0x73, 0x28, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x2e, 0x74, 0x61, - 0x72, 0x67, 0x65, 0x74, 0x29, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x69, 0x73, 0x4f, 0x70, 0x65, - 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x3d, 0x20, 0x66, 0x61, - 0x6c, 0x73, 0x65, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x3b, 0x0a, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x75, 0x73, 0x65, 0x45, 0x66, - 0x66, 0x65, 0x63, 0x74, 0x28, 0x28, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x64, 0x6f, 0x63, - 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x2e, 0x61, 0x64, 0x64, 0x45, 0x76, 0x65, - 0x6e, 0x74, 0x4c, 0x69, 0x73, 0x74, 0x65, 0x6e, 0x65, 0x72, 0x28, 0x27, - 0x6d, 0x6f, 0x75, 0x73, 0x65, 0x64, 0x6f, 0x77, 0x6e, 0x27, 0x2c, 0x20, - 0x68, 0x61, 0x6e, 0x64, 0x6c, 0x65, 0x43, 0x6c, 0x69, 0x63, 0x6b, 0x4f, - 0x75, 0x74, 0x73, 0x69, 0x64, 0x65, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, - 0x28, 0x29, 0x20, 0x3d, 0x3e, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x64, 0x6f, 0x63, 0x75, 0x6d, 0x65, - 0x6e, 0x74, 0x2e, 0x72, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x45, 0x76, 0x65, - 0x6e, 0x74, 0x4c, 0x69, 0x73, 0x74, 0x65, 0x6e, 0x65, 0x72, 0x28, 0x27, - 0x6d, 0x6f, 0x75, 0x73, 0x65, 0x64, 0x6f, 0x77, 0x6e, 0x27, 0x2c, 0x20, - 0x68, 0x61, 0x6e, 0x64, 0x6c, 0x65, 0x43, 0x6c, 0x69, 0x63, 0x6b, 0x4f, - 0x75, 0x74, 0x73, 0x69, 0x64, 0x65, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x2c, 0x20, 0x5b, 0x5d, 0x29, 0x3b, 0x0a, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, - 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x3c, 0x73, 0x70, 0x61, 0x6e, 0x20, 0x73, 0x74, 0x79, 0x6c, - 0x65, 0x3d, 0x24, 0x7b, 0x70, 0x72, 0x6f, 0x70, 0x73, 0x2e, 0x73, 0x74, - 0x79, 0x6c, 0x65, 0x7d, 0x20, 0x72, 0x65, 0x66, 0x3d, 0x24, 0x7b, 0x62, - 0x75, 0x74, 0x74, 0x6f, 0x6e, 0x52, 0x65, 0x66, 0x7d, 0x20, 0x6f, 0x6e, - 0x43, 0x6c, 0x69, 0x63, 0x6b, 0x3d, 0x24, 0x7b, 0x74, 0x6f, 0x67, 0x67, - 0x6c, 0x65, 0x50, 0x6f, 0x70, 0x6f, 0x76, 0x65, 0x72, 0x7d, 0x3e, 0x24, - 0x7b, 0x70, 0x72, 0x6f, 0x70, 0x73, 0x2e, 0x63, 0x68, 0x69, 0x6c, 0x64, - 0x72, 0x65, 0x6e, 0x7d, 0x3c, 0x2f, 0x73, 0x70, 0x61, 0x6e, 0x3e, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x24, 0x7b, 0x69, 0x73, - 0x4f, 0x70, 0x65, 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x26, - 0x26, 0x20, 0x68, 0x74, 0x6d, 0x6c, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x24, 0x7b, 0x50, 0x6f, 0x72, - 0x74, 0x61, 0x6c, 0x7d, 0x20, 0x69, 0x6e, 0x74, 0x6f, 0x3d, 0x22, 0x23, - 0x70, 0x6f, 0x72, 0x74, 0x61, 0x6c, 0x22, 0x3e, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x64, 0x69, - 0x76, 0x0a, 0x20, 0x20, 
0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x66, 0x3d, 0x24, 0x7b, 0x70, 0x6f, - 0x70, 0x6f, 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, 0x7d, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6c, 0x61, 0x73, 0x73, 0x3d, 0x22, 0x70, 0x6f, 0x70, 0x6f, 0x76, - 0x65, 0x72, 0x2d, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x22, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x73, 0x74, 0x79, 0x6c, 0x65, 0x3d, 0x24, 0x7b, 0x7b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x74, 0x6f, 0x70, 0x3a, 0x20, 0x70, 0x6f, 0x73, - 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, - 0x74, 0x6f, 0x70, 0x2c, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x6c, 0x65, 0x66, - 0x74, 0x3a, 0x20, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x2e, - 0x76, 0x61, 0x6c, 0x75, 0x65, 0x2e, 0x6c, 0x65, 0x66, 0x74, 0x2c, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x7d, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x24, 0x7b, 0x70, - 0x72, 0x6f, 0x70, 0x73, 0x2e, 0x70, 0x6f, 0x70, 0x6f, 0x76, 0x65, 0x72, - 0x43, 0x68, 0x69, 0x6c, 0x64, 0x72, 0x65, 0x6e, 0x7d, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, - 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x24, 0x7b, 0x50, 0x6f, 0x72, 0x74, 0x61, - 0x6c, 0x7d, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x60, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x60, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x7d, 0x3b, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x2f, 0x2f, 0x20, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x3a, 0x20, 0x70, - 0x72, 0x65, 0x61, 0x63, 0x74, 0x2d, 0x70, 0x6f, 0x72, 0x74, 0x61, 0x6c, - 0x20, 0x28, 0x68, 0x74, 0x74, 0x70, 0x73, 0x3a, 0x2f, 0x2f, 0x67, 0x69, - 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x65, 0x76, - 0x65, 0x6c, 0x6f, 0x70, 0x69, 0x74, 0x2f, 0x70, 0x72, 0x65, 0x61, 0x63, - 0x74, 0x2d, 0x70, 0x6f, 0x72, 0x74, 0x61, 0x6c, 0x2f, 0x62, 0x6c, 0x6f, - 0x62, 0x2f, 0x6d, 0x61, 0x73, 0x74, 0x65, 0x72, 0x2f, 0x73, 0x72, 0x63, - 0x2f, 0x70, 0x72, 0x65, 0x61, 0x63, 0x74, 0x2d, 0x70, 0x6f, 0x72, 0x74, - 0x61, 0x6c, 0x2e, 0x6a, 0x73, 0x29, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x2f, - 0x2a, 0x2a, 0x20, 0x52, 0x65, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x20, - 0x72, 0x65, 0x6e, 0x64, 0x65, 0x72, 0x69, 0x6e, 0x67, 0x20, 0x6f, 0x66, - 0x20, 0x64, 0x65, 0x73, 0x63, 0x65, 0x6e, 0x64, 0x61, 0x6e, 0x74, 0x73, - 0x20, 0x69, 0x6e, 0x74, 0x6f, 0x20, 0x74, 0x68, 0x65, 0x20, 0x67, 0x69, - 0x76, 0x65, 0x6e, 0x20, 0x43, 0x53, 0x53, 0x20, 0x73, 0x65, 0x6c, 0x65, - 0x63, 0x74, 0x6f, 0x72, 0x20, 0x2a, 0x2f, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6c, 0x61, 0x73, 0x73, 0x20, 0x50, 0x6f, 0x72, 0x74, 0x61, 0x6c, - 0x20, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x64, 0x73, 0x20, 0x43, 0x6f, 0x6d, - 0x70, 0x6f, 0x6e, 0x65, 0x6e, 0x74, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6d, 0x70, 0x6f, 0x6e, 0x65, 0x6e, 0x74, - 0x44, 0x69, 0x64, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x28, 0x70, 0x72, - 0x6f, 0x70, 0x73, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x66, 
0x6f, 0x72, 0x20, 0x28, 0x6c, 0x65, 0x74, 0x20, - 0x69, 0x20, 0x69, 0x6e, 0x20, 0x70, 0x72, 0x6f, 0x70, 0x73, 0x29, 0x20, - 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x69, 0x66, 0x20, 0x28, 0x70, 0x72, 0x6f, 0x70, 0x73, 0x5b, 0x69, 0x5d, - 0x20, 0x21, 0x3d, 0x3d, 0x20, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x70, 0x72, - 0x6f, 0x70, 0x73, 0x5b, 0x69, 0x5d, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, - 0x74, 0x75, 0x72, 0x6e, 0x20, 0x73, 0x65, 0x74, 0x54, 0x69, 0x6d, 0x65, - 0x6f, 0x75, 0x74, 0x28, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x72, 0x65, 0x6e, - 0x64, 0x65, 0x72, 0x4c, 0x61, 0x79, 0x65, 0x72, 0x29, 0x3b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x63, 0x6f, 0x6d, 0x70, 0x6f, 0x6e, 0x65, 0x6e, 0x74, 0x44, 0x69, 0x64, - 0x4d, 0x6f, 0x75, 0x6e, 0x74, 0x28, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x69, - 0x73, 0x4d, 0x6f, 0x75, 0x6e, 0x74, 0x65, 0x64, 0x20, 0x3d, 0x20, 0x74, - 0x72, 0x75, 0x65, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x72, - 0x4c, 0x61, 0x79, 0x65, 0x72, 0x20, 0x3d, 0x20, 0x74, 0x68, 0x69, 0x73, - 0x2e, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x72, 0x4c, 0x61, 0x79, 0x65, 0x72, - 0x2e, 0x62, 0x69, 0x6e, 0x64, 0x28, 0x74, 0x68, 0x69, 0x73, 0x29, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x68, 0x69, - 0x73, 0x2e, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x72, 0x4c, 0x61, 0x79, 0x65, - 0x72, 0x28, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, - 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63, 0x6f, 0x6d, 0x70, - 0x6f, 0x6e, 0x65, 0x6e, 0x74, 0x57, 0x69, 0x6c, 0x6c, 0x55, 0x6e, 0x6d, - 0x6f, 0x75, 0x6e, 0x74, 0x28, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x72, 0x65, - 0x6e, 0x64, 0x65, 0x72, 0x4c, 0x61, 0x79, 0x65, 0x72, 0x28, 0x66, 0x61, - 0x6c, 0x73, 0x65, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x69, 0x73, 0x4d, 0x6f, 0x75, - 0x6e, 0x74, 0x65, 0x64, 0x20, 0x3d, 0x20, 0x66, 0x61, 0x6c, 0x73, 0x65, - 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x69, 0x66, - 0x20, 0x28, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x72, 0x65, 0x6d, 0x6f, 0x74, - 0x65, 0x20, 0x26, 0x26, 0x20, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x72, 0x65, - 0x6d, 0x6f, 0x74, 0x65, 0x2e, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x4e, - 0x6f, 0x64, 0x65, 0x29, 0x20, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x72, 0x65, - 0x6d, 0x6f, 0x74, 0x65, 0x2e, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x4e, - 0x6f, 0x64, 0x65, 0x2e, 0x72, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x43, 0x68, - 0x69, 0x6c, 0x64, 0x28, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x72, 0x65, 0x6d, - 0x6f, 0x74, 0x65, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x66, 0x69, 0x6e, - 0x64, 0x4e, 0x6f, 0x64, 0x65, 0x28, 0x6e, 0x6f, 0x64, 0x65, 0x29, 0x20, - 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, - 0x74, 0x75, 0x72, 0x6e, 0x20, 0x74, 0x79, 0x70, 0x65, 0x6f, 0x66, 0x20, - 0x6e, 0x6f, 0x64, 0x65, 0x20, 0x3d, 0x3d, 0x3d, 0x20, 0x27, 0x73, 0x74, - 0x72, 0x69, 0x6e, 0x67, 0x27, 0x20, 0x3f, 0x20, 0x64, 0x6f, 0x63, 0x75, - 0x6d, 0x65, 0x6e, 0x74, 
0x2e, 0x71, 0x75, 0x65, 0x72, 0x79, 0x53, 0x65, - 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x28, 0x6e, 0x6f, 0x64, 0x65, 0x29, - 0x20, 0x3a, 0x20, 0x6e, 0x6f, 0x64, 0x65, 0x3b, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x72, 0x65, 0x6e, 0x64, 0x65, 0x72, 0x4c, 0x61, 0x79, 0x65, 0x72, 0x28, - 0x73, 0x68, 0x6f, 0x77, 0x20, 0x3d, 0x20, 0x74, 0x72, 0x75, 0x65, 0x29, - 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x69, - 0x66, 0x20, 0x28, 0x21, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x69, 0x73, 0x4d, - 0x6f, 0x75, 0x6e, 0x74, 0x65, 0x64, 0x29, 0x20, 0x72, 0x65, 0x74, 0x75, - 0x72, 0x6e, 0x3b, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x2f, 0x2f, 0x20, 0x63, 0x6c, 0x65, 0x61, 0x6e, 0x20, 0x75, 0x70, - 0x20, 0x6f, 0x6c, 0x64, 0x20, 0x6e, 0x6f, 0x64, 0x65, 0x20, 0x69, 0x66, - 0x20, 0x6d, 0x6f, 0x76, 0x69, 0x6e, 0x67, 0x20, 0x62, 0x61, 0x73, 0x65, - 0x73, 0x3a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x69, - 0x66, 0x20, 0x28, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x70, - 0x73, 0x2e, 0x69, 0x6e, 0x74, 0x6f, 0x20, 0x21, 0x3d, 0x3d, 0x20, 0x74, - 0x68, 0x69, 0x73, 0x2e, 0x69, 0x6e, 0x74, 0x6f, 0x50, 0x6f, 0x69, 0x6e, - 0x74, 0x65, 0x72, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x69, 0x6e, - 0x74, 0x6f, 0x50, 0x6f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x20, 0x3d, 0x20, - 0x74, 0x68, 0x69, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x70, 0x73, 0x2e, 0x69, - 0x6e, 0x74, 0x6f, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x69, 0x66, 0x20, 0x28, 0x74, 0x68, 0x69, 0x73, 0x2e, - 0x69, 0x6e, 0x74, 0x6f, 0x20, 0x26, 0x26, 0x20, 0x74, 0x68, 0x69, 0x73, - 0x2e, 0x72, 0x65, 0x6d, 0x6f, 0x74, 0x65, 0x29, 0x20, 0x7b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, - 0x68, 0x69, 0x73, 0x2e, 0x72, 0x65, 0x6d, 0x6f, 0x74, 0x65, 0x20, 0x3d, - 0x20, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x72, 0x28, 0x68, 0x74, 0x6d, 0x6c, - 0x60, 0x3c, 0x24, 0x7b, 0x50, 0x6f, 0x72, 0x74, 0x61, 0x6c, 0x50, 0x72, - 0x6f, 0x78, 0x79, 0x7d, 0x20, 0x2f, 0x3e, 0x60, 0x2c, 0x20, 0x74, 0x68, - 0x69, 0x73, 0x2e, 0x69, 0x6e, 0x74, 0x6f, 0x2c, 0x20, 0x74, 0x68, 0x69, - 0x73, 0x2e, 0x72, 0x65, 0x6d, 0x6f, 0x74, 0x65, 0x29, 0x3b, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x74, 0x68, 0x69, - 0x73, 0x2e, 0x69, 0x6e, 0x74, 0x6f, 0x20, 0x3d, 0x20, 0x74, 0x68, 0x69, - 0x73, 0x2e, 0x66, 0x69, 0x6e, 0x64, 0x4e, 0x6f, 0x64, 0x65, 0x28, 0x74, - 0x68, 0x69, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x70, 0x73, 0x2e, 0x69, 0x6e, - 0x74, 0x6f, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x74, 0x68, 0x69, 0x73, 0x2e, 0x72, 0x65, 0x6d, 0x6f, 0x74, 0x65, 0x20, - 0x3d, 0x20, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x72, 0x28, 0x68, 0x74, 0x6d, - 0x6c, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x3c, 0x24, 0x7b, 0x50, 0x6f, 0x72, 0x74, 0x61, 0x6c, 0x50, 0x72, - 0x6f, 0x78, 0x79, 0x7d, 0x20, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x78, 0x74, - 0x3d, 0x24, 0x7b, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x63, 0x6f, 0x6e, 0x74, - 0x65, 0x78, 0x74, 0x7d, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x24, 0x7b, 0x73, 0x68, 0x6f, 0x77, - 0x20, 0x26, 0x26, 0x20, 0x74, 0x68, 0x69, 0x73, 0x2e, 0x70, 0x72, 0x6f, - 0x70, 0x73, 0x2e, 0x63, 
0x68, 0x69, 0x6c, 0x64, 0x72, 0x65, 0x6e, 0x20, - 0x7c, 0x7c, 0x20, 0x6e, 0x75, 0x6c, 0x6c, 0x7d, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x24, 0x7b, 0x50, - 0x6f, 0x72, 0x74, 0x61, 0x6c, 0x50, 0x72, 0x6f, 0x78, 0x79, 0x7d, 0x3e, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x60, 0x2c, 0x20, - 0x74, 0x68, 0x69, 0x73, 0x2e, 0x69, 0x6e, 0x74, 0x6f, 0x2c, 0x20, 0x74, - 0x68, 0x69, 0x73, 0x2e, 0x72, 0x65, 0x6d, 0x6f, 0x74, 0x65, 0x29, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x72, 0x28, 0x29, - 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, - 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x6e, 0x75, 0x6c, 0x6c, 0x3b, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x68, 0x69, 0x67, - 0x68, 0x2d, 0x6f, 0x72, 0x64, 0x65, 0x72, 0x20, 0x63, 0x6f, 0x6d, 0x70, - 0x6f, 0x6e, 0x65, 0x6e, 0x74, 0x20, 0x74, 0x68, 0x61, 0x74, 0x20, 0x72, - 0x65, 0x6e, 0x64, 0x65, 0x72, 0x73, 0x20, 0x69, 0x74, 0x73, 0x20, 0x66, - 0x69, 0x72, 0x73, 0x74, 0x20, 0x63, 0x68, 0x69, 0x6c, 0x64, 0x20, 0x69, - 0x66, 0x20, 0x69, 0x74, 0x20, 0x65, 0x78, 0x69, 0x73, 0x74, 0x73, 0x2e, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x2f, 0x20, 0x75, 0x73, 0x65, 0x64, - 0x20, 0x61, 0x73, 0x20, 0x61, 0x20, 0x63, 0x6f, 0x6e, 0x64, 0x69, 0x74, - 0x69, 0x6f, 0x6e, 0x61, 0x6c, 0x20, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x72, - 0x69, 0x6e, 0x67, 0x20, 0x70, 0x72, 0x6f, 0x78, 0x79, 0x2e, 0x0a, 0x20, - 0x20, 0x20, 0x20, 0x63, 0x6c, 0x61, 0x73, 0x73, 0x20, 0x50, 0x6f, 0x72, - 0x74, 0x61, 0x6c, 0x50, 0x72, 0x6f, 0x78, 0x79, 0x20, 0x65, 0x78, 0x74, - 0x65, 0x6e, 0x64, 0x73, 0x20, 0x43, 0x6f, 0x6d, 0x70, 0x6f, 0x6e, 0x65, - 0x6e, 0x74, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x67, - 0x65, 0x74, 0x43, 0x68, 0x69, 0x6c, 0x64, 0x43, 0x6f, 0x6e, 0x74, 0x65, - 0x78, 0x74, 0x28, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x74, 0x68, - 0x69, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x70, 0x73, 0x2e, 0x63, 0x6f, 0x6e, - 0x74, 0x65, 0x78, 0x74, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x7d, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x6e, 0x64, - 0x65, 0x72, 0x28, 0x7b, 0x20, 0x63, 0x68, 0x69, 0x6c, 0x64, 0x72, 0x65, - 0x6e, 0x20, 0x7d, 0x29, 0x20, 0x7b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x63, 0x68, - 0x69, 0x6c, 0x64, 0x72, 0x65, 0x6e, 0x20, 0x7c, 0x7c, 0x20, 0x6e, 0x75, - 0x6c, 0x6c, 0x3b, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x66, - 0x75, 0x6e, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x41, 0x70, 0x70, 0x28, - 0x70, 0x72, 0x6f, 0x70, 0x73, 0x29, 0x20, 0x7b, 0x0a, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x68, - 0x74, 0x6d, 0x6c, 0x60, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x3c, 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x68, 0x65, 0x61, 0x64, 0x65, 0x72, - 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x3c, 0x68, 0x31, 0x3e, 0x6c, 0x6c, 0x61, 0x6d, 0x61, 0x2e, - 0x63, 0x70, 0x70, 0x3c, 0x2f, 0x68, 0x31, 0x3e, 0x0a, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x68, 0x65, 0x61, - 0x64, 0x65, 0x72, 0x3e, 
0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x3c, 0x6d, 0x61, 0x69, 0x6e, 0x20, 0x69, 0x64, - 0x3d, 0x22, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x22, 0x3e, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x3c, 0x24, 0x7b, 0x63, 0x68, 0x61, 0x74, 0x53, 0x74, 0x61, 0x72, 0x74, - 0x65, 0x64, 0x2e, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x20, 0x3f, 0x20, 0x43, - 0x68, 0x61, 0x74, 0x4c, 0x6f, 0x67, 0x20, 0x3a, 0x20, 0x43, 0x6f, 0x6e, - 0x66, 0x69, 0x67, 0x46, 0x6f, 0x72, 0x6d, 0x7d, 0x20, 0x2f, 0x3e, 0x0a, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, - 0x6d, 0x61, 0x69, 0x6e, 0x3e, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x73, 0x65, 0x63, 0x74, 0x69, 0x6f, - 0x6e, 0x20, 0x69, 0x64, 0x3d, 0x22, 0x77, 0x72, 0x69, 0x74, 0x65, 0x22, - 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x3c, 0x24, 0x7b, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, - 0x49, 0x6e, 0x70, 0x75, 0x74, 0x7d, 0x20, 0x2f, 0x3e, 0x0a, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x73, 0x65, - 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x3e, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x66, 0x6f, 0x6f, 0x74, 0x65, - 0x72, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x70, 0x3e, 0x3c, 0x24, 0x7b, 0x4d, 0x6f, 0x64, - 0x65, 0x6c, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, - 0x49, 0x6e, 0x66, 0x6f, 0x7d, 0x20, 0x2f, 0x3e, 0x3c, 0x2f, 0x70, 0x3e, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x3c, 0x70, 0x3e, 0x50, 0x6f, 0x77, 0x65, 0x72, 0x65, 0x64, 0x20, - 0x62, 0x79, 0x20, 0x3c, 0x61, 0x20, 0x68, 0x72, 0x65, 0x66, 0x3d, 0x22, - 0x68, 0x74, 0x74, 0x70, 0x73, 0x3a, 0x2f, 0x2f, 0x67, 0x69, 0x74, 0x68, - 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x67, 0x65, 0x72, 0x67, - 0x61, 0x6e, 0x6f, 0x76, 0x2f, 0x6c, 0x6c, 0x61, 0x6d, 0x61, 0x2e, 0x63, - 0x70, 0x70, 0x22, 0x3e, 0x6c, 0x6c, 0x61, 0x6d, 0x61, 0x2e, 0x63, 0x70, - 0x70, 0x3c, 0x2f, 0x61, 0x3e, 0x20, 0x61, 0x6e, 0x64, 0x20, 0x3c, 0x61, - 0x20, 0x68, 0x72, 0x65, 0x66, 0x3d, 0x22, 0x68, 0x74, 0x74, 0x70, 0x73, - 0x3a, 0x2f, 0x2f, 0x67, 0x67, 0x6d, 0x6c, 0x2e, 0x61, 0x69, 0x22, 0x3e, - 0x67, 0x67, 0x6d, 0x6c, 0x2e, 0x61, 0x69, 0x3c, 0x2f, 0x61, 0x3e, 0x2e, - 0x3c, 0x2f, 0x70, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, - 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x66, 0x6f, 0x6f, 0x74, 0x65, 0x72, 0x3e, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x3c, 0x2f, 0x64, - 0x69, 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x60, 0x3b, - 0x0a, 0x20, 0x20, 0x20, 0x20, 0x7d, 0x0a, 0x0a, 0x20, 0x20, 0x20, 0x20, - 0x72, 0x65, 0x6e, 0x64, 0x65, 0x72, 0x28, 0x68, 0x28, 0x41, 0x70, 0x70, - 0x29, 0x2c, 0x20, 0x64, 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x2e, - 0x71, 0x75, 0x65, 0x72, 0x79, 0x53, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, - 0x72, 0x28, 0x27, 0x23, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, - 0x72, 0x27, 0x29, 0x29, 0x3b, 0x0a, 0x20, 0x20, 0x3c, 0x2f, 0x73, 0x63, - 0x72, 0x69, 0x70, 0x74, 0x3e, 0x0a, 0x3c, 0x2f, 0x68, 0x65, 0x61, 0x64, - 0x3e, 0x0a, 0x0a, 0x3c, 0x62, 0x6f, 0x64, 0x79, 0x3e, 0x0a, 0x20, 0x20, - 0x3c, 0x64, 0x69, 0x76, 0x20, 0x69, 0x64, 0x3d, 0x22, 0x63, 0x6f, 0x6e, - 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x22, 0x3e, 0x3c, 0x2f, 0x64, 0x69, - 0x76, 0x3e, 0x0a, 0x20, 0x20, 0x3c, 0x64, 0x69, 0x76, 0x20, 0x69, 0x64, - 0x3d, 0x22, 0x70, 0x6f, 
0x72, 0x74, 0x61, 0x6c, 0x22, 0x3e, 0x3c, 0x2f, - 0x64, 0x69, 0x76, 0x3e, 0x0a, 0x3c, 0x2f, 0x62, 0x6f, 0x64, 0x79, 0x3e, - 0x0a, 0x0a, 0x3c, 0x2f, 0x68, 0x74, 0x6d, 0x6c, 0x3e, 0x0a -}; -unsigned int index_html_len = 28018; diff --git a/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/docker/build.sh b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/docker/build.sh deleted file mode 100644 index d70b747ff1282b9c60cf8b8b352b6b7701e1a254..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/docker/build.sh +++ /dev/null @@ -1,5 +0,0 @@ -#!/bin/bash - -BASEDIR="$(dirname $0)" - -docker build -t windj007/lama -f "$BASEDIR/Dockerfile" "$BASEDIR" diff --git a/spaces/Jeff2323/ai-comic-factory/src/app/page.tsx b/spaces/Jeff2323/ai-comic-factory/src/app/page.tsx deleted file mode 100644 index f3dc36ec97a1782ef32c28611e1f64df9ae1a44f..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/app/page.tsx +++ /dev/null @@ -1,28 +0,0 @@ -"use server" - -import Head from "next/head" - -import Main from "./main" -import { TooltipProvider } from "@/components/ui/tooltip" - -// https://nextjs.org/docs/pages/building-your-application/optimizing/fonts - -export default async function IndexPage({ params: { ownerId } }: { params: { ownerId: string }}) { - return ( - <> - - - - - -
- -
- -
- - ) -} \ No newline at end of file diff --git a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Guineapig/directorytest.py b/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Guineapig/directorytest.py deleted file mode 100644 index 0371ea59ff66d1883d9feeb0a34d85ab568713c0..0000000000000000000000000000000000000000 --- a/spaces/JohnCalimoso/animalbreedidentificationversion1.5/Control/Guineapig/directorytest.py +++ /dev/null @@ -1,9 +0,0 @@ -import os - -relative_path = "../../Model/Guineapig/resnetxSVM/resnet_EXTRACTOR.h5" -absolute_path = os.path.abspath(relative_path) - -if os.path.exists(absolute_path): - print("The file exists at:", absolute_path) -else: - print("The file does not exist at:", absolute_path) \ No newline at end of file diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/base_model.py b/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/base_model.py deleted file mode 100644 index fa94579d725dbf9d739d58fc17b35bc2248c7fcd..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/base_model.py +++ /dev/null @@ -1,787 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback -import pathlib - -from tqdm import tqdm -import colorama -from duckduckgo_search import DDGS -from itertools import islice -import asyncio -import aiohttp -from enum import Enum - -from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler -from langchain.callbacks.manager import BaseCallbackManager - -from typing import Any, Dict, List, Optional, Union - -from langchain.callbacks.base import BaseCallbackHandler -from langchain.input import print_text -from langchain.schema import AgentAction, AgentFinish, LLMResult -from threading import Thread, Condition -from collections import deque -from langchain.chat_models.base import BaseChatModel -from langchain.schema import HumanMessage, AIMessage, SystemMessage, BaseMessage - -from ..presets import * -from ..index_func import * -from ..utils import * -from .. import shared -from ..config import retrieve_proxy - - -class CallbackToIterator: - def __init__(self): - self.queue = deque() - self.cond = Condition() - self.finished = False - - def callback(self, result): - with self.cond: - self.queue.append(result) - self.cond.notify() # Wake up the generator. - - def __iter__(self): - return self - - def __next__(self): - with self.cond: - # Wait for a value to be added to the queue. - while not self.queue and not self.finished: - self.cond.wait() - if not self.queue: - raise StopIteration() - return self.queue.popleft() - - def finish(self): - with self.cond: - self.finished = True - self.cond.notify() # Wake up the generator if it's waiting. - - -def get_action_description(text): - match = re.search('```(.*?)```', text, re.S) - json_text = match.group(1) - # 把json转化为python字典 - json_dict = json.loads(json_text) - # 提取'action'和'action_input'的值 - action_name = json_dict['action'] - action_input = json_dict['action_input'] - if action_name != "Final Answer": - return f'

<!-- S O PREFIX --><p class="agent-prefix">{action_name}: {action_input}\n\n</p><!-- E O PREFIX -->
' - else: - return "" - - -class ChuanhuCallbackHandler(BaseCallbackHandler): - - def __init__(self, callback) -> None: - """Initialize callback handler.""" - self.callback = callback - - def on_agent_action( - self, action: AgentAction, color: Optional[str] = None, **kwargs: Any - ) -> Any: - self.callback(get_action_description(action.log)) - - def on_tool_end( - self, - output: str, - color: Optional[str] = None, - observation_prefix: Optional[str] = None, - llm_prefix: Optional[str] = None, - **kwargs: Any, - ) -> None: - """If not the final action, print out observation.""" - # if observation_prefix is not None: - # self.callback(f"\n\n{observation_prefix}") - # self.callback(output) - # if llm_prefix is not None: - # self.callback(f"\n\n{llm_prefix}") - if observation_prefix is not None: - logging.info(observation_prefix) - self.callback(output) - if llm_prefix is not None: - logging.info(llm_prefix) - - def on_agent_finish( - self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any - ) -> None: - # self.callback(f"{finish.log}\n\n") - logging.info(finish.log) - - def on_llm_new_token(self, token: str, **kwargs: Any) -> None: - """Run on new LLM token. Only available when streaming is enabled.""" - self.callback(token) - - def on_chat_model_start(self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) -> Any: - """Run when a chat model starts running.""" - pass - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - StableLM = 4 - MOSS = 5 - YuanAI = 6 - Minimax = 7 - ChuanhuAgent = 8 - GooglePaLM = 9 - LangchainChat = 10 - Midjourney = 11 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - elif "stablelm" in model_name_lower: - model_type = ModelType.StableLM - elif "moss" in model_name_lower: - model_type = ModelType.MOSS - elif "yuanai" in model_name_lower: - model_type = ModelType.YuanAI - elif "minimax" in model_name_lower: - model_type = ModelType.Minimax - elif "川虎助理" in model_name_lower: - model_type = ModelType.ChuanhuAgent - elif "palm" in model_name_lower: - model_type = ModelType.GooglePaLM - elif "midjourney" in model_name_lower: - model_type = ModelType.Midjourney - elif "azure" in model_name_lower or "api" in model_name_lower: - model_type = ModelType.LangchainChat - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt=INITIAL_SYSTEM_PROMPT, - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - 
self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning( - "stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning( - "at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - # logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - if display_append: - display_append = '\n\n
' + display_append - partial_text = "" - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token( - construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - \ - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot, language): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - index = construct_index(self.api_key, file_src=files) - status = i18n("索引构建完成") - return gr.Files.update(), chatbot, status - - def summarize_index(self, files, chatbot, language): - status = gr.Markdown.update() - if files: - index = construct_index(self.api_key, file_src=files) - status = i18n("总结完成") - logging.info(i18n("生成内容总结中……")) - os.environ["OPENAI_API_KEY"] = self.api_key - from langchain.chains.summarize import load_summarize_chain - from langchain.prompts import PromptTemplate - from langchain.chat_models import ChatOpenAI - from langchain.callbacks import StdOutCallbackHandler - prompt_template = "Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY IN " + language + ":" - PROMPT = PromptTemplate( - template=prompt_template, input_variables=["text"]) - llm = ChatOpenAI() - chain = load_summarize_chain( - llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT) - summary = chain({"input_documents": list(index.docstore.__dict__[ - "_dict"].values())}, return_only_outputs=True)["output_text"] - print(i18n("总结") + f": {summary}") - chatbot.append([i18n("上传了")+str(len(files))+"个文件", summary]) - return chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.vectorstores.base import VectorStoreRetriever - limited_context = True - msg = "加载索引中……" - logging.info(msg) - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - with retrieve_proxy(): - retriever = VectorStoreRetriever(vectorstore=index, search_type="similarity_score_threshold", search_kwargs={ - "k": 6, "score_threshold": 0.5}) - relevant_documents = retriever.get_relevant_documents( - real_inputs) - reference_results = [[d.page_content.strip("�"), os.path.basename( - d.metadata["source"])] for d in relevant_documents] - reference_results = 
add_source_numbers(reference_results) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - search_results = [] - with DDGS() as ddgs: - ddgs_gen = ddgs.text(real_inputs, backend="lite") - for r in islice(ddgs_gen, 10): - search_results.append(r) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result['href']).host - reference_results.append([result['body'], result['href']]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"{idx+1}. {result['title']}" - ) - reference_results = add_source_numbers(reference_results) - # display_append = "
<ol>\n\n" + "".join(display_append) + "</ol>
" - display_append = '<div class = "source-a">
' + \ - "".join(display_append) + '</div>
' - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "用户" + f"{self.user_identifier}" + "的输入为:" + - colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs( - real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + beautify_err_msg(str(e)) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - self.auto_save(chatbot) - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - if len(self.all_token_counts) > 0: - 
self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - if "*" not in new_access_key: - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - else: - return gr.update(), gr.update() - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - pathlib.Path(os.path.join(HISTORY_DIR, self.user_identifier, new_auto_history_filename( - os.path.join(HISTORY_DIR, self.user_identifier)))).touch() - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - 
self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def auto_save(self, chatbot): - history_file_path = get_history_filepath(self.user_identifier) - save_file(history_file_path, self.system_prompt, - self.history, chatbot, self.user_identifier) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - logging.info(f"filename: {filename}") - if type(filename) != str and filename is not None: - filename = filename.name - try: - if "/" not in filename: - history_file_path = os.path.join( - HISTORY_DIR, user_name, filename) - else: - history_file_path = filename - with open(history_file_path, "r", encoding="utf-8") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return os.path.basename(filename), json_s["system"], json_s["chatbot"] - except: - # 没有对话历史或者对话历史解析失败 - logging.info(f"没有找到对话历史记录 {filename}") - return gr.update(), self.system_prompt, gr.update() - - def delete_chat_history(self, filename, user_name): - if filename == "CANCELED": - return gr.update(), gr.update(), gr.update() - if filename == "": - return i18n("你没有选择任何对话历史"), gr.update(), gr.update() - if not filename.endswith(".json"): - filename += ".json" - if "/" not in filename: - history_file_path = os.path.join(HISTORY_DIR, user_name, filename) - else: - history_file_path = filename - try: - os.remove(history_file_path) - return i18n("删除对话历史成功"), get_history_names(False, user_name), [] - except: - logging.info(f"删除对话历史失败 {history_file_path}") - return i18n("对话历史")+filename+i18n("已经被删除啦"), gr.update(), gr.update() - - def auto_load(self): - if self.user_identifier == "": - self.reset() - return self.system_prompt, gr.update() - history_file_path = get_history_filepath(self.user_identifier) - filename, system_prompt, chatbot = self.load_chat_history( - history_file_path, self.user_identifier) - return system_prompt, chatbot - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() - - -class Base_Chat_Langchain_Client(BaseLLMModel): - def __init__(self, model_name, user_name=""): - super().__init__(model_name, 
user=user_name) - self.need_api_key = False - self.model = self.setup_model() - - def setup_model(self): - # inplement this to setup the model then return it - pass - - def _get_langchain_style_history(self): - history = [SystemMessage(content=self.system_prompt)] - for i in self.history: - if i["role"] == "user": - history.append(HumanMessage(content=i["content"])) - elif i["role"] == "assistant": - history.append(AIMessage(content=i["content"])) - return history - - def get_answer_at_once(self): - assert isinstance( - self.model, BaseChatModel), "model is not instance of LangChain BaseChatModel" - history = self._get_langchain_style_history() - response = self.model.generate(history) - return response.content, sum(response.content) - - def get_answer_stream_iter(self): - it = CallbackToIterator() - assert isinstance( - self.model, BaseChatModel), "model is not instance of LangChain BaseChatModel" - history = self._get_langchain_style_history() - - def thread_func(): - self.model(messages=history, callbacks=[ - ChuanhuCallbackHandler(it.callback)]) - it.finish() - t = Thread(target=thread_func) - t.start() - partial_text = "" - for value in it: - partial_text += value - yield partial_text diff --git a/spaces/KPCGD/bingo/src/components/welcome-screen.tsx b/spaces/KPCGD/bingo/src/components/welcome-screen.tsx deleted file mode 100644 index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/components/welcome-screen.tsx +++ /dev/null @@ -1,34 +0,0 @@ -import { useBing } from '@/lib/hooks/use-bing' - -const exampleMessages = [ - { - heading: '🧐 提出复杂问题', - message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?` - }, - { - heading: '🙌 获取更好的答案', - message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?' - }, - { - heading: '🎨 获得创意灵感', - message: `以海盗的口吻写一首关于外太空鳄鱼的俳句` - } -] - -export function WelcomeScreen({ setInput }: Pick, 'setInput'>) { - return ( -
- {exampleMessages.map(example => ( - - ))} -
- ) -} diff --git a/spaces/KPCGD/bingo/src/lib/bots/bing/sr.ts b/spaces/KPCGD/bingo/src/lib/bots/bing/sr.ts deleted file mode 100644 index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/lib/bots/bing/sr.ts +++ /dev/null @@ -1,106 +0,0 @@ -// @ts-ignore -const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? ( - // @ts-ignore - window.SpeechRecognition || - window.webkitSpeechRecognition || - // @ts-ignore - window.mozSpeechRecognition || - // @ts-ignore - window.msSpeechRecognition || - // @ts-ignore - window.oSpeechRecognition -) as typeof webkitSpeechRecognition : undefined - -type subscriber = (msg: string, command?: string) => void - -export class SR { - recognition?: SpeechRecognition - onchange?: subscriber - transcript: boolean = false - listening: boolean = false - private commandsRe?: RegExp - constructor(commands: string[]) { - this.recognition = SpeechRecognitionPolyfill ? new SpeechRecognitionPolyfill() : undefined - if (!this.recognition) { - return - } - this.configuration('zh-CN') - if (commands.length) { - this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`) - } - this.recognition.onresult = this.speechRecognition - this.recognition.onerror = (err) => { - console.log('err', err.error) - this.stop() - } - this.recognition.onend = () => { - if (this.recognition && this.listening) { - this.recognition.start() - } - } - } - - speechRecognition = (event: SpeechRecognitionEvent) => { - if (!this.listening) return - for (var i = event.resultIndex; i < event.results.length; i++) { - let result = event.results[i] - if (result.isFinal) { - var alt = result[0] - const text = alt.transcript.trim() - if (this.commandsRe && this.commandsRe.test(text)) { - return this.onchange?.('', RegExp.$1) - } - if (!this.transcript) return - this.onchange?.(text) - } - } - } - - private configuration = async (lang: string = 'zh-CN') => { - return new Promise((resolve) => { - if (this.recognition) { - this.recognition.continuous = true - this.recognition.lang = lang - this.recognition.onstart = resolve - } - }) - } - - start = async () => { - if (this.recognition && !this.listening) { - await this.recognition.start() - this.transcript = true - this.listening = true - } - } - - stop = () => { - if (this.recognition) { - this.recognition.stop() - this.transcript = false - this.listening = false - } - } - - - pause = () => { - if (this.recognition) { - this.transcript = false - } - } - - resume = () => { - if (this.recognition) { - this.transcript = true - } - } - - abort = () => { - if (this.recognition && this.transcript) { - this.recognition.abort() - this.transcript = false - this.listening = false - } - } -} - diff --git a/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/backups.py b/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/backups.py deleted file mode 100644 index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/backups.py +++ /dev/null @@ -1,141 +0,0 @@ -import os -import shutil -import hashlib -import time -import base64 - - - - -LOGS_FOLDER = '/content/Applio-RVC-Fork/logs' -WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights' -GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' - -def import_google_drive_backup(): - print("Importing Google Drive backup...") - weights_exist = False - for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH): - for filename 
in files: - filepath = os.path.join(root, filename) - if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')): - backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - shutil.copy2(filepath, backup_filepath) # copy file with metadata - print(f'Imported file from Google Drive backup: {filename}') - elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'): - weights_exist = True - weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights'))) - weights_folderpath = os.path.dirname(weights_filepath) - if not os.path.exists(weights_folderpath): - os.makedirs(weights_folderpath) - print(f'Created weights folder: {weights_folderpath}', flush=True) - shutil.copy2(filepath, weights_filepath) # copy file with metadata - print(f'Imported file from weights: {filename}') - if weights_exist: - print("Copied weights from Google Drive backup to local weights folder.") - else: - print("No weights found in Google Drive backup.") - print("Google Drive backup import completed.") - -def get_md5_hash(file_path): - hash_md5 = hashlib.md5() - with open(file_path, "rb") as f: - for chunk in iter(lambda: f.read(4096), b""): - hash_md5.update(chunk) - return hash_md5.hexdigest() - -def copy_weights_folder_to_drive(): - destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights') - try: - if not os.path.exists(destination_folder): - os.makedirs(destination_folder) - - num_copied = 0 - for filename in os.listdir(WEIGHTS_FOLDER): - if filename.endswith('.pth'): - source_file = os.path.join(WEIGHTS_FOLDER, filename) - destination_file = os.path.join(destination_folder, filename) - if not os.path.exists(destination_file): - shutil.copy2(source_file, destination_file) - num_copied += 1 - print(f"Copied {filename} to Google Drive!") - - if num_copied == 0: - print("No new finished models found for copying.") - else: - print(f"Finished copying {num_copied} files to Google Drive!") - - except Exception as e: - print(f"An error occurred while copying weights: {str(e)}") - # You can log the error or take appropriate actions here. 
- -def backup_files(): - print("\nStarting backup loop...") - last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt') - fully_updated = False # boolean to track if all files are up to date - - while True: - try: - updated = False # flag to check if any files were updated - last_backup_timestamps = {} - - try: - with open(last_backup_timestamps_path, 'r') as f: - last_backup_timestamps = dict(line.strip().split(':') for line in f) - except FileNotFoundError: - pass # File does not exist yet, which is fine - - for root, dirs, files in os.walk(LOGS_FOLDER): - for filename in files: - if filename != 'last_backup_timestamps.txt': - filepath = os.path.join(root, filename) - if os.path.isfile(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - # check if file has changed since last backup - last_backup_timestamp = last_backup_timestamps.get(filepath) - current_timestamp = os.path.getmtime(filepath) - if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp: - shutil.copy2(filepath, backup_filepath) # copy file with metadata - last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp - if last_backup_timestamp is None: - print(f'Backed up file: {filename}') - else: - print(f'Updating backed up file: {filename}') - updated = True - fully_updated = False # if a file is updated, all files are not up to date - - # check if any files were deleted in Colab and delete them from the backup drive - for filepath in list(last_backup_timestamps.keys()): - if not os.path.exists(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - if os.path.exists(backup_filepath): - os.remove(backup_filepath) - print(f'Deleted file: {filepath}') - del last_backup_timestamps[filepath] - updated = True - fully_updated = False # if a file is deleted, all files are not up to date - - if not updated and not fully_updated: - print("Files are up to date.") - fully_updated = True # if all files are up to date, set the boolean to True - copy_weights_folder_to_drive() - sleep_time = 15 - else: - sleep_time = 0.1 - - with open(last_backup_timestamps_path, 'w') as f: - for filepath, timestamp in last_backup_timestamps.items(): - f.write(f'{filepath}:{timestamp}\n') - - time.sleep(sleep_time) # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups - - except Exception as e: - print(f"An error occurred: {str(e)}") - # You can log the error or take appropriate actions here. diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/option.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/option.py deleted file mode 100644 index f66c600b84e0404c7937bacf8653776ce9be74c0..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/option.py +++ /dev/null @@ -1,10 +0,0 @@ -# Default parameters which will be imported by solver -default_hparas = { - 'GRAD_CLIP': 5.0, # Grad. clip threshold - 'PROGRESS_STEP': 100, # Std. output refresh freq. 
- # Decode steps for objective validation (step = ratio*input_txt_len) - 'DEV_STEP_RATIO': 1.2, - # Number of examples (alignment/text) to show in tensorboard - 'DEV_N_EXAMPLE': 4, - 'TB_FLUSH_FREQ': 180 # Update frequency of tensorboard (secs) -} diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/guidml.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/guidml.py deleted file mode 100644 index f883e25cd2c981d8a469ff5d965a2dceeb2d963e..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/guidml.py +++ /dev/null @@ -1,710 +0,0 @@ -""" -0416后的更新: - 引入config中half - 重建npy而不用填写 - v2支持 - 无f0模型支持 - 修复 - - int16: - 增加无索引支持 - f0算法改harvest(怎么看就只有这个会影响CPU占用),但是不这么改效果不好 -""" -import os, sys, traceback, re - -import json - -now_dir = os.getcwd() -sys.path.append(now_dir) -from assets.configs.config import Config - -Config = Config() - -import torch_directml -import PySimpleGUI as sg -import sounddevice as sd -import noisereduce as nr -import numpy as np -from fairseq import checkpoint_utils -import librosa, torch, pyworld, faiss, time, threading -import torch.nn.functional as F -import torchaudio.transforms as tat -import scipy.signal as signal - - -# import matplotlib.pyplot as plt -from lib.infer.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from assets.i18n.i18n import I18nAuto - -i18n = I18nAuto() -device = torch_directml.device(torch_directml.default_device()) -current_dir = os.getcwd() - - -class RVC: - def __init__( - self, key, hubert_path, pth_path, index_path, npy_path, index_rate - ) -> None: - """ - 初始化 - """ - try: - self.f0_up_key = key - self.time_step = 160 / 16000 * 1000 - self.f0_min = 50 - self.f0_max = 1100 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - self.sr = 16000 - self.window = 160 - if index_rate != 0: - self.index = faiss.read_index(index_path) - # self.big_npy = np.load(npy_path) - self.big_npy = self.index.reconstruct_n(0, self.index.ntotal) - print("index search enabled") - self.index_rate = index_rate - model_path = hubert_path - print("load model(s) from {}".format(model_path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", - ) - self.model = models[0] - self.model = self.model.to(device) - if Config.is_half: - self.model = self.model.half() - else: - self.model = self.model.float() - self.model.eval() - cpt = torch.load(pth_path, map_location="cpu") - self.tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - self.if_f0 = cpt.get("f0", 1) - self.version = cpt.get("version", "v1") - if self.version == "v1": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=Config.is_half - ) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif self.version == "v2": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=Config.is_half - ) - else: - self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del self.net_g.enc_q - print(self.net_g.load_state_dict(cpt["weight"], strict=False)) - self.net_g.eval().to(device) - if Config.is_half: - self.net_g = self.net_g.half() - else: - self.net_g = self.net_g.float() - except: - print(traceback.format_exc()) - - def get_f0(self, x, f0_up_key, inp_f0=None): - x_pad = 1 - f0_min = 
50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0] - f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def infer(self, feats: torch.Tensor) -> np.ndarray: - """ - 推理函数 - """ - audio = feats.clone().cpu().numpy() - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - if Config.is_half: - feats = feats.half() - else: - feats = feats.float() - inputs = { - "source": feats.to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9 if self.version == "v1" else 12, - } - torch.cuda.synchronize() - with torch.no_grad(): - logits = self.model.extract_features(**inputs) - feats = ( - self.model.final_proj(logits[0]) if self.version == "v1" else logits[0] - ) - - ####索引优化 - try: - if ( - hasattr(self, "index") - and hasattr(self, "big_npy") - and self.index_rate != 0 - ): - npy = feats[0].cpu().numpy().astype("float32") - score, ix = self.index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - if Config.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate - + (1 - self.index_rate) * feats - ) - else: - print("index search FAIL or disabled") - except: - traceback.print_exc() - print("index search FAIL") - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - torch.cuda.synchronize() - print(feats.shape) - if self.if_f0 == 1: - pitch, pitchf = self.get_f0(audio, self.f0_up_key) - p_len = min(feats.shape[1], 13000, pitch.shape[0]) # 太大了爆显存 - else: - pitch, pitchf = None, None - p_len = min(feats.shape[1], 13000) # 太大了爆显存 - torch.cuda.synchronize() - # print(feats.shape,pitch.shape) - feats = feats[:, :p_len, :] - if self.if_f0 == 1: - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.LongTensor(pitch).unsqueeze(0).to(device) - pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device) - p_len = torch.LongTensor([p_len]).to(device) - ii = 0 # sid - sid = torch.LongTensor([ii]).to(device) - with torch.no_grad(): - if self.if_f0 == 1: - infered_audio = ( - self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] - .data.cpu() - .float() - ) - else: - infered_audio = ( - self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float() - ) - torch.cuda.synchronize() - return infered_audio - - -class 
GUIConfig: - def __init__(self) -> None: - self.hubert_path: str = "" - self.pth_path: str = "" - self.index_path: str = "" - self.npy_path: str = "" - self.pitch: int = 12 - self.samplerate: int = 44100 - self.block_time: float = 1.0 # s - self.buffer_num: int = 1 - self.threhold: int = -30 - self.crossfade_time: float = 0.08 - self.extra_time: float = 0.04 - self.I_noise_reduce = False - self.O_noise_reduce = False - self.index_rate = 0.3 - - -class GUI: - def __init__(self) -> None: - self.config = GUIConfig() - self.flag_vc = False - - self.launcher() - - def load(self): - ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) = self.get_devices() - try: - with open("values1.json", "r") as j: - data = json.load(j) - except: - with open("values1.json", "w") as j: - data = { - "pth_path": "", - "index_path": "", - "sg_input_device": input_devices[ - input_devices_indices.index(sd.default.device[0]) - ], - "sg_output_device": output_devices[ - output_devices_indices.index(sd.default.device[1]) - ], - "threhold": "-45", - "pitch": "0", - "index_rate": "0", - "block_time": "1", - "crossfade_length": "0.04", - "extra_time": "1", - } - return data - - def launcher(self): - data = self.load() - sg.theme("LightBlue3") - input_devices, output_devices, _, _ = self.get_devices() - layout = [ - [ - sg.Frame( - title=i18n("Load model"), - layout=[ - [ - sg.Input( - default_text="hubert_base.pt", - key="hubert_path", - disabled=True, - ), - sg.FileBrowse( - i18n("Hubert Model"), - initial_folder=os.path.join(os.getcwd()), - file_types=(("pt files", "*.pt"),), - ), - ], - [ - sg.Input( - default_text=data.get("pth_path", ""), - key="pth_path", - ), - sg.FileBrowse( - i18n("Select the .pth file"), - initial_folder=os.path.join(os.getcwd(), "weights"), - file_types=(("weight files", "*.pth"),), - ), - ], - [ - sg.Input( - default_text=data.get("index_path", ""), - key="index_path", - ), - sg.FileBrowse( - i18n("Select the .index file"), - initial_folder=os.path.join(os.getcwd(), "logs"), - file_types=(("index files", "*.index"),), - ), - ], - [ - sg.Input( - default_text="你不需要填写这个You don't need write this.", - key="npy_path", - disabled=True, - ), - sg.FileBrowse( - i18n("Select the .npy file"), - initial_folder=os.path.join(os.getcwd(), "logs"), - file_types=(("feature files", "*.npy"),), - ), - ], - ], - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("Input device")), - sg.Combo( - input_devices, - key="sg_input_device", - default_value=data.get("sg_input_device", ""), - ), - ], - [ - sg.Text(i18n("Output device")), - sg.Combo( - output_devices, - key="sg_output_device", - default_value=data.get("sg_output_device", ""), - ), - ], - ], - title=i18n("Audio device (please use the same type of driver)"), - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("Response threshold")), - sg.Slider( - range=(-60, 0), - key="threhold", - resolution=1, - orientation="h", - default_value=data.get("threhold", ""), - ), - ], - [ - sg.Text(i18n("Pitch settings")), - sg.Slider( - range=(-24, 24), - key="pitch", - resolution=1, - orientation="h", - default_value=data.get("pitch", ""), - ), - ], - [ - sg.Text(i18n("Index Rate")), - sg.Slider( - range=(0.0, 1.0), - key="index_rate", - resolution=0.01, - orientation="h", - default_value=data.get("index_rate", ""), - ), - ], - ], - title=i18n("General settings"), - ), - sg.Frame( - layout=[ - [ - sg.Text(i18n("Sample length")), - sg.Slider( - range=(0.1, 3.0), - key="block_time", - resolution=0.1, - orientation="h", - 
default_value=data.get("block_time", ""), - ), - ], - [ - sg.Text(i18n("Fade length")), - sg.Slider( - range=(0.01, 0.15), - key="crossfade_length", - resolution=0.01, - orientation="h", - default_value=data.get("crossfade_length", ""), - ), - ], - [ - sg.Text(i18n("Extra推理时长")), - sg.Slider( - range=(0.05, 3.00), - key="extra_time", - resolution=0.01, - orientation="h", - default_value=data.get("extra_time", ""), - ), - ], - [ - sg.Checkbox(i18n("Input noise reduction"), key="I_noise_reduce"), - sg.Checkbox(i18n("Output noise reduction"), key="O_noise_reduce"), - ], - ], - title=i18n("Performance settings"), - ), - ], - [ - sg.Button(i18n("开始音频Convert"), key="start_vc"), - sg.Button(i18n("停止音频Convert"), key="stop_vc"), - sg.Text(i18n("Inference time (ms):")), - sg.Text("0", key="infer_time"), - ], - ] - self.window = sg.Window("RVC - GUI", layout=layout) - self.event_handler() - - def event_handler(self): - while True: - event, values = self.window.read() - if event == sg.WINDOW_CLOSED: - self.flag_vc = False - exit() - if event == "start_vc" and self.flag_vc == False: - if self.set_values(values) == True: - print("using_cuda:" + str(torch.cuda.is_available())) - self.start_vc() - settings = { - "pth_path": values["pth_path"], - "index_path": values["index_path"], - "sg_input_device": values["sg_input_device"], - "sg_output_device": values["sg_output_device"], - "threhold": values["threhold"], - "pitch": values["pitch"], - "index_rate": values["index_rate"], - "block_time": values["block_time"], - "crossfade_length": values["crossfade_length"], - "extra_time": values["extra_time"], - } - with open("values1.json", "w") as j: - json.dump(settings, j) - if event == "stop_vc" and self.flag_vc == True: - self.flag_vc = False - - def set_values(self, values): - if len(values["pth_path"].strip()) == 0: - sg.popup(i18n("Select the pth file")) - return False - if len(values["index_path"].strip()) == 0: - sg.popup(i18n("Select the index file")) - return False - pattern = re.compile("[^\x00-\x7F]+") - if pattern.findall(values["hubert_path"]): - sg.popup(i18n("The hubert model path must not contain Chinese characters")) - return False - if pattern.findall(values["pth_path"]): - sg.popup(i18n("The pth file path must not contain Chinese characters.")) - return False - if pattern.findall(values["index_path"]): - sg.popup(i18n("The index file path must not contain Chinese characters.")) - return False - self.set_devices(values["sg_input_device"], values["sg_output_device"]) - self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt") - self.config.pth_path = values["pth_path"] - self.config.index_path = values["index_path"] - self.config.npy_path = values["npy_path"] - self.config.threhold = values["threhold"] - self.config.pitch = values["pitch"] - self.config.block_time = values["block_time"] - self.config.crossfade_time = values["crossfade_length"] - self.config.extra_time = values["extra_time"] - self.config.I_noise_reduce = values["I_noise_reduce"] - self.config.O_noise_reduce = values["O_noise_reduce"] - self.config.index_rate = values["index_rate"] - return True - - def start_vc(self): - torch.cuda.empty_cache() - self.flag_vc = True - self.block_frame = int(self.config.block_time * self.config.samplerate) - self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate) - self.sola_search_frame = int(0.012 * self.config.samplerate) - self.delay_frame = int(0.01 * self.config.samplerate) # 往前预留0.02s - self.extra_frame = int(self.config.extra_time * 
self.config.samplerate) - self.rvc = None - self.rvc = RVC( - self.config.pitch, - self.config.hubert_path, - self.config.pth_path, - self.config.index_path, - self.config.npy_path, - self.config.index_rate, - ) - self.input_wav: np.ndarray = np.zeros( - self.extra_frame - + self.crossfade_frame - + self.sola_search_frame - + self.block_frame, - dtype="float32", - ) - self.output_wav: torch.Tensor = torch.zeros( - self.block_frame, device=device, dtype=torch.float32 - ) - self.sola_buffer: torch.Tensor = torch.zeros( - self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_in_window: torch.Tensor = torch.linspace( - 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_out_window: torch.Tensor = 1 - self.fade_in_window - self.resampler1 = tat.Resample( - orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32 - ) - self.resampler2 = tat.Resample( - orig_freq=self.rvc.tgt_sr, - new_freq=self.config.samplerate, - dtype=torch.float32, - ) - thread_vc = threading.Thread(target=self.soundinput) - thread_vc.start() - - def soundinput(self): - """ - 接受音频输入 - """ - with sd.Stream( - channels=2, - callback=self.audio_callback, - blocksize=self.block_frame, - samplerate=self.config.samplerate, - dtype="float32", - ): - while self.flag_vc: - time.sleep(self.config.block_time) - print("Audio block passed.") - print("ENDing VC") - - def audio_callback( - self, indata: np.ndarray, outdata: np.ndarray, frames, times, status - ): - """ - 音频处理 - """ - start_time = time.perf_counter() - indata = librosa.to_mono(indata.T) - if self.config.I_noise_reduce: - indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate) - - """noise gate""" - frame_length = 2048 - hop_length = 1024 - rms = librosa.feature.rms( - y=indata, frame_length=frame_length, hop_length=hop_length - ) - db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold - # print(rms.shape,db.shape,db) - for i in range(db_threhold.shape[0]): - if db_threhold[i]: - indata[i * hop_length : (i + 1) * hop_length] = 0 - self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata) - - # infer - print("input_wav:" + str(self.input_wav.shape)) - # print('infered_wav:'+str(infer_wav.shape)) - infer_wav: torch.Tensor = self.resampler2( - self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav))) - )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to( - device - ) - print("infer_wav:" + str(infer_wav.shape)) - - # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC - cor_nom = F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame], - self.sola_buffer[None, None, :], - ) - cor_den = torch.sqrt( - F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame] - ** 2, - torch.ones(1, 1, self.crossfade_frame, device=device), - ) - + 1e-8 - ) - sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0]) - print("sola offset: " + str(int(sola_offset))) - - # crossfade - self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame] - self.output_wav[: self.crossfade_frame] *= self.fade_in_window - self.output_wav[: self.crossfade_frame] += self.sola_buffer[:] - if sola_offset < self.sola_search_frame: - self.sola_buffer[:] = ( - infer_wav[ - -self.sola_search_frame - - self.crossfade_frame - + sola_offset : -self.sola_search_frame - + sola_offset - ] - * self.fade_out_window - ) - else: - self.sola_buffer[:] = ( - infer_wav[-self.crossfade_frame :] * self.fade_out_window - ) 
- - if self.config.O_noise_reduce: - outdata[:] = np.tile( - nr.reduce_noise( - y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate - ), - (2, 1), - ).T - else: - outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy() - total_time = time.perf_counter() - start_time - self.window["infer_time"].update(int(total_time * 1000)) - print("infer time:" + str(total_time)) - - def get_devices(self, update: bool = True): - """获取设备列表""" - if update: - sd._terminate() - sd._initialize() - devices = sd.query_devices() - hostapis = sd.query_hostapis() - for hostapi in hostapis: - for device_idx in hostapi["devices"]: - devices[device_idx]["hostapi_name"] = hostapi["name"] - input_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_output_channels"] > 0 - ] - input_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_output_channels"] > 0 - ] - return ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) - - def set_devices(self, input_device, output_device): - """设置输出设备""" - ( - input_devices, - output_devices, - input_device_indices, - output_device_indices, - ) = self.get_devices() - sd.default.device[0] = input_device_indices[input_devices.index(input_device)] - sd.default.device[1] = output_device_indices[ - output_devices.index(output_device) - ] - print("input device:" + str(sd.default.device[0]) + ":" + str(input_device)) - print("output device:" + str(sd.default.device[1]) + ":" + str(output_device)) - - -gui = GUI() diff --git a/spaces/Libra7578/Image-to-video/app.py b/spaces/Libra7578/Image-to-video/app.py deleted file mode 100644 index 27939ccec58d8caa167c54383e5d7fbdb5114ad3..0000000000000000000000000000000000000000 --- a/spaces/Libra7578/Image-to-video/app.py +++ /dev/null @@ -1,17 +0,0 @@ -from huggingface_hub import snapshot_download - -from modelscope.pipelines import pipeline -from modelscope.outputs import OutputKeys -import pathlib - -model_dir = pathlib.Path('weights') -snapshot_download('damo-vilab/modelscope-damo-text-to-video-synthesis', - repo_type='model', local_dir=model_dir) - -pipe = pipeline('text-to-video-synthesis', model_dir.as_posix()) -test_text = { - 'text': 'A panda eating bamboo on a rock.', - } -output_video_path = pipe(test_text,)[OutputKeys.OUTPUT_VIDEO] -print('output_video_path:', output_video_path) - diff --git "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" "b/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" deleted file mode 100644 index fc65f5c8e9337c91ae34d6ec1a98ad1accc1ea0c..0000000000000000000000000000000000000000 --- "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" +++ /dev/null @@ -1,149 +0,0 @@ -from toolbox import update_ui, promote_file_to_downloadzone, gen_time_str -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -from .crazy_utils import read_and_clean_pdf_text -from .crazy_utils import input_clipping - - - -def 解析PDF(file_manifest, project_folder, 
llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - file_write_buffer = [] - for file_name in file_manifest: - print('begin analysis on:', file_name) - ############################## <第 0 步,切割PDF> ################################## - # 递归地切割PDF文件,每一块(尽量是完整的一个section,比如introduction,experiment等,必要时再进行切割) - # 的长度必须小于 2500 个 Token - file_content, page_one = read_and_clean_pdf_text(file_name) # (尝试)按照章节切割PDF - file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars - page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars - - TOKEN_LIMIT_PER_FRAGMENT = 2500 - - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT) - page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4) - # 为了更好的效果,我们剥离Introduction之后的部分(如果有) - paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0] - - ############################## <第 1 步,从摘要中提取高价值信息,放到history中> ################################## - final_results = [] - final_results.append(paper_meta) - - ############################## <第 2 步,迭代地历遍整个文章,提取精炼信息> ################################## - i_say_show_user = f'首先你在中文语境下通读整篇论文。'; gpt_say = "[Local Message] 收到。" # 用户提示 - chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[]) # 更新UI - - iteration_results = [] - last_iteration_result = paper_meta # 初始值是摘要 - MAX_WORD_TOTAL = 4096 * 0.7 - n_fragment = len(paper_fragments) - if n_fragment >= 20: print('文章极长,不能达到预期效果') - for i in range(n_fragment): - NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment - i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i]}" - i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i][:200]}" - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问 - llm_kwargs, chatbot, - history=["The main idea of the previous section is?", last_iteration_result], # 迭代上一次的结果 - sys_prompt="Extract the main idea of this section with Chinese." # 提示 - ) - iteration_results.append(gpt_say) - last_iteration_result = gpt_say - - ############################## <第 3 步,整理history,提取总结> ################################## - final_results.extend(iteration_results) - final_results.append(f'Please conclude this paper discussed above。') - # This prompt is from https://github.com/kaixindelele/ChatPaper/blob/main/chat_paper.py - NUM_OF_WORD = 1000 - i_say = """ -1. Mark the title of the paper (with Chinese translation) -2. list all the authors' names (use English) -3. mark the first author's affiliation (output Chinese translation only) -4. mark the keywords of this article (use English) -5. link to the paper, Github code link (if available, fill in Github:None if not) -6. 
summarize according to the following four points.Be sure to use Chinese answers (proper nouns need to be marked in English) - - (1):What is the research background of this article? - - (2):What are the past methods? What are the problems with them? Is the approach well motivated? - - (3):What is the research methodology proposed in this paper? - - (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals? -Follow the format of the output that follows: -1. Title: xxx\n\n -2. Authors: xxx\n\n -3. Affiliation: xxx\n\n -4. Keywords: xxx\n\n -5. Urls: xxx or xxx , xxx \n\n -6. Summary: \n\n - - (1):xxx;\n - - (2):xxx;\n - - (3):xxx;\n - - (4):xxx.\n\n -Be sure to use Chinese answers (proper nouns need to be marked in English), statements as concise and academic as possible, -do not have too much repetitive information, numerical values using the original numbers. - """ - # This prompt is from https://github.com/kaixindelele/ChatPaper/blob/main/chat_paper.py - file_write_buffer.extend(final_results) - i_say, final_results = input_clipping(i_say, final_results, max_token_limit=2000) - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, inputs_show_user='开始最终总结', - llm_kwargs=llm_kwargs, chatbot=chatbot, history=final_results, - sys_prompt= f"Extract the main idea of this paper with less than {NUM_OF_WORD} Chinese characters" - ) - final_results.append(gpt_say) - file_write_buffer.extend([i_say, gpt_say]) - ############################## <第 4 步,设置一个token上限> ################################## - _, final_results = input_clipping("", final_results, max_token_limit=3200) - yield from update_ui(chatbot=chatbot, history=final_results) # 注意这里的历史记录被替代了 - - res = write_results_to_file(file_write_buffer, file_name=gen_time_str()) - promote_file_to_downloadzone(res.split('\t')[-1], chatbot=chatbot) - yield from update_ui(chatbot=chatbot, history=final_results) # 刷新界面 - - -@CatchException -def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结PDF文档。函数插件贡献者: ValeriaWong,Eralien"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import fitz - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或.pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/MUmairAB/English-to-French/README.md b/spaces/MUmairAB/English-to-French/README.md deleted file mode 100644 index 4d2749f56236bf33ce8d0ecc812daaf46061ed42..0000000000000000000000000000000000000000 --- a/spaces/MUmairAB/English-to-French/README.md +++ /dev/null 
@@ -1,12 +0,0 @@ ---- -title: English To French -emoji: 🏢 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MVV/3dTopDenoising/models/__init__.py b/spaces/MVV/3dTopDenoising/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Mahiruoshi/BangDream-Bert-VITS2/bert_gen.py b/spaces/Mahiruoshi/BangDream-Bert-VITS2/bert_gen.py deleted file mode 100644 index 25cd7d97bafa02c514d0e1a34621546eac10da53..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/BangDream-Bert-VITS2/bert_gen.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch -from multiprocessing import Pool -import commons -import utils -from tqdm import tqdm -from text import cleaned_text_to_sequence, get_bert -import argparse -import torch.multiprocessing as mp - - -def process_line(line): - rank = mp.current_process()._identity - rank = rank[0] if len(rank) > 0 else 0 - if torch.cuda.is_available(): - gpu_id = rank % torch.cuda.device_count() - device = torch.device(f"cuda:{gpu_id}") - wav_path, _, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - - bert_path = wav_path.replace(".wav", ".bert.pt") - - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except Exception: - bert = get_bert(text, word2ph, language_str, device) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("-c", "--config", type=str, default="configs/config.json") - parser.add_argument("--num_processes", type=int, default=2) - args = parser.parse_args() - config_path = args.config - hps = utils.get_hparams_from_file(config_path) - lines = [] - with open(hps.data.training_files, encoding="utf-8") as f: - lines.extend(f.readlines()) - - with open(hps.data.validation_files, encoding="utf-8") as f: - lines.extend(f.readlines()) - - num_processes = args.num_processes - with Pool(processes=num_processes) as pool: - for _ in tqdm(pool.imap_unordered(process_line, lines), total=len(lines)): - pass diff --git a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/app.py b/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/app.py deleted file mode 100644 index 81d4a717300e3ae7151392717786d1916be0a8c6..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/app.py +++ /dev/null @@ -1,162 +0,0 @@ -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) -logging.getLogger('urllib3').setLevel(logging.WARNING) -from text import text_to_sequence -import numpy as np -from scipy.io import wavfile -import torch -import json -import commons -import utils -import sys -import pathlib -import onnxruntime as ort -import gradio as gr -import argparse -import time -import os -from scipy.io.wavfile import write - -def is_japanese(string): - for ch 
in string: - if ord(ch) > 0x3040 and ord(ch) < 0x30FF: - return True - return False - -def is_english(string): - import re - pattern = re.compile('^[A-Za-z0-9.,:;!?()_*"\' ]+$') - if pattern.fullmatch(string): - return True - else: - return False - -def to_numpy(tensor: torch.Tensor): - return tensor.detach().cpu().numpy() if tensor.requires_grad \ - else tensor.detach().numpy() - -def get_symbols_from_json(path): - assert os.path.isfile(path) - with open(path, 'r') as f: - data = json.load(f) - return data['symbols'] - -def sle(language,text): - text = text.replace('\n','。').replace(' ',',') - if language == "中文": - tts_input1 = "[ZH]" + text + "[ZH]" - return tts_input1 - elif language == "自动": - tts_input1 = f"[JA]{text}[JA]" if is_japanese(text) else f"[ZH]{text}[ZH]" - return tts_input1 - elif language == "日文": - tts_input1 = "[JA]" + text + "[JA]" - return tts_input1 - elif language == "英文": - tts_input1 = "[EN]" + text + "[EN]" - return tts_input1 - elif language == "手动": - return text - -def get_text(text,hps_ms): - text_norm = text_to_sequence(text,hps_ms.symbols,hps_ms.data.text_cleaners) - if hps_ms.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - -def create_tts_fn(ort_sess, speaker_id): - def tts_fn(text , language, n_scale= 0.667,n_scale_w = 0.8, l_scale = 1 ): - text =sle(language,text) - seq = text_to_sequence(text,hps.symbols, cleaner_names=hps.data.text_cleaners) - if hps.data.add_blank: - seq = commons.intersperse(seq, 0) - with torch.no_grad(): - x = np.array([seq], dtype=np.int64) - x_len = np.array([x.shape[1]], dtype=np.int64) - sid = np.array([speaker_id], dtype=np.int64) - scales = np.array([n_scale, n_scale_w, l_scale], dtype=np.float32) - scales.resize(1, 3) - ort_inputs = { - 'input': x, - 'input_lengths': x_len, - 'scales': scales, - 'sid': sid - } - t1 = time.time() - audio = np.squeeze(ort_sess.run(None, ort_inputs)) - audio *= 32767.0 / max(0.01, np.max(np.abs(audio))) * 0.6 - audio = np.clip(audio, -32767.0, 32767.0) - t2 = time.time() - spending_time = "推理时间:"+str(t2-t1)+"s" - print(spending_time) - return (hps.data.sampling_rate, audio) - return tts_fn - - -if __name__ == '__main__': - symbols = get_symbols_from_json('checkpoints/ShojoKageki/config.json') - hps = utils.get_hparams_from_file('checkpoints/ShojoKageki/config.json') - device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - models = [] - schools = ["ShojoKageki-Nijigasaki","ShojoKageki","Nijigasaki"] - lan = ["中文","日文","自动","手动"] - with open("checkpoints/info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for i in models_info: - school = models_info[i] - speakers = school["speakers"] - checkpoint = school["checkpoint"] - phone_dict = { - symbol: i for i, symbol in enumerate(symbols) - } - ort_sess = ort.InferenceSession(checkpoint) - content = [] - for j in speakers: - sid = int(speakers[j]['sid']) - title = school - example = speakers[j]['speech'] - name = speakers[j]["name"] - content.append((sid, name, title, example, create_tts_fn(ort_sess, sid))) - models.append(content) - - with gr.Blocks() as app: - gr.Markdown( - "#
vits-models\n" - ) - with gr.Tabs(): - for i in schools: - with gr.TabItem(i): - for (sid, name, title, example, tts_fn) in models[schools.index(i)]: - with gr.TabItem(name): - ''' - with gr.Row(): - gr.Markdown( - '
' - f'{name}' - f'' - '
' - ) - ''' - with gr.Row(): - with gr.Column(): - with gr.Row(): - with gr.Column(): - gr.Markdown( - '
' - f'{name}' - f'' - '
' - ) - input2 = gr.Dropdown(label="Language", choices=lan, value="自动", interactive=True) - with gr.Column(): - input1 = gr.TextArea(label="Text", value=example) - input4 = gr.Slider(minimum=0, maximum=1.0, label="更改噪声比例(noise scale),以控制情感", value=0.667) - input5 = gr.Slider(minimum=0, maximum=1.0, label="更改噪声偏差(noise scale w),以控制音素长短", value=0.8) - input6 = gr.Slider(minimum=0.1, maximum=10, label="duration", value=1) - btnVC = gr.Button("Submit") - output1 = gr.Audio(label="采样率22050") - - btnVC.click(tts_fn, inputs=[input1, input2, input4, input5, input6], outputs=[output1]) - app.launch() diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/blip_inference.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/blip_inference.py deleted file mode 100644 index a5f0114a9e79e4ffc9c1e8e940cffee12861006a..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/blip_inference.py +++ /dev/null @@ -1,75 +0,0 @@ -from PIL import Image -from transformers import Blip2Processor, Blip2ForConditionalGeneration - -import torch -import os -from glob import glob -import argparse -from glob import glob - -from BLIP2.blip_video_model import Blip2ForVideoConditionalGeneration as Blip2ForConditionalGeneration - -from termcolor import colored, cprint - -parser = argparse.ArgumentParser() -parser.add_argument("-d", "--data_root", type=str, required=True) -parser.add_argument("-fn" , "--frame_num", type=int, default=8) -parser.add_argument("-fps" , "--frame_rate", type=int, default=1) -args = parser.parse_args() - - -device = "cuda" if torch.cuda.is_available() else "cpu" - -# Salesforce/blip2-flan-t5-xxl -# Salesforce/blip2-opt-6.7b -blip2_version = "Salesforce/blip2-flan-t5-xl" -# blip2_version = "Salesforce/blip2-opt-6.7b" - -weight_dtype = torch.bfloat16 if "flan" in blip2_version else torch.float16 -# weight_dtype = torch.float16 - -processor = Blip2Processor.from_pretrained(blip2_version) -model = Blip2ForConditionalGeneration.from_pretrained( - blip2_version, torch_dtype=weight_dtype -) -model.to(device) - - -if not os.path.isdir(args.data_root): - image_list = [args.data_root] -else: - # ipdb.set_trace() - all_image_list = sorted(glob(os.path.join(args.data_root, "*.jpg"))) + sorted(glob(os.path.join(args.data_root, "*.png"))) - image_list = [all_image_list[f] for f in range(0, args.frame_num*args.frame_rate, args.frame_rate)] - assert len(image_list) == args.frame_num - - -images = [] -for image_path in image_list: - image = Image.open(image_path).convert("RGB") - images.append(image) - -def blip2_call(prompt=None, max_new_tokens=20): - inputs = processor(images, text=prompt, return_tensors="pt").to(device, weight_dtype) - generated_ids = model.generate(**inputs, max_new_tokens=max_new_tokens) - generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() - if prompt is not None: - cprint(prompt, "red") - else: - cprint("No prompt", "red") - - print(generated_text) - - -## prompt captioning -prompt = "this is a video of" - -print("Captioning") -blip2_call(prompt, 20) - - -prompt = "Question: what is the protagonist in this video? 
Answer: " - -blip2_call(prompt, 10) - - diff --git a/spaces/Marne/MockingBird/mockingbirdforuse/__init__.py b/spaces/Marne/MockingBird/mockingbirdforuse/__init__.py deleted file mode 100644 index 339b6b696e3657372501f54d29cf2aca84d67d84..0000000000000000000000000000000000000000 --- a/spaces/Marne/MockingBird/mockingbirdforuse/__init__.py +++ /dev/null @@ -1,120 +0,0 @@ -import re -import librosa -import numpy as np -from io import BytesIO -from pathlib import Path -from scipy.io import wavfile -from typing import List, Literal, Optional - -from .encoder.inference import Encoder, preprocess_wav -from .synthesizer.inference import Synthesizer -from .vocoder.hifigan.inference import HifiGanVocoder -from .vocoder.wavernn.inference import WaveRNNVocoder -from .log import logger - - -def process_text(text: str) -> List[str]: - punctuation = "!,。、,?!," # punctuate and split/clean text - processed_texts = [] - text = re.sub(r"[{}]+".format(punctuation), "\n", text) - for processed_text in text.split("\n"): - if processed_text: - processed_texts.append(processed_text.strip()) - return processed_texts - - -class MockingBird: - def __init__(self): - self.encoder: Optional[Encoder] = None - self.gan_vocoder: Optional[HifiGanVocoder] = None - self.rnn_vocoder: Optional[WaveRNNVocoder] = None - self.synthesizer: Optional[Synthesizer] = None - - def load_model( - self, - encoder_path: Path, - gan_vocoder_path: Optional[Path] = None, - rnn_vocoder_path: Optional[Path] = None, - ): - """ - 设置 Encoder模型 和 Vocoder模型 路径 - - Args: - encoder_path (Path): Encoder模型路径 - gan_vocoder_path (Path): HifiGan Vocoder模型路径,可选,需要用到 HifiGan 类型时必须填写 - rnn_vocoder_path (Path): WaveRNN Vocoder模型路径,可选,需要用到 WaveRNN 类型时必须填写 - """ - self.encoder = Encoder(encoder_path) - if gan_vocoder_path: - self.gan_vocoder = HifiGanVocoder(gan_vocoder_path) - if rnn_vocoder_path: - self.rnn_vocoder = WaveRNNVocoder(rnn_vocoder_path) - - def set_synthesizer(self, synthesizer_path: Path): - """ - 设置Synthesizer模型路径 - - Args: - synthesizer_path (Path): Synthesizer模型路径 - """ - self.synthesizer = Synthesizer(synthesizer_path) - logger.info(f"using synthesizer model: {synthesizer_path}") - - def synthesize( - self, - text: str, - input_wav: Path, - vocoder_type: Literal["HifiGan", "WaveRNN"] = "HifiGan", - style_idx: int = 0, - min_stop_token: int = 5, - steps: int = 1000, - ) -> BytesIO: - """ - 生成语音 - - Args: - text (str): 目标文字 - input_wav (Path): 目标录音路径 - vocoder_type (HifiGan / WaveRNN): Vocoder模型,默认使用HifiGan - style_idx (int, optional): Style 范围 -1~9,默认为 0 - min_stop_token (int, optional): Accuracy(精度) 范围3~9,默认为 5 - steps (int, optional): MaxLength(最大句长) 范围200~2000,默认为 1000 - """ - if not self.encoder: - raise Exception("Please set encoder path first") - - if not self.synthesizer: - raise Exception("Please set synthesizer path first") - - # Load input wav - wav, sample_rate = librosa.load(input_wav) - - encoder_wav = preprocess_wav(wav, sample_rate) - embed, _, _ = self.encoder.embed_utterance(encoder_wav, return_partials=True) - - # Load input text - texts = process_text(text) - - # synthesize and vocode - embeds = [embed] * len(texts) - specs = self.synthesizer.synthesize_spectrograms( - texts, - embeds, - style_idx=style_idx, - min_stop_token=min_stop_token, - steps=steps, - ) - spec = np.concatenate(specs, axis=1) - if vocoder_type == "WaveRNN": - if not self.rnn_vocoder: - raise Exception("Please set wavernn vocoder path first") - wav, sample_rate = self.rnn_vocoder.infer_waveform(spec) - else: - if not self.gan_vocoder: - raise 
Exception("Please set hifigan vocoder path first") - wav, sample_rate = self.gan_vocoder.infer_waveform(spec) - - # Return cooked wav - out = BytesIO() - wavfile.write(out, sample_rate, wav.astype(np.float32)) - return out diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/modules/modules.py b/spaces/MashiroSA/sovits-emu-voice-transform/modules/modules.py deleted file mode 100644 index 54290fd207b25e93831bd21005990ea137e6b50e..0000000000000000000000000000000000000000 --- a/spaces/MashiroSA/sovits-emu-voice-transform/modules/modules.py +++ /dev/null @@ -1,342 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import modules.commons as commons -from modules.commons import init_weights, get_padding - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in 
range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - 
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not 
reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/resnet.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/resnet.py deleted file mode 100644 index 1cb3ac057ee2d52c46fc94685b5d4e698aad8d5f..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/resnet.py +++ /dev/null @@ -1,316 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn -import torch.utils.checkpoint as cp - -from .utils import constant_init, kaiming_init - - -def conv3x3(in_planes, out_planes, stride=1, dilation=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - super(BasicBlock, self).__init__() - assert style in ['pytorch', 'caffe'] - self.conv1 = conv3x3(inplanes, planes, stride, dilation) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - assert not with_cp - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - """Bottleneck block. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - super(Bottleneck, self).__init__() - assert style in ['pytorch', 'caffe'] - if style == 'pytorch': - conv1_stride = 1 - conv2_stride = stride - else: - conv1_stride = stride - conv2_stride = 1 - self.conv1 = nn.Conv2d( - inplanes, planes, kernel_size=1, stride=conv1_stride, bias=False) - self.conv2 = nn.Conv2d( - planes, - planes, - kernel_size=3, - stride=conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.bn1 = nn.BatchNorm2d(planes) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d( - planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - def forward(self, x): - - def _inner_forward(x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -def make_res_layer(block, - inplanes, - planes, - blocks, - stride=1, - dilation=1, - style='pytorch', - with_cp=False): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append( - block( - inplanes, - planes, - stride, - dilation, - downsample, - style=style, - with_cp=with_cp)) - inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block(inplanes, planes, 1, dilation, style=style, with_cp=with_cp)) - - return nn.Sequential(*layers) - - -class ResNet(nn.Module): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - num_stages (int): Resnet stages, normally 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - frozen_stages=-1, - bn_eval=True, - bn_frozen=False, - with_cp=False): - super(ResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - assert num_stages >= 1 and num_stages <= 4 - block, stage_blocks = self.arch_settings[depth] - stage_blocks = stage_blocks[:num_stages] - assert len(strides) == len(dilations) == num_stages - assert max(out_indices) < num_stages - - self.out_indices = out_indices - self.style = style - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - self.with_cp = with_cp - - self.inplanes = 64 - self.conv1 = nn.Conv2d( - 3, 64, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - self.res_layers = [] - for i, num_blocks in enumerate(stage_blocks): - stride = strides[i] - dilation = dilations[i] - planes = 64 * 2**i - res_layer = make_res_layer( - block, - self.inplanes, - planes, - num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - with_cp=with_cp) - self.inplanes = planes * block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self.feat_dim = block.expansion * 64 * 2**(len(stage_blocks) - 1) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode=True): - super(ResNet, self).train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - if mode and self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for param in self.bn1.parameters(): - param.requires_grad = False - self.bn1.eval() - self.bn1.weight.requires_grad = False - self.bn1.bias.requires_grad = False - for i in range(1, self.frozen_stages + 1): - mod = getattr(self, f'layer{i}') - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/seg/sampler/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/seg/sampler/__init__.py deleted file mode 100644 index 332b242c03d1c5e80d4577df442a9a037b1816e1..0000000000000000000000000000000000000000 --- 
a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/seg/sampler/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .base_pixel_sampler import BasePixelSampler -from .ohem_pixel_sampler import OHEMPixelSampler - -__all__ = ['BasePixelSampler', 'OHEMPixelSampler'] diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/loading.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/loading.py deleted file mode 100644 index d3692ae91f19b9c7ccf6023168788ff42c9e93e3..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/loading.py +++ /dev/null @@ -1,153 +0,0 @@ -import os.path as osp - -import annotator.uniformer.mmcv as mmcv -import numpy as np - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class LoadImageFromFile(object): - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: - 'cv2' - """ - - def __init__(self, - to_float32=False, - color_type='color', - file_client_args=dict(backend='disk'), - imdecode_backend='cv2'): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('img_prefix') is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes( - img_bytes, flag=self.color_type, backend=self.imdecode_backend) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(to_float32={self.to_float32},' - repr_str += f"color_type='{self.color_type}'," - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations(object): - """Load annotations for semantic segmentation. - - Args: - reduce_zero_label (bool): Whether reduce all label value by 1. - Usually used for datasets where 0 is background label. - Default: False. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: - 'pillow' - """ - - def __init__(self, - reduce_zero_label=False, - file_client_args=dict(backend='disk'), - imdecode_backend='pillow'): - self.reduce_zero_label = reduce_zero_label - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('seg_prefix', None) is not None: - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - else: - filename = results['ann_info']['seg_map'] - img_bytes = self.file_client.get(filename) - gt_semantic_seg = mmcv.imfrombytes( - img_bytes, flag='unchanged', - backend=self.imdecode_backend).squeeze().astype(np.uint8) - # modify if custom classes - if results.get('label_map', None) is not None: - for old_id, new_id in results['label_map'].items(): - gt_semantic_seg[gt_semantic_seg == old_id] = new_id - # reduce zero_label - if self.reduce_zero_label: - # avoid using underflow conversion - gt_semantic_seg[gt_semantic_seg == 0] = 255 - gt_semantic_seg = gt_semantic_seg - 1 - gt_semantic_seg[gt_semantic_seg == 254] = 255 - results['gt_semantic_seg'] = gt_semantic_seg - results['seg_fields'].append('gt_semantic_seg') - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(reduce_zero_label={self.reduce_zero_label},' - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str diff --git a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/vl_utils.py b/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/vl_utils.py deleted file mode 100644 index c91bb02f584398f08a28e6b7719e2b99f6e28616..0000000000000000000000000000000000000000 --- a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/vl_utils.py +++ /dev/null @@ -1,100 +0,0 @@ -import os -import random -from typing import List - -import torch - - -def create_positive_map_from_span(tokenized, token_span, max_text_len=256): - """construct a map such that positive_map[i,j] = True iff box i is associated to token j - Input: - - tokenized: - - input_ids: Tensor[1, ntokens] - - attention_mask: Tensor[1, ntokens] - - token_span: list with length num_boxes. - - each item: [start_idx, end_idx] - """ - positive_map = torch.zeros((len(token_span), max_text_len), dtype=torch.float) - for j, tok_list in enumerate(token_span): - for (beg, end) in tok_list: - beg_pos = tokenized.char_to_token(beg) - end_pos = tokenized.char_to_token(end - 1) - if beg_pos is None: - try: - beg_pos = tokenized.char_to_token(beg + 1) - if beg_pos is None: - beg_pos = tokenized.char_to_token(beg + 2) - except: - beg_pos = None - if end_pos is None: - try: - end_pos = tokenized.char_to_token(end - 2) - if end_pos is None: - end_pos = tokenized.char_to_token(end - 3) - except: - end_pos = None - if beg_pos is None or end_pos is None: - continue - - assert beg_pos is not None and end_pos is not None - if os.environ.get("SHILONG_DEBUG_ONLY_ONE_POS", None) == "TRUE": - positive_map[j, beg_pos] = 1 - break - else: - positive_map[j, beg_pos : end_pos + 1].fill_(1) - - return positive_map / (positive_map.sum(-1)[:, None] + 1e-6) - - -def build_captions_and_token_span(cat_list, force_lowercase): - """ - Return: - captions: str - cat2tokenspan: dict - { - 'dog': [[0, 2]], - ... 
- } - """ - - cat2tokenspan = {} - captions = "" - for catname in cat_list: - class_name = catname - if force_lowercase: - class_name = class_name.lower() - if "/" in class_name: - class_name_list: List = class_name.strip().split("/") - class_name_list.append(class_name) - class_name: str = random.choice(class_name_list) - - tokens_positive_i = [] - subnamelist = [i.strip() for i in class_name.strip().split(" ")] - for subname in subnamelist: - if len(subname) == 0: - continue - if len(captions) > 0: - captions = captions + " " - strat_idx = len(captions) - end_idx = strat_idx + len(subname) - tokens_positive_i.append([strat_idx, end_idx]) - captions = captions + subname - - if len(tokens_positive_i) > 0: - captions = captions + " ." - cat2tokenspan[class_name] = tokens_positive_i - - return captions, cat2tokenspan - - -def build_id2posspan_and_caption(category_dict: dict): - """Build id2pos_span and caption from category_dict - - Args: - category_dict (dict): category_dict - """ - cat_list = [item["name"].lower() for item in category_dict] - id2catname = {item["id"]: item["name"].lower() for item in category_dict} - caption, cat2posspan = build_captions_and_token_span(cat_list, force_lowercase=True) - id2posspan = {catid: cat2posspan[catname] for catid, catname in id2catname.items()} - return id2posspan, caption diff --git a/spaces/Miuzarte/SUI-svc-3.0/preprocess_hubert_f0.py b/spaces/Miuzarte/SUI-svc-3.0/preprocess_hubert_f0.py deleted file mode 100644 index 9a770d6bba5f93c850f35327ac18a400ce8d50fc..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-3.0/preprocess_hubert_f0.py +++ /dev/null @@ -1,129 +0,0 @@ -import os -import argparse - -import torch -import json -from glob import glob - -from pyworld import pyworld -from tqdm import tqdm -from scipy.io import wavfile - -import utils -from mel_processing import mel_spectrogram_torch -#import h5py -import logging -logging.getLogger('numba').setLevel(logging.WARNING) - -import parselmouth -import librosa -import numpy as np -def stft(y): - return librosa.stft( - y=y, - n_fft=1280, - hop_length=320, - win_length=1280, - ) - -def energy(y): - # Extract energy - S = librosa.magphase(stft(y))[0] - e = np.sqrt(np.sum(S ** 2, axis=0)) # np.linalg.norm(S, axis=0) - return e.squeeze() # (Number of frames) => (654,) - -def get_energy(path, p_len=None): - wav, sr = librosa.load(path, 48000) - e = energy(wav) - if p_len is None: - p_len = wav.shape[0] // 320 - assert e.shape[0] -p_len <2 ,(e.shape[0] ,p_len) - e = e[: p_len] - return e - - - -def get_f0(path,p_len=None, f0_up_key=0): - x, _ = librosa.load(path, 48000) - if p_len is None: - p_len = x.shape[0]//320 - else: - assert abs(p_len-x.shape[0]//320) < 3, (path, p_len, x.shape) - time_step = 320 / 48000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = parselmouth.Sound(x, 48000).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - - f0bak = f0.copy() - f0 *= pow(2, f0_up_key / 12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak - 
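# A small standalone sketch of the f0 -> coarse-bin mapping used in get_f0() above, with a
# worked example. `f0_to_coarse` is an illustrative name; the constants mirror the 50-1100 Hz
# range and 255 bins hard-coded in get_f0().
import numpy as np

def f0_to_coarse(f0, f0_min=50.0, f0_max=1100.0, n_bins=255):
    f0 = np.asarray(f0, dtype=np.float64)
    mel_min = 1127 * np.log(1 + f0_min / 700)
    mel_max = 1127 * np.log(1 + f0_max / 700)
    mel = 1127 * np.log(1 + f0 / 700)
    # Rescale voiced frames onto bins 1..n_bins on a mel-like log scale;
    # unvoiced frames (f0 == 0) also land on bin 1 after clipping.
    mel[mel > 0] = (mel[mel > 0] - mel_min) * (n_bins - 1) / (mel_max - mel_min) + 1
    mel = np.clip(mel, 1, n_bins)
    return np.rint(mel).astype(np.int64)

# e.g. f0_to_coarse([0.0, 220.0, 440.0]) -> array([1, 60, 122]) (approximately)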
-def resize2d(x, target_len): - source = np.array(x) - source[source<0.001] = np.nan - target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source) - res = np.nan_to_num(target) - return res - -def compute_f0(path, c_len): - x, sr = librosa.load(path, sr=48000) - f0, t = pyworld.dio( - x.astype(np.double), - fs=sr, - f0_ceil=800, - frame_period=1000 * 320 / sr, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, 48000) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - assert abs(c_len - x.shape[0]//320) < 3, (c_len, f0.shape) - - return None, resize2d(f0, c_len) - - -def process(filename): - print(filename) - save_name = filename+".soft.pt" - if not os.path.exists(save_name): - devive = torch.device("cuda" if torch.cuda.is_available() else "cpu") - wav, _ = librosa.load(filename, sr=16000) - wav = torch.from_numpy(wav).unsqueeze(0).to(devive) - c = utils.get_hubert_content(hmodel, wav) - torch.save(c.cpu(), save_name) - else: - c = torch.load(save_name) - f0path = filename+".f0.npy" - if not os.path.exists(f0path): - cf0, f0 = compute_f0(filename, c.shape[-1] * 3) - np.save(f0path, f0) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--in_dir", type=str, default="dataset/48k", help="path to input dir") - args = parser.parse_args() - - print("Loading hubert for content...") - hmodel = utils.get_hubert_model(0 if torch.cuda.is_available() else None) - print("Loaded hubert.") - - filenames = glob(f'{args.in_dir}/*/*.wav', recursive=True)#[:10] - - for filename in tqdm(filenames): - process(filename) - \ No newline at end of file diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/label_map_util.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/label_map_util.py deleted file mode 100644 index e258e3ab57fbe0de3aeb664e64f5df5a6dc5111d..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/label_map_util.py +++ /dev/null @@ -1,181 +0,0 @@ -# Copyright 2017 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Label map utility functions.""" - -import logging - -import tensorflow as tf -from google.protobuf import text_format -import string_int_label_map_pb2 - - -def _validate_label_map(label_map): - """Checks if a label map is valid. - - Args: - label_map: StringIntLabelMap to validate. - - Raises: - ValueError: if label map is invalid. - """ - for item in label_map.item: - if item.id < 0: - raise ValueError('Label map ids should be >= 0.') - if (item.id == 0 and item.name != 'background' and - item.display_name != 'background'): - raise ValueError('Label map id 0 is reserved for the background label') - - -def create_category_index(categories): - """Creates dictionary of COCO compatible categories keyed by category id. 
- - Args: - categories: a list of dicts, each of which has the following keys: - 'id': (required) an integer id uniquely identifying this category. - 'name': (required) string representing category name - e.g., 'cat', 'dog', 'pizza'. - - Returns: - category_index: a dict containing the same entries as categories, but keyed - by the 'id' field of each category. - """ - category_index = {} - for cat in categories: - category_index[cat['id']] = cat - return category_index - - -def get_max_label_map_index(label_map): - """Get maximum index in label map. - - Args: - label_map: a StringIntLabelMapProto - - Returns: - an integer - """ - return max([item.id for item in label_map.item]) - - -def convert_label_map_to_categories(label_map, - max_num_classes, - use_display_name=True): - """Loads label map proto and returns categories list compatible with eval. - - This function loads a label map and returns a list of dicts, each of which - has the following keys: - 'id': (required) an integer id uniquely identifying this category. - 'name': (required) string representing category name - e.g., 'cat', 'dog', 'pizza'. - We only allow class into the list if its id-label_id_offset is - between 0 (inclusive) and max_num_classes (exclusive). - If there are several items mapping to the same id in the label map, - we will only keep the first one in the categories list. - - Args: - label_map: a StringIntLabelMapProto or None. If None, a default categories - list is created with max_num_classes categories. - max_num_classes: maximum number of (consecutive) label indices to include. - use_display_name: (boolean) choose whether to load 'display_name' field - as category name. If False or if the display_name field does not exist, - uses 'name' field as category names instead. - Returns: - categories: a list of dictionaries representing all possible categories. - """ - categories = [] - list_of_ids_already_added = [] - if not label_map: - label_id_offset = 1 - for class_id in range(max_num_classes): - categories.append({ - 'id': class_id + label_id_offset, - 'name': 'category_{}'.format(class_id + label_id_offset) - }) - return categories - for item in label_map.item: - if not 0 < item.id <= max_num_classes: - logging.info('Ignore item %d since it falls outside of requested ' - 'label range.', item.id) - continue - if use_display_name and item.HasField('display_name'): - name = item.display_name - else: - name = item.name - if item.id not in list_of_ids_already_added: - list_of_ids_already_added.append(item.id) - categories.append({'id': item.id, 'name': name}) - return categories - - -def load_labelmap(path): - """Loads label map proto. - - Args: - path: path to StringIntLabelMap proto text file. - Returns: - a StringIntLabelMapProto - """ - with tf.gfile.GFile(path, 'r') as fid: - label_map_string = fid.read() - label_map = string_int_label_map_pb2.StringIntLabelMap() - try: - text_format.Merge(label_map_string, label_map) - except text_format.ParseError: - label_map.ParseFromString(label_map_string) - _validate_label_map(label_map) - return label_map - - -def get_label_map_dict(label_map_path, use_display_name=False): - """Reads a label map and returns a dictionary of label names to id. - - Args: - label_map_path: path to label_map. - use_display_name: whether to use the label map items' display names as keys. - - Returns: - A dictionary mapping label names to id. 
- """ - label_map = load_labelmap(label_map_path) - label_map_dict = {} - for item in label_map.item: - if use_display_name: - label_map_dict[item.display_name] = item.id - else: - label_map_dict[item.name] = item.id - return label_map_dict - - -def create_category_index_from_labelmap(label_map_path): - """Reads a label map and returns a category index. - - Args: - label_map_path: Path to `StringIntLabelMap` proto text file. - - Returns: - A category index, which is a dictionary that maps integer ids to dicts - containing categories, e.g. - {1: {'id': 1, 'name': 'dog'}, 2: {'id': 2, 'name': 'cat'}, ...} - """ - label_map = load_labelmap(label_map_path) - max_num_classes = max(item.id for item in label_map.item) - categories = convert_label_map_to_categories(label_map, max_num_classes) - return create_category_index(categories) - - -def create_class_agnostic_category_index(): - """Creates a category index with a single `object` class.""" - return {1: {'id': 1, 'name': 'object'}} diff --git a/spaces/Not-Grim-Refer/huggingface-transformers-agents/app.py b/spaces/Not-Grim-Refer/huggingface-transformers-agents/app.py deleted file mode 100644 index 2dc7fdc9bcfb37002a1be112a27a5d4788b0f5b3..0000000000000000000000000000000000000000 --- a/spaces/Not-Grim-Refer/huggingface-transformers-agents/app.py +++ /dev/null @@ -1,49 +0,0 @@ -import gradio as gr - -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Conversation, pipeline - -# Load the best pre-trained models and tokenizers for coding tasks -models_and_tokenizers = [ - ("EleutherAI/gpt-neo-2.7B", AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B"), AutoModelForSeq2SeqLM.from_pretrained("EleutherAI/gpt-neo-2.7B")), - ("Bard", AutoTokenizer.from_pretrained("bard"), AutoModelForSeq2SeqLM.from_pretrained("bard")), - ("Turing NLG", AutoTokenizer.from_pretrained("Turing NLG"), AutoModelForSeq2SeqLM.from_pretrained("Turing NLG")), - ("GPT-3", AutoTokenizer.from_pretrained("gpt-3"), AutoModelForSeq2SeqLM.from_pretrained("gpt-3")), - ("GPT-J", AutoTokenizer.from_pretrained("gpt-j"), AutoModelForSeq2SeqLM.from_pretrained("gpt-j")), -] - -# Create the conversational pipeline -conversational_pipeline = pipeline("conversational", model=models_and_tokenizers[0][1], tokenizer=models_and_tokenizers[0][0]) - -# Define a function to handle conversation with multiple models -def handle_conversation(models, prompt): - responses = [] - for model, tokenizer in models: - conversation = Conversation(prompt) - response = pipeline("conversational", model=model, tokenizer=tokenizer)(conversation) - responses.append(response.generated_responses[-1]) - return responses - -# Replace the following line with the user's input code snippet -user_code = """ -def reverse_prompt_engineer(code): - # TODO: Reverse prompt engineer the code - return None -""" - -# Use the handle_conversation function to get responses from multiple models -responses = handle_conversation(models_and_tokenizers, f"Now I want you to reverse prompt engineer the {user_code}. Give me a single prompt that would create a similar output.") -print(responses) - -# Instruct the user how to use the tool -print("To use this tool, simply paste your code snippet into the `user_code` variable and then run the code. 
The tool will then generate a prompt that can be used to create similar code.") - -# Create the interface -app = gr.Interface( - fn=handle_conversation, - inputs="text", - outputs="text", - title="Reverse Prompt Engineer", - description="Generate a prompt that can be used to create similar code.", -) - -app.launch() \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/audio_finetuning.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/audio_finetuning.py deleted file mode 100644 index 4ef87c604f00581f03075e9ebe10a43dd51d6e45..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/audio_finetuning.py +++ /dev/null @@ -1,346 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -import logging -import os -import torch -import json - -from argparse import Namespace -from dataclasses import dataclass, field -from typing import Optional, Any - -from fairseq.data import AddTargetDataset, Dictionary, encoders -from fairseq.tasks.audio_pretraining import AudioPretrainingTask, AudioPretrainingConfig -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.configs import GenerationConfig -from fairseq.data.text_compressor import TextCompressor, TextCompressionLevel - -from . import register_task -from .. import utils -from ..logging import metrics - - -logger = logging.getLogger(__name__) - - -class LabelEncoder(object): - def __init__(self, dictionary): - self.dictionary = dictionary - - def __call__(self, label): - return self.dictionary.encode_line( - label, append_eos=False, add_if_not_exist=False - ) - - -def label_len_fn(label): - return len(label.split(" ")) - - -@dataclass -class AudioFinetuningConfig(AudioPretrainingConfig): - # Options for reporting WER metrics during validation. 
Only applicable to - # Seq2Seq models during fine-tuning - eval_wer: bool = field( - default=False, metadata={"help": "compute WER for Seq2Seq models"} - ) - eval_wer_config: GenerationConfig = field( - default_factory=lambda: GenerationConfig(), - metadata={"help": "beam search config for evaluating wer during training"}, - ) - eval_wer_tokenizer: Any = field( - default=None, - metadata={"help": "tokenizer config for evaluating wer during training"}, - ) - eval_wer_post_process: str = field( - default="letter", - metadata={ - "help": "remove BPE tokens before scoring (can be sentencepiece, letter, and more)" - }, - ) - eval_bleu: bool = field( - default=False, metadata={"help": "evaluation with BLEU scores"} - ) - eval_bleu_detok: Optional[str] = field( - default=None, metadata={ - "help": "detokenize before computing BLEU (e.g., 'moses'); " - "required if using --eval-bleu; use 'space' to disable " - "detokenization; see fairseq.data.encoders for other options" - } - ) - eval_bleu_detok_args: str = field( - default="{}", - metadata={"help": "args for building the tokenizer, if needed"} - ) - eval_tokenized_bleu: bool = field( - default=False, - metadata={"help": "compute tokenized BLEU instead of sacrebleu"} - ) - eval_bleu_remove_bpe: Optional[str] = field( - default=None, metadata={"help": "remove BPE before computing BLEU"} - ) - eval_bleu_args: str = field( - default="{}", - metadata={"help": "generation args for BLUE scoring, e.g., " - "'{\"beam\": 4, \"lenpen\": 0.6}'"} - ) - eval_bleu_print_samples: bool = field( - default=False, - metadata={"help": "print sample generations during validation"} - ) - autoregressive: bool = field( - default=False, - metadata={ - "help": "required for autoregressive decoders (like seq2seq models); " - "adds 'prev_output_tokens' to input and appends eos to target" - }, - ) - - -@register_task("audio_finetuning", dataclass=AudioFinetuningConfig) -class AudioFinetuningTask(AudioPretrainingTask): - """ """ - - cfg: AudioFinetuningConfig - - def __init__( - self, - cfg: AudioFinetuningConfig, - ): - super().__init__(cfg) - self.blank_symbol = "" - - self.state.add_factory("target_dictionary", self.load_target_dictionary) - - def load_target_dictionary(self): - if self.cfg.labels: - dict_path = os.path.join(self.cfg.data, f"dict.{self.cfg.labels}.txt") - return Dictionary.load(dict_path) - return None - - def load_dataset(self, split: str, task_cfg: AudioFinetuningConfig = None, **kwargs): - super().load_dataset(split, task_cfg, **kwargs) - - task_cfg = task_cfg or self.cfg - assert task_cfg.labels is not None - text_compression_level = getattr( - TextCompressionLevel, str(self.cfg.text_compression_level) - ) - data_path = self.cfg.data - label_path = os.path.join(data_path, f"{split}.{task_cfg.labels}") - skipped_indices = getattr(self.datasets[split], "skipped_indices", set()) - text_compressor = TextCompressor(level=text_compression_level) - with open(label_path, "r") as f: - labels = [ - text_compressor.compress(l) - for i, l in enumerate(f) if i not in skipped_indices - ] - - assert len(labels) == len(self.datasets[split]), ( - f"labels length ({len(labels)}) and dataset length " - f"({len(self.datasets[split])}) do not match" - ) - - process_label = LabelEncoder(self.target_dictionary) - - self.datasets[split] = AddTargetDataset( - self.datasets[split], - labels, - pad=self.target_dictionary.pad(), - eos=self.target_dictionary.eos(), - batch_targets=True, - process_label=process_label, - label_len_fn=label_len_fn, - 
add_to_input=task_cfg.get("autoregressive", False), - text_compression_level=text_compression_level - ) - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.state.target_dictionary - - def valid_step(self, sample, model, criterion): - loss, sample_size, logging_output = super().valid_step(sample, model, criterion) - if self.cfg.eval_wer and self.cfg.autoregressive: - metrics = self._inference_with_wer(self.sequence_generator, sample, model) - logging_output["_num_char_errors"] = metrics["num_char_errors"] - logging_output["_num_chars"] = metrics["num_chars"] - logging_output["_num_word_errors"] = metrics["num_word_errors"] - logging_output["_num_words"] = metrics["num_words"] - if self.cfg.eval_bleu and self.cfg.autoregressive: - metrics = self._inference_with_bleu(self.sequence_generator, sample, model) - logging_output['_bleu_sys_len'] = metrics.sys_len - logging_output['_bleu_ref_len'] = metrics.ref_len - # we split counts into separate entries so that they can be - # summed efficiently across workers using fast-stat-sync - assert len(metrics.counts) == 4 - for i in range(4): - logging_output[f"_bleu_counts_{i}"] = metrics.counts[i] - logging_output[f"_bleu_totals_{i}"] = metrics.totals[i] - return loss, sample_size, logging_output - - def build_model(self, model_cfg: FairseqDataclass): - model = super().build_model(model_cfg) - - if self.cfg.eval_wer and self.cfg.autoregressive: - self.sequence_generator = self.build_generator( - [model], - self.cfg.eval_wer_config, - ) - if self.cfg.eval_wer_tokenizer: - self.tokenizer = encoders.build_tokenizer(self.cfg.eval_wer_tokenizer) - else: - self.tokenizer = None - if self.cfg.eval_bleu and self.cfg.autoregressive: - assert self.cfg.eval_bleu_detok is not None, ( - '--eval-bleu-detok is required if using --eval-bleu; ' - 'try --eval-bleu-detok=moses (or --eval-bleu-detok=space ' - 'to disable detokenization, e.g., when using sentencepiece)' - ) - detok_args = json.loads(self.cfg.eval_bleu_detok_args) - self.tokenizer = encoders.build_tokenizer( - Namespace(tokenizer=self.cfg.eval_bleu_detok, **detok_args) - ) - gen_args = json.loads(self.cfg.eval_bleu_args) - gen_args = Namespace(**gen_args) - self.sequence_generator = self.build_generator([model], gen_args) - - return model - - def _inference_with_wer(self, generator, sample, model): - import editdistance - - def decode(toks): - s = self.target_dictionary.string( - toks.int().cpu(), - self.cfg.eval_wer_post_process, - escape_unk=True, - ) - if self.tokenizer: - s = self.tokenizer.decode(s) - return s - - num_word_errors, num_char_errors = 0, 0 - num_chars, num_words = 0, 0 - gen_out = self.inference_step(generator, [model], sample, None) - for i in range(len(gen_out)): - hyp = decode(gen_out[i][0]["tokens"]) - ref = decode( - utils.strip_pad(sample["target"][i], self.target_dictionary.pad()), - ) - num_char_errors += editdistance.eval(hyp, ref) - num_chars += len(ref) - hyp_words = hyp.split() - ref_words = ref.split() - num_word_errors += editdistance.eval(hyp_words, ref_words) - num_words += len(ref_words) - - return { - "num_char_errors": num_char_errors, - "num_chars": num_chars, - "num_word_errors": num_word_errors, - "num_words": num_words, - } - - def _inference_with_bleu(self, generator, sample, model): - import sacrebleu - - def decode(toks, is_ref): - s = self.target_dictionary.string( - toks.int().cpu(), - self.cfg.eval_bleu_remove_bpe, - # The default unknown string in fairseq is ``, but - # this 
is tokenized by sacrebleu as `< unk >`, inflating - # BLEU scores. Instead, we use a somewhat more verbose - # alternative that is unlikely to appear in the real - # reference, but doesn't get split into multiple tokens. - unk_string=( - "UNKNOWNTOKENINREF" if is_ref else "UNKNOWNTOKENINHYP" - ), - ) - if self.tokenizer: - s = self.tokenizer.decode(s) - return s - - gen_out = self.inference_step(generator, [model], sample) - hyps, refs = [], [] - for i in range(len(gen_out)): - hyps.append(decode(gen_out[i][0]['tokens'], is_ref=False)) - refs.append( - decode( - utils.strip_pad( - sample['target'][i], - self.target_dictionary.pad() - ), - is_ref=True, # don't count as matches to the hypo - ) - ) - if self.cfg.eval_bleu_print_samples: - logger.info('H-{} {}'.format(sample["id"][0], hyps[0])) - logger.info('T-{} {}'.format(sample["id"][0], refs[0])) - - eval_tokenization = 'none' if self.cfg.eval_tokenized_bleu else '13a' - return sacrebleu.corpus_bleu(hyps, [refs], tokenize=eval_tokenization) - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - - if self.cfg.eval_wer: - zero = torch.scalar_tensor(0.0) - num_char_errors = sum( - log.get("_num_char_errors", zero) for log in logging_outputs - ) - num_chars = sum(log.get("_num_chars", zero) for log in logging_outputs) - num_word_errors = sum( - log.get("_num_word_errors", zero) for log in logging_outputs - ) - num_words = sum(log.get("_num_words", zero) for log in logging_outputs) - metrics.log_scalar("_num_char_errors", num_char_errors) - metrics.log_scalar("_num_chars", num_chars) - metrics.log_scalar("_num_word_errors", num_word_errors) - metrics.log_scalar("_num_words", num_words) - if num_chars > 0: - metrics.log_derived( - "uer", - lambda meters: meters["_num_char_errors"].sum - * 100.0 - / meters["_num_chars"].sum - if meters["_num_chars"].sum > 0 - else float("nan"), - ) - if num_words > 0: - metrics.log_derived( - "wer", - lambda meters: meters["_num_word_errors"].sum - * 100.0 - / meters["_num_words"].sum - if meters["_num_words"].sum > 0 - else float("nan"), - ) - if self.cfg.eval_bleu: - len_keys = ["_bleu_sys_len", "_bleu_ref_len"] - count_keys = [f"_bleu_counts_{i}" for i in range(4)] - total_keys = [f"_bleu_totals_{i}" for i in range(4)] - for k in len_keys + count_keys + total_keys: - metrics.log_scalar( - k, sum(log.get(k, 0) for log in logging_outputs) - ) - - import sacrebleu - metrics.log_derived( - 'bleu', - lambda meters: sacrebleu.compute_bleu( - correct=[meters[k].sum for k in count_keys], - total=[meters[k].sum for k in total_keys], - sys_len=meters['_bleu_sys_len'].sum, - ref_len=meters['_bleu_ref_len'].sum, - smooth_method="exp" - ).score - ) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/scripts/binarize_manifest.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/scripts/binarize_manifest.sh deleted file mode 100644 index 6f201bdb524fad51a69d8c45889eaa1578efc62d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/scripts/binarize_manifest.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/usr/bin/env bash - -# usage: bash binarize_manifest - -DEST_DIR=$1 -TRAIN_SPLIT=$2 -VALID_SPLIT=$3 -FAIRSEQ_ROOT=$4 - -mkdir -p $DEST_DIR - -# split file path and lengths into separate files -cut -f1 $TRAIN_SPLIT.tsv > $DEST_DIR/train_fnames.txt -cut -f1 $VALID_SPLIT.tsv > $DEST_DIR/valid_fnames.txt -cut -f2 $TRAIN_SPLIT.tsv > $DEST_DIR/train.lengths -cut -f2 $VALID_SPLIT.tsv > 
$DEST_DIR/valid.lengths - -# copy root directory -head -1 $TRAIN_SPLIT.tsv > $DEST_DIR/train.root -head -1 $VALID_SPLIT.tsv > $DEST_DIR/valid.root - -# remove root directory -sed -i '1d' $DEST_DIR/train_fnames.txt -sed -i '1d' $DEST_DIR/valid_fnames.txt -sed -i '1d' $DEST_DIR/train.lengths -sed -i '1d' $DEST_DIR/valid.lengths - -# insert spaces between characters -sed -i -e 's/\(.\)/\1 /g' $DEST_DIR/train_fnames.txt -sed -i -e 's/\(.\)/\1 /g' $DEST_DIR/valid_fnames.txt - -# run preprocessor -PYTHONPATH=$FAIRSEQ_ROOT python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $DEST_DIR/train_fnames.txt --validpref $DEST_DIR/valid_fnames.txt --workers 60 --only-source --destdir $DEST_DIR diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/legacy/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/legacy/__init__.py deleted file mode 100644 index 9bd5c72b5e9d7f67fb7e4ef10808d7ec08967ff4..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/legacy/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .block_pair_dataset import BlockPairDataset -from .masked_lm_dataset import MaskedLMDataset -from .masked_lm_dictionary import BertDictionary, MaskedLMDictionary - - -__all__ = [ - "BertDictionary", - "BlockPairDataset", - "MaskedLMDataset", - "MaskedLMDictionary", -] diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/speech_to_text/berard.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/speech_to_text/berard.py deleted file mode 100644 index c505e3acaa84e5f3263ccbfaf9556f77123f09fc..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/speech_to_text/berard.py +++ /dev/null @@ -1,606 +0,0 @@ -#!/usr/bin/env python3 - -from ast import literal_eval -from typing import List, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import checkpoint_utils, utils -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) - - -@register_model("s2t_berard") -class BerardModel(FairseqEncoderDecoderModel): - """Implementation of a model similar to https://arxiv.org/abs/1802.04200 - - Paper title: End-to-End Automatic Speech Translation of Audiobooks - An implementation is available in tensorflow at - https://github.com/eske/seq2seq - Relevant files in this implementation are the config - (https://github.com/eske/seq2seq/blob/master/config/LibriSpeech/AST.yaml) - and the model code - (https://github.com/eske/seq2seq/blob/master/translate/models.py). - The encoder and decoder try to be close to the original implementation. - The attention is an MLP as in Bahdanau et al. - (https://arxiv.org/abs/1409.0473). - There is no state initialization by averaging the encoder outputs. - """ - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - parser.add_argument( - "--input-layers", - type=str, - metavar="EXPR", - help="List of linear layer dimensions. 
These " - "layers are applied to the input features and " - "are followed by tanh and possibly dropout.", - ) - parser.add_argument( - "--dropout", - type=float, - metavar="D", - help="Dropout probability to use in the encoder/decoder. " - "Note that this parameters control dropout in various places, " - "there is no fine-grained control for dropout for embeddings " - "vs LSTM layers for example.", - ) - parser.add_argument( - "--in-channels", - type=int, - metavar="N", - help="Number of encoder input channels. " "Typically value is 1.", - ) - parser.add_argument( - "--conv-layers", - type=str, - metavar="EXPR", - help="List of conv layers " "(format: (channels, kernel, stride)).", - ) - parser.add_argument( - "--num-blstm-layers", - type=int, - metavar="N", - help="Number of encoder bi-LSTM layers.", - ) - parser.add_argument( - "--lstm-size", type=int, metavar="N", help="LSTM hidden size." - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="Embedding dimension of the decoder target tokens.", - ) - parser.add_argument( - "--decoder-hidden-dim", - type=int, - metavar="N", - help="Decoder LSTM hidden dimension.", - ) - parser.add_argument( - "--decoder-num-layers", - type=int, - metavar="N", - help="Number of decoder LSTM layers.", - ) - parser.add_argument( - "--attention-dim", - type=int, - metavar="N", - help="Hidden layer dimension in MLP attention.", - ) - parser.add_argument( - "--output-layer-dim", - type=int, - metavar="N", - help="Hidden layer dim for linear layer prior to output projection.", - ) - parser.add_argument( - "--load-pretrained-encoder-from", - type=str, - metavar="STR", - help="model to take encoder weights from (for initialization)", - ) - parser.add_argument( - "--load-pretrained-decoder-from", - type=str, - metavar="STR", - help="model to take decoder weights from (for initialization)", - ) - - @classmethod - def build_encoder(cls, args, task): - encoder = BerardEncoder( - input_layers=literal_eval(args.input_layers), - conv_layers=literal_eval(args.conv_layers), - in_channels=args.input_channels, - input_feat_per_channel=args.input_feat_per_channel, - num_blstm_layers=args.num_blstm_layers, - lstm_size=args.lstm_size, - dropout=args.dropout, - ) - if getattr(args, "load_pretrained_encoder_from", None): - encoder = checkpoint_utils.load_pretrained_component_from_model( - component=encoder, checkpoint=args.load_pretrained_encoder_from - ) - return encoder - - @classmethod - def build_decoder(cls, args, task): - decoder = LSTMDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - num_layers=args.decoder_num_layers, - hidden_size=args.decoder_hidden_dim, - dropout=args.dropout, - encoder_output_dim=2 * args.lstm_size, # bidirectional - attention_dim=args.attention_dim, - output_layer_dim=args.output_layer_dim, - ) - if getattr(args, "load_pretrained_decoder_from", None): - decoder = checkpoint_utils.load_pretrained_component_from_model( - component=decoder, checkpoint=args.load_pretrained_decoder_from - ) - return decoder - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - encoder = cls.build_encoder(args, task) - decoder = cls.build_decoder(args, task) - - return cls(encoder, decoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - # net_output['encoder_out'] is a (B, T, D) tensor - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - # lprobs is a (B, T, D) tensor - lprobs.batch_first = True - return lprobs - - -class 
BerardEncoder(FairseqEncoder): - def __init__( - self, - input_layers: List[int], - conv_layers: List[Tuple[int]], - in_channels: int, - input_feat_per_channel: int, - num_blstm_layers: int, - lstm_size: int, - dropout: float, - ): - """ - Args: - input_layers: list of linear layer dimensions. These layers are - applied to the input features and are followed by tanh and - possibly dropout. - conv_layers: list of conv2d layer configurations. A configuration is - a tuple (out_channels, conv_kernel_size, stride). - in_channels: number of input channels. - input_feat_per_channel: number of input features per channel. These - are speech features, typically 40 or 80. - num_blstm_layers: number of bidirectional LSTM layers. - lstm_size: size of the LSTM hidden (and cell) size. - dropout: dropout probability. Dropout can be applied after the - linear layers and LSTM layers but not to the convolutional - layers. - """ - super().__init__(None) - - self.input_layers = nn.ModuleList() - in_features = input_feat_per_channel - for out_features in input_layers: - if dropout > 0: - self.input_layers.append( - nn.Sequential( - nn.Linear(in_features, out_features), nn.Dropout(p=dropout) - ) - ) - else: - self.input_layers.append(nn.Linear(in_features, out_features)) - in_features = out_features - - self.in_channels = in_channels - self.input_dim = input_feat_per_channel - self.conv_kernel_sizes_and_strides = [] - self.conv_layers = nn.ModuleList() - lstm_input_dim = input_layers[-1] - for conv_layer in conv_layers: - out_channels, conv_kernel_size, conv_stride = conv_layer - self.conv_layers.append( - nn.Conv2d( - in_channels, - out_channels, - conv_kernel_size, - stride=conv_stride, - padding=conv_kernel_size // 2, - ) - ) - self.conv_kernel_sizes_and_strides.append((conv_kernel_size, conv_stride)) - in_channels = out_channels - lstm_input_dim //= conv_stride - - lstm_input_dim *= conv_layers[-1][0] - self.lstm_size = lstm_size - self.num_blstm_layers = num_blstm_layers - self.lstm = nn.LSTM( - input_size=lstm_input_dim, - hidden_size=lstm_size, - num_layers=num_blstm_layers, - dropout=dropout, - bidirectional=True, - ) - self.output_dim = 2 * lstm_size # bidirectional - if dropout > 0: - self.dropout = nn.Dropout(p=dropout) - else: - self.dropout = None - - def forward(self, src_tokens, src_lengths=None, **kwargs): - """ - Args - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - bsz, max_seq_len, _ = src_tokens.size() - # (B, C, T, feat) - x = ( - src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim) - .transpose(1, 2) - .contiguous() - ) - - for input_layer in self.input_layers: - x = input_layer(x) - x = torch.tanh(x) - - for conv_layer in self.conv_layers: - x = conv_layer(x) - - bsz, _, output_seq_len, _ = x.size() - - # (B, C, T, feat) -> (B, T, C, feat) -> (T, B, C, feat) -> - # (T, B, C * feat) - x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1) - - input_lengths = src_lengths.clone() - for k, s in self.conv_kernel_sizes_and_strides: - p = k // 2 - input_lengths = (input_lengths.float() + 2 * p - k) / s + 1 - input_lengths = input_lengths.floor().long() - - packed_x = nn.utils.rnn.pack_padded_sequence(x, input_lengths) - - h0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_() - c0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_() - packed_outs, _ = self.lstm(packed_x, (h0, c0)) - - # unpack outputs and apply dropout - x, output_lengths = 
nn.utils.rnn.pad_packed_sequence(packed_outs) - if self.dropout is not None: - x = self.dropout(x) - - encoder_padding_mask = ( - lengths_to_padding_mask(output_lengths).to(src_tokens.device).t() - ) - - return { - "encoder_out": x, # (T, B, C) - "encoder_padding_mask": encoder_padding_mask, # (T, B) - } - - def reorder_encoder_out(self, encoder_out, new_order): - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out - - -class MLPAttention(nn.Module): - """The original attention from Badhanau et al. (2014) - - https://arxiv.org/abs/1409.0473, based on a Multi-Layer Perceptron. - The attention score between position i in the encoder and position j in the - decoder is: alpha_ij = V_a * tanh(W_ae * enc_i + W_ad * dec_j + b_a) - """ - - def __init__(self, decoder_hidden_state_dim, context_dim, attention_dim): - super().__init__() - - self.context_dim = context_dim - self.attention_dim = attention_dim - # W_ae and b_a - self.encoder_proj = nn.Linear(context_dim, self.attention_dim, bias=True) - # W_ad - self.decoder_proj = nn.Linear( - decoder_hidden_state_dim, self.attention_dim, bias=False - ) - # V_a - self.to_scores = nn.Linear(self.attention_dim, 1, bias=False) - - def forward(self, decoder_state, source_hids, encoder_padding_mask): - """The expected input dimensions are: - decoder_state: bsz x decoder_hidden_state_dim - source_hids: src_len x bsz x context_dim - encoder_padding_mask: src_len x bsz - """ - src_len, bsz, _ = source_hids.size() - # (src_len*bsz) x context_dim (to feed through linear) - flat_source_hids = source_hids.view(-1, self.context_dim) - # (src_len*bsz) x attention_dim - encoder_component = self.encoder_proj(flat_source_hids) - # src_len x bsz x attention_dim - encoder_component = encoder_component.view(src_len, bsz, self.attention_dim) - # 1 x bsz x attention_dim - decoder_component = self.decoder_proj(decoder_state).unsqueeze(0) - # Sum with broadcasting and apply the non linearity - # src_len x bsz x attention_dim - hidden_att = torch.tanh( - (decoder_component + encoder_component).view(-1, self.attention_dim) - ) - # Project onto the reals to get attentions scores (src_len x bsz) - attn_scores = self.to_scores(hidden_att).view(src_len, bsz) - - # Mask + softmax (src_len x bsz) - if encoder_padding_mask is not None: - attn_scores = ( - attn_scores.float() - .masked_fill_(encoder_padding_mask, float("-inf")) - .type_as(attn_scores) - ) # FP16 support: cast to float and back - # srclen x bsz - normalized_masked_attn_scores = F.softmax(attn_scores, dim=0) - - # Sum weighted sources (bsz x context_dim) - attn_weighted_context = ( - source_hids * normalized_masked_attn_scores.unsqueeze(2) - ).sum(dim=0) - - return attn_weighted_context, normalized_masked_attn_scores - - -class LSTMDecoder(FairseqIncrementalDecoder): - def __init__( - self, - dictionary, - embed_dim, - num_layers, - hidden_size, - dropout, - encoder_output_dim, - attention_dim, - output_layer_dim, - ): - """ - Args: - dictionary: target text dictionary. - embed_dim: embedding dimension for target tokens. - num_layers: number of LSTM layers. - hidden_size: hidden size for LSTM layers. - dropout: dropout probability. Dropout can be applied to the - embeddings, the LSTM layers, and the context vector. - encoder_output_dim: encoder output dimension (hidden size of - encoder LSTM). - attention_dim: attention dimension for MLP attention. 
- output_layer_dim: size of the linear layer prior to output - projection. - """ - super().__init__(dictionary) - self.num_layers = num_layers - self.hidden_size = hidden_size - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - self.embed_tokens = nn.Embedding(num_embeddings, embed_dim, padding_idx) - if dropout > 0: - self.dropout = nn.Dropout(p=dropout) - else: - self.dropout = None - - self.layers = nn.ModuleList() - for layer_id in range(num_layers): - input_size = embed_dim if layer_id == 0 else encoder_output_dim - self.layers.append( - nn.LSTMCell(input_size=input_size, hidden_size=hidden_size) - ) - - self.context_dim = encoder_output_dim - self.attention = MLPAttention( - decoder_hidden_state_dim=hidden_size, - context_dim=encoder_output_dim, - attention_dim=attention_dim, - ) - - self.deep_output_layer = nn.Linear( - hidden_size + encoder_output_dim + embed_dim, output_layer_dim - ) - self.output_projection = nn.Linear(output_layer_dim, num_embeddings) - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs - ): - encoder_padding_mask = encoder_out["encoder_padding_mask"] - encoder_outs = encoder_out["encoder_out"] - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - bsz, seqlen = prev_output_tokens.size() - - srclen = encoder_outs.size(0) - - # embed tokens - embeddings = self.embed_tokens(prev_output_tokens) - x = embeddings - if self.dropout is not None: - x = self.dropout(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # initialize previous states (or get from cache during incremental - # generation) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is not None: - prev_hiddens, prev_cells = cached_state - else: - prev_hiddens = [encoder_out["encoder_out"].mean(dim=0)] * self.num_layers - prev_cells = [x.new_zeros(bsz, self.hidden_size)] * self.num_layers - - attn_scores = x.new_zeros(bsz, srclen) - attention_outs = [] - outs = [] - for j in range(seqlen): - input = x[j, :, :] - attention_out = None - for i, layer in enumerate(self.layers): - # the previous state is one layer below except for the bottom - # layer where the previous state is the state emitted by the - # top layer - hidden, cell = layer( - input, - ( - prev_hiddens[(i - 1) % self.num_layers], - prev_cells[(i - 1) % self.num_layers], - ), - ) - if self.dropout is not None: - hidden = self.dropout(hidden) - prev_hiddens[i] = hidden - prev_cells[i] = cell - if attention_out is None: - attention_out, attn_scores = self.attention( - hidden, encoder_outs, encoder_padding_mask - ) - if self.dropout is not None: - attention_out = self.dropout(attention_out) - attention_outs.append(attention_out) - input = attention_out - - # collect the output of the top layer - outs.append(hidden) - - # cache previous states (no-op except during incremental generation) - utils.set_incremental_state( - self, incremental_state, "cached_state", (prev_hiddens, prev_cells) - ) - - # collect outputs across time steps - x = torch.cat(outs, dim=0).view(seqlen, bsz, self.hidden_size) - attention_outs_concat = torch.cat(attention_outs, dim=0).view( - seqlen, bsz, self.context_dim - ) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - attention_outs_concat = attention_outs_concat.transpose(0, 1) - - # concat LSTM output, attention output and embedding - # before output projection - x = torch.cat((x, attention_outs_concat, embeddings), dim=2) - x = self.deep_output_layer(x) - x = 
torch.tanh(x) - if self.dropout is not None: - x = self.dropout(x) - # project back to size of vocabulary - x = self.output_projection(x) - - # to return the full attn_scores tensor, we need to fix the decoder - # to account for subsampling input frames - # return x, attn_scores - return x, None - - def reorder_incremental_state(self, incremental_state, new_order): - super().reorder_incremental_state(incremental_state, new_order) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is None: - return - - def reorder_state(state): - if isinstance(state, list): - return [reorder_state(state_i) for state_i in state] - return state.index_select(0, new_order) - - new_state = tuple(map(reorder_state, cached_state)) - utils.set_incremental_state(self, incremental_state, "cached_state", new_state) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard") -def berard(args): - """The original version: "End-to-End Automatic Speech Translation of - Audiobooks" (https://arxiv.org/abs/1802.04200) - """ - args.input_layers = getattr(args, "input_layers", "[256, 128]") - args.conv_layers = getattr(args, "conv_layers", "[(16, 3, 2), (16, 3, 2)]") - args.num_blstm_layers = getattr(args, "num_blstm_layers", 3) - args.lstm_size = getattr(args, "lstm_size", 256) - args.dropout = getattr(args, "dropout", 0.2) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 128) - args.decoder_num_layers = getattr(args, "decoder_num_layers", 2) - args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 512) - args.attention_dim = getattr(args, "attention_dim", 512) - args.output_layer_dim = getattr(args, "output_layer_dim", 128) - args.load_pretrained_encoder_from = getattr( - args, "load_pretrained_encoder_from", None - ) - args.load_pretrained_decoder_from = getattr( - args, "load_pretrained_decoder_from", None - ) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_256_3_3") -def berard_256_3_3(args): - """Used in - * "Harnessing Indirect Training Data for End-to-End Automatic Speech - Translation: Tricks of the Trade" (https://arxiv.org/abs/1909.06515) - * "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus" - (https://arxiv.org/pdf/2002.01320.pdf) - * "Self-Supervised Representations Improve End-to-End Speech Translation" - (https://arxiv.org/abs/2006.12124) - """ - args.decoder_num_layers = getattr(args, "decoder_num_layers", 3) - berard(args) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_3_2") -def berard_512_3_2(args): - args.num_blstm_layers = getattr(args, "num_blstm_layers", 3) - args.lstm_size = getattr(args, "lstm_size", 512) - args.dropout = getattr(args, "dropout", 0.3) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_num_layers = getattr(args, "decoder_num_layers", 2) - args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024) - args.attention_dim = getattr(args, "attention_dim", 512) - args.output_layer_dim = getattr(args, "output_layer_dim", 256) - berard(args) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_5_3") -def berard_512_5_3(args): - args.num_blstm_layers = getattr(args, "num_blstm_layers", 5) - args.lstm_size = getattr(args, "lstm_size", 512) - args.dropout = getattr(args, "dropout", 0.3) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_num_layers = getattr(args, "decoder_num_layers", 3) - 
args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024) - args.attention_dim = getattr(args, "attention_dim", 512) - args.output_layer_dim = getattr(args, "output_layer_dim", 256) - berard(args) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py deleted file mode 100644 index 50683e6d7c8c0db5b8f019e5f7f5fb8c6dfd9f66..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py +++ /dev/null @@ -1,585 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import copy - -import torch.nn as nn -from fairseq import checkpoint_utils -from fairseq import utils -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.models import ( - register_model, - register_model_architecture, - FairseqEncoder, -) -from fairseq.models.speech_to_text import XMTransformerModel, Wav2VecEncoderWithAdaptor -from fairseq.models.speech_to_text.xm_transformer import ( - set_default_adaptor_args, - set_default_w2v_encoder_args, -) -from fairseq.models.transformer import TransformerEncoder, TransformerDecoder -from fairseq.models.wav2vec import TransformerSentenceEncoderLayer -from fairseq.utils import safe_hasattr - -from .s2t_dualinputtransformer import ( - DualInputS2TTransformerModel, - TransformerMultiInputDecoder, - DualInputEncoder, -) - - -class TransformerSentenceEncoderLayerStd(TransformerSentenceEncoderLayer): - def __init__(self, sent_enc_layer): - super(TransformerSentenceEncoderLayer, self).__init__() - self.embedding_dim = sent_enc_layer.embedding_dim - self.dropout = sent_enc_layer.dropout - self.activation_dropout = sent_enc_layer.activation_dropout - - # Initialize blocks - self.activation_fn = sent_enc_layer.activation_fn - self.self_attn = sent_enc_layer.self_attn - - self.dropout1 = sent_enc_layer.dropout1 - self.dropout2 = sent_enc_layer.dropout2 - self.dropout3 = sent_enc_layer.dropout3 - - self.layer_norm_first = sent_enc_layer.layer_norm_first - - # layer norm associated with the self attention layer - self.self_attn_layer_norm = sent_enc_layer.self_attn_layer_norm - self.fc1 = sent_enc_layer.fc1 - self.fc2 = sent_enc_layer.fc2 - - # layer norm associated with the position wise feed-forward NN - self.final_layer_norm = sent_enc_layer.final_layer_norm - - def forward( - self, - x, - self_attn_mask=None, - self_attn_padding_mask=None, - need_weights=None, - att_args=None, - ): - x, attn = super().forward( - x, self_attn_mask, self_attn_padding_mask, need_weights, att_args - ) - return x - - -# TODO retire SharedEncoder -class SharedEncoder(FairseqEncoder): - def __init__(self, wav2vec_enc, mbart_enc, adaptor, shared_layers): - super().__init__(None) - self.w2v_encoder = wav2vec_enc - self.shared_layers = self.w2v_encoder.w2v_model.encoder.layers[-shared_layers:] - self.w2v_encoder.w2v_model.encoder.layers = ( - self.w2v_encoder.w2v_model.encoder.layers[:-shared_layers] - ) - self.adaptor = adaptor - if self.shared_layers[-1].layer_norm_first: - self.final_layer_norm = mbart_enc.layer_norm - else: - mbart_enc.layer_norm = None - self.final_layer_norm = None - shared_layer_from = len(mbart_enc.layers) - shared_layers - if shared_layer_from < 0: - shared_layer_from = 0 - for 
layer_id, layer in enumerate(self.shared_layers): - mbart_enc.layers[ - shared_layer_from + layer_id - ] = TransformerSentenceEncoderLayerStd(layer) - - def forward(self, src_tokens, src_lengths=None, **kwargs): - padding_mask = lengths_to_padding_mask(src_lengths) - if not padding_mask.any(): - padding_mask = None - - out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True) - x = out["encoder_out"] - enc_padding_mask = None - if out["encoder_padding_mask"] is not None: - enc_padding_mask = out["encoder_padding_mask"].transpose( - 0, 1 - ) # T X B --> B X T - - x, enc_padding_mask = self.adaptor(x, enc_padding_mask) - for layer in self.shared_layers: - x, _ = layer(x, enc_padding_mask) - if self.final_layer_norm is not None: - x = self.final_layer_norm(x) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [enc_padding_mask] - if enc_padding_mask is not None - else [], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": [], # List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } - - -class StackedWav2VecEncoderWithAdaptor(FairseqEncoder): - def __init__( - self, - wav2vec_enc, - mbart_enc_layers, - mbart_layer_norm, - adaptor, - drop_w2v_layers=0, - ): - super().__init__(None) - self.w2v_encoder = wav2vec_enc - self.adaptor = adaptor - self.mbart_encoder_layers = mbart_enc_layers - self.final_layer_norm = mbart_layer_norm - if drop_w2v_layers > 0: - self.w2v_encoder.w2v_model.encoder.layers = ( - self.w2v_encoder.w2v_model.encoder.layers[:-drop_w2v_layers] - ) - - def forward(self, src_tokens, src_lengths=None, return_all_hiddens=False, **kwargs): - padding_mask = lengths_to_padding_mask(src_lengths) - if not padding_mask.any(): - padding_mask = None - - out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True) - x = out["encoder_out"] - enc_padding_mask = None - if out["encoder_padding_mask"] is not None: - enc_padding_mask = out["encoder_padding_mask"].transpose( - 0, 1 - ) # T X B --> B X T - - x, enc_padding_mask = self.adaptor(x, enc_padding_mask) - encoder_states = [] - for layer in self.mbart_encoder_layers: - x = layer(x, enc_padding_mask) - if return_all_hiddens: - encoder_states.append(x) - if self.final_layer_norm is not None: - x = self.final_layer_norm(x) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [enc_padding_mask] - if enc_padding_mask is not None - else [], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } - - def reorder_encoder_out(self, encoder_out, new_order): - new_encoder_out = ( - [] - if len(encoder_out["encoder_out"]) == 0 - else [x.index_select(1, new_order) for x in encoder_out["encoder_out"]] - ) - - new_encoder_padding_mask = ( - [] - if len(encoder_out["encoder_padding_mask"]) == 0 - else [ - x.index_select(0, new_order) - for x in encoder_out["encoder_padding_mask"] - ] - ) - - new_encoder_embedding = ( - [] - if len(encoder_out["encoder_embedding"]) == 0 - else [ - x.index_select(0, new_order) for x in encoder_out["encoder_embedding"] - ] - ) - - encoder_states = encoder_out["encoder_states"] - if len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, # T x B x C - "encoder_padding_mask": new_encoder_padding_mask, # B x T - "encoder_embedding": new_encoder_embedding, # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], # B 
x T - "src_lengths": [], # B x 1 - } - - -# Note: -# dual input transformer: -# encoder: wav2vec for speech + mbart encoder for text -# decoder: mbart decoder for text -@register_model("dual_input_xm_transformer") -class DualInputXMTransformerModel(DualInputS2TTransformerModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # wav2vec encoder - Wav2VecEncoderWithAdaptor.add_args(parser) - # add_decoder_args(parser) - # mbart Transformer - parser.add_argument( - "--activation-fn", - type=str, - default="relu", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - - parser.add_argument( - "--mbart-dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--mbart-attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--mbart-activation-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN.", - ) - - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--layernorm-embedding", - action="store_true", - help="add layernorm to embedding", - ) - parser.add_argument( - "--no-scale-embedding", - action="store_true", - help="if True, dont scale embeddings", - ) - parser.add_argument( - "--load-pretrained-mbart-from", - type=str, - metavar="STR", - help="model to take text encoder decoder weights from (for initialization)", - ) - # parser.add_argument("--finetune-w2v-params", type=str, metavar="STR", - # help="comma-separated param strings to finetune.") - parser.add_argument( - "--finetune-mbart-decoder-params", - type=str, - metavar="STR", - help="comma-separated param strings to finetune.", - ) - parser.add_argument( - "--finetune-mbart-encoder-params", - type=str, - metavar="STR", - help="comma-separated param strings to finetune.", - ) - parser.add_argument( - "--skip-encoder-projection", - action="store_true", - help="skip the projection layer in encoder", - ) - - parser.add_argument( - "--enc-grad-mult", - type=float, - metavar="V", - default=1.0, - help="multiply enc1 and enc2 gradient by V", - ) - parser.add_argument( - "--enc2-along-grad-mult", - type=float, - metavar="V", - default=1.0, - help="multiply enc2 gradient by V if only enc2 
is used", - ) - parser.add_argument( - "--text-input-cost-ratio", - type=float, - default=1.0, - metavar="V", - help="text input cost ratio relative to speech input cost", - ) - parser.add_argument( - "--stack-w2v-mbart-encoder", - action="store_true", - help="stack w2v and mbart encoder", - ) - parser.add_argument( - "--stack-w2v-mbart-nonorm-encoder", - action="store_true", - help="stack w2v and mbart encoder", - ) - parser.add_argument( - "--no-final-norm-decoder", action="store_true", help="no layer norm" - ) - parser.add_argument( - "--drop-w2v-layers", - type=int, - default=0, - metavar="N", - help="drop w2v encoder layers", - ) - - parser.add_argument( - "--share-w2v-text-encoder", - action="store_true", - help="share w2v encoder layers with text encoder", - ) - parser.add_argument( - "--shared-w2v-layers", - type=int, - default=0, - metavar="N", - help="shared encoder layers from w2v encoder", - ) - - @classmethod - def build_encoder(cls, args, task): - _args = copy.deepcopy(args) - _args.dropout = args.mbart_dropout - _args.attention_dropout = args.mbart_attention_dropout - _args.activation_dropout = args.mbart_activation_dropout - _args.max_source_positions = 1024 - enc_emb = nn.Embedding( - len(task.src_dict), _args.encoder_embed_dim, task.src_dict.pad() - ) - text_encoder = TransformerEncoder(_args, task.src_dict, enc_emb) - spch_encoder = Wav2VecEncoderWithAdaptor(args) - if getattr(args, "load_pretrained_mbart_from", None): - text_encoder = checkpoint_utils.load_pretrained_component_from_model( - component=text_encoder, checkpoint=args.load_pretrained_mbart_from - ) - if getattr(args, "stack_w2v_mbart_encoder", False): - assert getattr(args, "share_w2v_text_encoder", False) is False - spch_encoder = StackedWav2VecEncoderWithAdaptor( - spch_encoder.w2v_encoder, - text_encoder.layers, - text_encoder.layer_norm, - spch_encoder.adaptor, - args.drop_w2v_layers, - ) - elif getattr(args, "stack_w2v_mbart_nonorm_encoder", False): - text_encoder.layer_norm = None - spch_encoder = StackedWav2VecEncoderWithAdaptor( - spch_encoder.w2v_encoder, - text_encoder.layers, - text_encoder.layer_norm, - spch_encoder.adaptor, - args.drop_w2v_layers, - ) - elif getattr(args, "share_w2v_text_encoder", False): - spch_encoder = SharedEncoder( - spch_encoder.w2v_encoder, - text_encoder, - spch_encoder.adaptor, - args.shared_w2v_layers, - ) - - for k, p in spch_encoder.named_parameters(): - # Freeze pretrained models by default - if safe_hasattr( - args, "finetune_w2v_params" - ) and XMTransformerModel.finetune_params(args.finetune_w2v_params, k): - p.requires_grad = True - else: - p.requires_grad = False - for k, p in text_encoder.named_parameters(): - # Freeze pretrained models by default - if safe_hasattr( - args, "finetune_mbart_encoder_params" - ) and XMTransformerModel.finetune_params( - args.finetune_mbart_encoder_params, k - ): - p.requires_grad = True - else: - p.requires_grad = False - cross_attentive_loss_before_last_layer = ( - 0 if getattr(args, "attentive_cost_regularization", 0.0) > 0.0 else -1 - ) - encoder = DualInputEncoder( - args, - spch_encoder, - text_encoder, - task.src_dict, - cross_attentive_loss_before_last_layer, - ) - return encoder - - @classmethod - def build_decoder(cls, args, task): - _args = copy.deepcopy(args) - _args.dropout = args.mbart_dropout - _args.attention_dropout = args.mbart_attention_dropout - _args.activation_dropout = args.mbart_activation_dropout - _args.max_target_positions = 1024 - dec_emb = nn.Embedding( - len(task.tgt_dict), _args.encoder_embed_dim, 
task.tgt_dict.pad() - ) - decoder = TransformerDecoder(_args, task.tgt_dict, dec_emb) - if getattr(args, "load_pretrained_mbart_from", None): - decoder = checkpoint_utils.load_pretrained_component_from_model( - component=decoder, checkpoint=args.load_pretrained_mbart_from - ) - if getattr(args, "no_final_norm_decoder", False): - decoder.layer_norm = None - for k, p in decoder.named_parameters(): - # Freeze pretrained models by default - if safe_hasattr( - args, "finetune_mbart_decoder_params" - ) and XMTransformerModel.finetune_params( - args.finetune_mbart_decoder_params, k - ): - p.requires_grad = True - else: - p.requires_grad = False - - compute_cross_attentive_loss = ( - True if getattr(args, "attentive_cost_regularization", 0.0) > 0.0 else False - ) - cross_attentive_loss_without_norm = getattr( - args, "attentive_cost_without_normalize", False - ) - cross_attentive_loss_reverse = ( - False # getattr(args, "attentive_cost_reverse", False) - ) - decoder = TransformerMultiInputDecoder( - dictionary=task.target_dictionary, - spch_decoder=decoder, - text_decoder=decoder, - compute_cross_attentive_loss=compute_cross_attentive_loss, - cross_attentive_loss_with_norm=True - if not cross_attentive_loss_without_norm - else False, - cross_attentive_loss_reverse=cross_attentive_loss_reverse, - ) - return decoder - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted - # (in case there are any new ones) - dualinputxmtransformer_base(args) - - encoder = cls.build_encoder(args, task) - decoder = cls.build_decoder(args, task) - return cls(encoder, decoder) - - -@register_model_architecture("dual_input_xm_transformer", "dualinputxmtransformer_base") -def dualinputxmtransformer_base(args): - # wav2vec encoder - set_default_w2v_encoder_args(args) - set_default_adaptor_args(args) - - # mbart model - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr( - args, "encoder_ffn_embed_dim", 4 * args.encoder_embed_dim - ) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.encoder_layerdrop = getattr(args, "encoder_layerdrop", 0) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4 * 1024) - args.decoder_layers = getattr(args, "decoder_layers", 12) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", True) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - - args.adaptive_input = getattr(args, "adaptive_input", False) - - args.mbart_attention_dropout = getattr(args, "mbart_attention_dropout", 0.0) - args.mbart_activation_dropout = getattr(args, "mbart_activation_dropout", 0.0) - args.mbart_dropout = getattr(args, "mbart_dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", True - 
) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - args.layernorm_embedding = getattr(args, "layernorm_embedding", True) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.pooler_dropout = getattr(args, "pooler_dropout", 0.0) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/noising.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/noising.py deleted file mode 100644 index 2b1cc347203bfbdc9f1cba29e2e36427b7b5be57..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/noising.py +++ /dev/null @@ -1,335 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from fairseq.data import data_utils - - -class WordNoising(object): - """Generate a noisy version of a sentence, without changing words themselves.""" - - def __init__(self, dictionary, bpe_cont_marker="@@", bpe_end_marker=None): - self.dictionary = dictionary - self.bpe_end = None - if bpe_cont_marker: - self.bpe_end = np.array( - [ - not self.dictionary[i].endswith(bpe_cont_marker) - for i in range(len(self.dictionary)) - ] - ) - elif bpe_end_marker: - self.bpe_end = np.array( - [ - self.dictionary[i].endswith(bpe_end_marker) - for i in range(len(self.dictionary)) - ] - ) - - self.get_word_idx = ( - self._get_bpe_word_idx if self.bpe_end is not None else self._get_token_idx - ) - - def noising(self, x, lengths, noising_prob=0.0): - raise NotImplementedError() - - def _get_bpe_word_idx(self, x): - """ - Given a list of BPE tokens, for every index in the tokens list, - return the index of the word grouping that it belongs to. - For example, for input x corresponding to ["how", "are", "y@@", "ou"], - return [[0], [1], [2], [2]]. - """ - # x: (T x B) - bpe_end = self.bpe_end[x] - - if x.size(0) == 1 and x.size(1) == 1: - # Special case when we only have one word in x. If x = [[N]], - # bpe_end is a scalar (bool) instead of a 2-dim array of bools, - # which makes the sum operation below fail. - return np.array([[0]]) - - # do a reduce front sum to generate word ids - word_idx = bpe_end[::-1].cumsum(0)[::-1] - word_idx = word_idx.max(0)[None, :] - word_idx - return word_idx - - def _get_token_idx(self, x): - """ - This is to extend noising functions to be able to apply to non-bpe - tokens, e.g. word or characters. - """ - x = torch.t(x) - word_idx = np.array([range(len(x_i)) for x_i in x]) - return np.transpose(word_idx) - - -class WordDropout(WordNoising): - """Randomly drop input words. If not passing blank_idx (default is None), - then dropped words will be removed. 
Otherwise, it will be replaced by the - blank_idx.""" - - def __init__( - self, - dictionary, - default_dropout_prob=0.1, - bpe_cont_marker="@@", - bpe_end_marker=None, - ): - super().__init__(dictionary, bpe_cont_marker, bpe_end_marker) - self.default_dropout_prob = default_dropout_prob - - def noising(self, x, lengths, dropout_prob=None, blank_idx=None): - if dropout_prob is None: - dropout_prob = self.default_dropout_prob - # x: (T x B), lengths: B - if dropout_prob == 0: - return x, lengths - - assert 0 < dropout_prob < 1 - - # be sure to drop entire words - word_idx = self.get_word_idx(x) - sentences = [] - modified_lengths = [] - for i in range(lengths.size(0)): - # Since dropout probabilities need to apply over non-pad tokens, - # it is not trivial to generate the keep mask without consider - # input lengths; otherwise, this could be done outside the loop - - # We want to drop whole words based on word_idx grouping - num_words = max(word_idx[:, i]) + 1 - - # ith example: [x0, x1, ..., eos, pad, ..., pad] - # We should only generate keep probs for non-EOS tokens. Thus if the - # input sentence ends in EOS, the last word idx is not included in - # the dropout mask generation and we append True to always keep EOS. - # Otherwise, just generate the dropout mask for all word idx - # positions. - has_eos = x[lengths[i] - 1, i] == self.dictionary.eos() - if has_eos: # has eos? - keep = np.random.rand(num_words - 1) >= dropout_prob - keep = np.append(keep, [True]) # keep EOS symbol - else: - keep = np.random.rand(num_words) >= dropout_prob - - words = x[: lengths[i], i].tolist() - - # TODO: speed up the following loop - # drop words from the input according to keep - new_s = [ - w if keep[word_idx[j, i]] else blank_idx for j, w in enumerate(words) - ] - new_s = [w for w in new_s if w is not None] - # we need to have at least one word in the sentence (more than the - # start / end sentence symbols) - if len(new_s) <= 1: - # insert at beginning in case the only token left is EOS - # EOS should be at end of list. - new_s.insert(0, words[np.random.randint(0, len(words))]) - assert len(new_s) >= 1 and ( - not has_eos # Either don't have EOS at end or last token is EOS - or (len(new_s) >= 2 and new_s[-1] == self.dictionary.eos()) - ), "New sentence is invalid." 
- sentences.append(new_s) - modified_lengths.append(len(new_s)) - # re-construct input - modified_lengths = torch.LongTensor(modified_lengths) - modified_x = torch.LongTensor( - modified_lengths.max(), modified_lengths.size(0) - ).fill_(self.dictionary.pad()) - for i in range(modified_lengths.size(0)): - modified_x[: modified_lengths[i], i].copy_(torch.LongTensor(sentences[i])) - - return modified_x, modified_lengths - - -class WordShuffle(WordNoising): - """Shuffle words by no more than k positions.""" - - def __init__( - self, - dictionary, - default_max_shuffle_distance=3, - bpe_cont_marker="@@", - bpe_end_marker=None, - ): - super().__init__(dictionary, bpe_cont_marker, bpe_end_marker) - self.default_max_shuffle_distance = 3 - - def noising(self, x, lengths, max_shuffle_distance=None): - if max_shuffle_distance is None: - max_shuffle_distance = self.default_max_shuffle_distance - # x: (T x B), lengths: B - if max_shuffle_distance == 0: - return x, lengths - - # max_shuffle_distance < 1 will return the same sequence - assert max_shuffle_distance > 1 - - # define noise word scores - noise = np.random.uniform( - 0, - max_shuffle_distance, - size=(x.size(0), x.size(1)), - ) - noise[0] = -1 # do not move start sentence symbol - # be sure to shuffle entire words - word_idx = self.get_word_idx(x) - x2 = x.clone() - for i in range(lengths.size(0)): - length_no_eos = lengths[i] - if x[lengths[i] - 1, i] == self.dictionary.eos(): - length_no_eos = lengths[i] - 1 - # generate a random permutation - scores = word_idx[:length_no_eos, i] + noise[word_idx[:length_no_eos, i], i] - # ensure no reordering inside a word - scores += 1e-6 * np.arange(length_no_eos.item()) - permutation = scores.argsort() - # shuffle words - x2[:length_no_eos, i].copy_( - x2[:length_no_eos, i][torch.from_numpy(permutation)] - ) - return x2, lengths - - -class UnsupervisedMTNoising(WordNoising): - """ - Implements the default configuration for noising in UnsupervisedMT - (github.com/facebookresearch/UnsupervisedMT) - """ - - def __init__( - self, - dictionary, - max_word_shuffle_distance, - word_dropout_prob, - word_blanking_prob, - bpe_cont_marker="@@", - bpe_end_marker=None, - ): - super().__init__(dictionary) - self.max_word_shuffle_distance = max_word_shuffle_distance - self.word_dropout_prob = word_dropout_prob - self.word_blanking_prob = word_blanking_prob - - self.word_dropout = WordDropout( - dictionary=dictionary, - bpe_cont_marker=bpe_cont_marker, - bpe_end_marker=bpe_end_marker, - ) - self.word_shuffle = WordShuffle( - dictionary=dictionary, - bpe_cont_marker=bpe_cont_marker, - bpe_end_marker=bpe_end_marker, - ) - - def noising(self, x, lengths): - # 1. Word Shuffle - noisy_src_tokens, noisy_src_lengths = self.word_shuffle.noising( - x=x, - lengths=lengths, - max_shuffle_distance=self.max_word_shuffle_distance, - ) - # 2. Word Dropout - noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising( - x=noisy_src_tokens, - lengths=noisy_src_lengths, - dropout_prob=self.word_dropout_prob, - ) - # 3. 
Word Blanking - noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising( - x=noisy_src_tokens, - lengths=noisy_src_lengths, - dropout_prob=self.word_blanking_prob, - blank_idx=self.dictionary.unk(), - ) - - return noisy_src_tokens - - -class NoisingDataset(torch.utils.data.Dataset): - def __init__( - self, - src_dataset, - src_dict, - seed, - noiser=None, - noising_class=UnsupervisedMTNoising, - **kwargs - ): - """ - Wrap a :class:`~torch.utils.data.Dataset` and apply noise to the - samples based on the supplied noising configuration. - - Args: - src_dataset (~torch.utils.data.Dataset): dataset to wrap. - to build self.src_dataset -- - a LanguagePairDataset with src dataset as the source dataset and - None as the target dataset. Should NOT have padding so that - src_lengths are accurately calculated by language_pair_dataset - collate function. - We use language_pair_dataset here to encapsulate the tgt_dataset - so we can re-use the LanguagePairDataset collater to format the - batches in the structure that SequenceGenerator expects. - src_dict (~fairseq.data.Dictionary): source dictionary - seed (int): seed to use when generating random noise - noiser (WordNoising): a pre-initialized :class:`WordNoising` - instance. If this is None, a new instance will be created using - *noising_class* and *kwargs*. - noising_class (class, optional): class to use to initialize a - default :class:`WordNoising` instance. - kwargs (dict, optional): arguments to initialize the default - :class:`WordNoising` instance given by *noiser*. - """ - self.src_dataset = src_dataset - self.src_dict = src_dict - self.seed = seed - self.noiser = ( - noiser - if noiser is not None - else noising_class( - dictionary=src_dict, - **kwargs, - ) - ) - self.sizes = src_dataset.sizes - - - def __getitem__(self, index): - """ - Returns a single noisy sample. Multiple samples are fed to the collater - create a noising dataset batch. - """ - src_tokens = self.src_dataset[index] - src_lengths = torch.LongTensor([len(src_tokens)]) - src_tokens = src_tokens.unsqueeze(0) - - # Transpose src tokens to fit expected shape of x in noising function - # (batch size, sequence length) -> (sequence length, batch size) - src_tokens_t = torch.t(src_tokens) - - with data_utils.numpy_seed(self.seed + index): - noisy_src_tokens = self.noiser.noising(src_tokens_t, src_lengths) - - # Transpose back to expected src_tokens format - # (sequence length, 1) -> (1, sequence length) - noisy_src_tokens = torch.t(noisy_src_tokens) - return noisy_src_tokens[0] - - def __len__(self): - """ - The length of the noising dataset is the length of src. 
- """ - return len(self.src_dataset) - - @property - def supports_prefetch(self): - return self.src_dataset.supports_prefetch - - def prefetch(self, indices): - if self.src_dataset.supports_prefetch: - self.src_dataset.prefetch(indices) diff --git a/spaces/OlaWod/FreeVC/utils.py b/spaces/OlaWod/FreeVC/utils.py deleted file mode 100644 index 1bd5b6185af6c9f1c270b8ba345bfc36d059e081..0000000000000000000000000000000000000000 --- a/spaces/OlaWod/FreeVC/utils.py +++ /dev/null @@ -1,305 +0,0 @@ -import os -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch -from torch.nn import functional as F -from commons import sequence_mask - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def get_cmodel(rank): - checkpoint = torch.load('wavlm/WavLM-Large.pt') - cfg = WavLMConfig(checkpoint['cfg']) - cmodel = WavLM(cfg).cuda(rank) - cmodel.load_state_dict(checkpoint['model']) - cmodel.eval() - return cmodel - - -def get_content(cmodel, y): - with torch.no_grad(): - c = cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - -def get_vocoder(rank): - with open("hifigan/config.json", "r") as f: - config = json.load(f) - config = hifigan.AttrDict(config) - vocoder = hifigan.Generator(config) - ckpt = torch.load("hifigan/generator_v1") - vocoder.load_state_dict(ckpt["generator"]) - vocoder.eval() - vocoder.remove_weight_norm() - vocoder.cuda(rank) - return vocoder - - -def transform(mel, height): # 68-92 - #r = np.random.random() - #rate = r * 0.3 + 0.85 # 0.85-1.15 - #height = int(mel.size(-2) * rate) - tgt = torchvision.transforms.functional.resize(mel, (height, mel.size(-1))) - if height >= mel.size(-2): - return tgt[:, :mel.size(-2), :] - else: - silence = tgt[:,-1:,:].repeat(1,mel.size(-2)-height,1) - silence += torch.randn_like(silence) / 10 - return torch.cat((tgt, silence), 1) - - -def stretch(mel, width): # 0.5-2 - return torchvision.transforms.functional.resize(mel, (mel.size(-2), width)) - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, 
global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - 
hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/Omnibus/Video-Diffusion-WebUI/README.md b/spaces/Omnibus/Video-Diffusion-WebUI/README.md deleted file mode 100644 index 269acace59ae1caff969f832b9a89b446ad250d3..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/Video-Diffusion-WebUI/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Video Diffusion WebUI -emoji: 🏃 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.19.0 -app_file: app.py -pinned: false -license: apache-2.0 -tags: -- making-demos -duplicated_from: ArtGAN/Video-Diffusion-WebUI ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/data/coco_keypoint.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/data/coco_keypoint.py deleted file mode 100644 index b4ceb066faf696954244205dc75376b767071217..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/data/coco_keypoint.py +++ /dev/null @@ -1,13 +0,0 @@ -from detectron2.data.detection_utils import create_keypoint_hflip_indices - -from .coco import dataloader - -dataloader.train.dataset.min_keypoints = 1 -dataloader.train.dataset.names = "keypoints_coco_2017_train" -dataloader.test.dataset.names = "keypoints_coco_2017_val" - -dataloader.train.mapper.update( - use_instance_mask=False, - use_keypoint=True, - keypoint_hflip_indices=create_keypoint_hflip_indices(dataloader.train.dataset.names), -) diff --git 
a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/models/mask_rcnn_fpn.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/models/mask_rcnn_fpn.py deleted file mode 100644 index 744d5306f5b0ba4cf508731bd790bad823b520fa..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/models/mask_rcnn_fpn.py +++ /dev/null @@ -1,93 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling.meta_arch import GeneralizedRCNN -from detectron2.modeling.anchor_generator import DefaultAnchorGenerator -from detectron2.modeling.backbone.fpn import LastLevelMaxPool -from detectron2.modeling.backbone import BasicStem, FPN, ResNet -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.matcher import Matcher -from detectron2.modeling.poolers import ROIPooler -from detectron2.modeling.proposal_generator import RPN, StandardRPNHead -from detectron2.modeling.roi_heads import ( - StandardROIHeads, - FastRCNNOutputLayers, - MaskRCNNConvUpsampleHead, - FastRCNNConvFCHead, -) - -model = L(GeneralizedRCNN)( - backbone=L(FPN)( - bottom_up=L(ResNet)( - stem=L(BasicStem)(in_channels=3, out_channels=64, norm="FrozenBN"), - stages=L(ResNet.make_default_stages)( - depth=50, - stride_in_1x1=True, - norm="FrozenBN", - ), - out_features=["res2", "res3", "res4", "res5"], - ), - in_features="${.bottom_up.out_features}", - out_channels=256, - top_block=L(LastLevelMaxPool)(), - ), - proposal_generator=L(RPN)( - in_features=["p2", "p3", "p4", "p5", "p6"], - head=L(StandardRPNHead)(in_channels=256, num_anchors=3), - anchor_generator=L(DefaultAnchorGenerator)( - sizes=[[32], [64], [128], [256], [512]], - aspect_ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64], - offset=0.0, - ), - anchor_matcher=L(Matcher)( - thresholds=[0.3, 0.7], labels=[0, -1, 1], allow_low_quality_matches=True - ), - box2box_transform=L(Box2BoxTransform)(weights=[1.0, 1.0, 1.0, 1.0]), - batch_size_per_image=256, - positive_fraction=0.5, - pre_nms_topk=(2000, 1000), - post_nms_topk=(1000, 1000), - nms_thresh=0.7, - ), - roi_heads=L(StandardROIHeads)( - num_classes=80, - batch_size_per_image=512, - positive_fraction=0.25, - proposal_matcher=L(Matcher)( - thresholds=[0.5], labels=[0, 1], allow_low_quality_matches=False - ), - box_in_features=["p2", "p3", "p4", "p5"], - box_pooler=L(ROIPooler)( - output_size=7, - scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32), - sampling_ratio=0, - pooler_type="ROIAlignV2", - ), - box_head=L(FastRCNNConvFCHead)( - input_shape=ShapeSpec(channels=256, height=7, width=7), - conv_dims=[], - fc_dims=[1024, 1024], - ), - box_predictor=L(FastRCNNOutputLayers)( - input_shape=ShapeSpec(channels=1024), - test_score_thresh=0.05, - box2box_transform=L(Box2BoxTransform)(weights=(10, 10, 5, 5)), - num_classes="${..num_classes}", - ), - mask_in_features=["p2", "p3", "p4", "p5"], - mask_pooler=L(ROIPooler)( - output_size=14, - scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32), - sampling_ratio=0, - pooler_type="ROIAlignV2", - ), - mask_head=L(MaskRCNNConvUpsampleHead)( - input_shape=ShapeSpec(channels=256, width=14, height=14), - num_classes="${..num_classes}", - conv_dims=[256, 256, 256, 256, 256], - ), - ), - pixel_mean=[103.530, 116.280, 123.675], - pixel_std=[1.0, 1.0, 1.0], - input_format="BGR", -) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/demo/demo.py 
b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/demo/demo.py deleted file mode 100644 index 4baa8767f7b299f18253aadb15a9bac5b9cc07fc..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/demo/demo.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import argparse -import glob -import multiprocessing as mp -import numpy as np -import os -import tempfile -import time -import warnings -import cv2 -import tqdm - -from detectron2.config import get_cfg -from detectron2.data.detection_utils import read_image -from detectron2.utils.logger import setup_logger - -from predictor import VisualizationDemo - -# constants -WINDOW_NAME = "COCO detections" - - -def setup_cfg(args): - # load config from file and command-line arguments - cfg = get_cfg() - # To use demo for Panoptic-DeepLab, please uncomment the following two lines. - # from detectron2.projects.panoptic_deeplab import add_panoptic_deeplab_config # noqa - # add_panoptic_deeplab_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - # Set score_threshold for builtin models - cfg.MODEL.RETINANET.SCORE_THRESH_TEST = args.confidence_threshold - cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args.confidence_threshold - cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = args.confidence_threshold - cfg.freeze() - return cfg - - -def get_parser(): - parser = argparse.ArgumentParser(description="Detectron2 demo for builtin configs") - parser.add_argument( - "--config-file", - default="configs/quick_schedules/mask_rcnn_R_50_FPN_inference_acc_test.yaml", - metavar="FILE", - help="path to config file", - ) - parser.add_argument("--webcam", action="store_true", help="Take inputs from webcam.") - parser.add_argument("--video-input", help="Path to video file.") - parser.add_argument( - "--input", - nargs="+", - help="A list of space separated input images; " - "or a single glob pattern such as 'directory/*.jpg'", - ) - parser.add_argument( - "--output", - help="A file or directory to save output visualizations. 
" - "If not given, will show output in an OpenCV window.", - ) - - parser.add_argument( - "--confidence-threshold", - type=float, - default=0.5, - help="Minimum score for instance predictions to be shown", - ) - parser.add_argument( - "--opts", - help="Modify config options using the command-line 'KEY VALUE' pairs", - default=[], - nargs=argparse.REMAINDER, - ) - return parser - - -def test_opencv_video_format(codec, file_ext): - with tempfile.TemporaryDirectory(prefix="video_format_test") as dir: - filename = os.path.join(dir, "test_file" + file_ext) - writer = cv2.VideoWriter( - filename=filename, - fourcc=cv2.VideoWriter_fourcc(*codec), - fps=float(30), - frameSize=(10, 10), - isColor=True, - ) - [writer.write(np.zeros((10, 10, 3), np.uint8)) for _ in range(30)] - writer.release() - if os.path.isfile(filename): - return True - return False - - -if __name__ == "__main__": - mp.set_start_method("spawn", force=True) - args = get_parser().parse_args() - setup_logger(name="fvcore") - logger = setup_logger() - logger.info("Arguments: " + str(args)) - - cfg = setup_cfg(args) - - demo = VisualizationDemo(cfg) - - if args.input: - if len(args.input) == 1: - args.input = glob.glob(os.path.expanduser(args.input[0])) - assert args.input, "The input path(s) was not found" - for path in tqdm.tqdm(args.input, disable=not args.output): - # use PIL, to be consistent with evaluation - img = read_image(path, format="BGR") - start_time = time.time() - predictions, visualized_output = demo.run_on_image(img) - logger.info( - "{}: {} in {:.2f}s".format( - path, - "detected {} instances".format(len(predictions["instances"])) - if "instances" in predictions - else "finished", - time.time() - start_time, - ) - ) - - if args.output: - if os.path.isdir(args.output): - assert os.path.isdir(args.output), args.output - out_filename = os.path.join(args.output, os.path.basename(path)) - else: - assert len(args.input) == 1, "Please specify a directory with args.output" - out_filename = args.output - visualized_output.save(out_filename) - else: - cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL) - cv2.imshow(WINDOW_NAME, visualized_output.get_image()[:, :, ::-1]) - if cv2.waitKey(0) == 27: - break # esc to quit - elif args.webcam: - assert args.input is None, "Cannot have both --input and --webcam!" - assert args.output is None, "output not yet supported with --webcam!" 
- cam = cv2.VideoCapture(0) - for vis in tqdm.tqdm(demo.run_on_video(cam)): - cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL) - cv2.imshow(WINDOW_NAME, vis) - if cv2.waitKey(1) == 27: - break # esc to quit - cam.release() - cv2.destroyAllWindows() - elif args.video_input: - video = cv2.VideoCapture(args.video_input) - width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH)) - height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)) - frames_per_second = video.get(cv2.CAP_PROP_FPS) - num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT)) - basename = os.path.basename(args.video_input) - codec, file_ext = ( - ("x264", ".mkv") if test_opencv_video_format("x264", ".mkv") else ("mp4v", ".mp4") - ) - if codec == ".mp4v": - warnings.warn("x264 codec not available, switching to mp4v") - if args.output: - if os.path.isdir(args.output): - output_fname = os.path.join(args.output, basename) - output_fname = os.path.splitext(output_fname)[0] + file_ext - else: - output_fname = args.output - assert not os.path.isfile(output_fname), output_fname - output_file = cv2.VideoWriter( - filename=output_fname, - # some installation of opencv may not support x264 (due to its license), - # you can try other format (e.g. MPEG) - fourcc=cv2.VideoWriter_fourcc(*codec), - fps=float(frames_per_second), - frameSize=(width, height), - isColor=True, - ) - assert os.path.isfile(args.video_input) - for vis_frame in tqdm.tqdm(demo.run_on_video(video), total=num_frames): - if args.output: - output_file.write(vis_frame) - else: - cv2.namedWindow(basename, cv2.WINDOW_NORMAL) - cv2.imshow(basename, vis_frame) - if cv2.waitKey(1) == 27: - break # esc to quit - video.release() - if args.output: - output_file.release() - else: - cv2.destroyAllWindows() diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/__init__.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/__init__.py deleted file mode 100644 index 81ac34183d164e666d42b5481e7f7e83ad15c183..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -import logging - - -logger = logging.getLogger("base") - - -def create_model(opt): - model = opt["model"] - if model == "image_base": - from models.kernel_encoding.image_base_model import ImageBaseModel as M - else: - raise NotImplementedError("Model [{:s}] not recognized.".format(model)) - m = M(opt) - logger.info("Model [{:s}] is created.".format(m.__class__.__name__)) - return m diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/control.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/control.go deleted file mode 100644 index 69039befe7589f80e6a2e859e724aaf30bc62971..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/control.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/parser.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/parser.go deleted file mode 100644 index 379fef982b440ee7d238efeacf0ef848e7ac122b..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/parser.go and /dev/null differ diff --git a/spaces/Pixeled/dogcat/app.py b/spaces/Pixeled/dogcat/app.py deleted file mode 100644 index f394165c0f651ead3c64ecbcb8cde550e15bff31..0000000000000000000000000000000000000000 --- 
a/spaces/Pixeled/dogcat/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from fastai.vision.all import * -import gradio as gr - -learn = load_learner("model.pkl") - -categories = ("Cat","Dog") - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -image = gr.inputs.Image(shape=(192,192)) -label = gr.outputs.Label() - -examples = ["cat.jpg","dog.jpg"] - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples) -intf.launch(inline=False) \ No newline at end of file diff --git a/spaces/Proveedy/dreambooth-trainingv15/train_dreambooth.py b/spaces/Proveedy/dreambooth-trainingv15/train_dreambooth.py deleted file mode 100644 index f4ff135e549f0d6c72f733092f3df817cb178e01..0000000000000000000000000000000000000000 --- a/spaces/Proveedy/dreambooth-trainingv15/train_dreambooth.py +++ /dev/null @@ -1,889 +0,0 @@ -import argparse -import itertools -import math -import os -from pathlib import Path -from typing import Optional -import subprocess -import sys -import gc -import random - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch.utils.data import Dataset - -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from diffusers.utils.import_utils import is_xformers_available -from diffusers.optimization import get_scheduler -from huggingface_hub import HfFolder, Repository, whoami -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - - -logger = get_logger(__name__) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - #required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - #required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - #required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default="", - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If not have enough images, additional images will be" - " sampled with class_prompt." 
- ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution" - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. 
Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - - parser.add_argument( - "--save_n_steps", - type=int, - default=1, - help=("Save the model every n global_steps"), - ) - - - parser.add_argument( - "--save_starting_step", - type=int, - default=1, - help=("The step from which it starts saving intermediary checkpoints"), - ) - - parser.add_argument( - "--stop_text_encoder_training", - type=int, - default=1000000, - help=("The step at which the text_encoder is no longer trained"), - ) - - - parser.add_argument( - "--image_captions_filename", - action="store_true", - help="Get captions from filename", - ) - - - parser.add_argument( - "--dump_only_text_encoder", - action="store_true", - default=False, - help="Dump only text encoder", - ) - - parser.add_argument( - "--train_only_unet", - action="store_true", - default=False, - help="Train only the unet", - ) - - parser.add_argument( - "--cache_latents", - action="store_true", - default=False, - help="Train only the unet", - ) - - parser.add_argument( - "--Session_dir", - type=str, - default="", - help="Current session directory", - ) - - - - - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - #if args.instance_data_dir is None: - # raise ValueError("You must specify a train data directory.") - - #if args.with_prior_preservation: - # if args.class_data_dir is None: - # raise ValueError("You must specify a data directory for class images.") - # if args.class_prompt is None: - # raise ValueError("You must specify prompt for class images.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. 
- """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - args, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - self.image_captions_filename = None - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if args.image_captions_filename: - self.image_captions_filename = True - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - random.shuffle(self.class_images_path) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - path = self.instance_images_path[index % self.num_instance_images] - instance_image = Image.open(path) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - - instance_prompt = self.instance_prompt - - if self.image_captions_filename: - filename = Path(path).stem - pt=''.join([i for i in filename if not i.isdigit()]) - pt=pt.replace("_"," ") - pt=pt.replace("(","") - pt=pt.replace(")","") - pt=pt.replace("-","") - instance_prompt = pt - sys.stdout.write(" " +instance_prompt+" ") - sys.stdout.flush() - - - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - instance_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - return example - - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." 
- - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - -class LatentsDataset(Dataset): - def __init__(self, latents_cache, text_encoder_cache): - self.latents_cache = latents_cache - self.text_encoder_cache = text_encoder_cache - - def __len__(self): - return len(self.latents_cache) - - def __getitem__(self, index): - return self.latents_cache[index], self.text_encoder_cache[index] - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - -def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict: - """ - Starts from base starting dict and then adds the remaining key values from updater replacing the values from - the first starting/base dict with the second updater dict. - - For later: how does d = {**d1, **d2} replace collision? - - :param starting_dict: - :param updater_dict: - :return: - """ - new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict - new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict - return new_dict - -def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace: - """ - - ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x - :param args1: - :param args2: - :return: - """ - # - the merged args - # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}. - merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2)) - args = argparse.Namespace(**merged_key_values_for_namespace) - return args - -def run_training(args_imported): - args_default = parse_args() - args = merge_args(args_default, args_imported) - print(args) - logging_dir = Path(args.output_dir, args.logging_dir) - i=args.save_starting_step - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with="tensorboard", - logging_dir=logging_dir, - ) - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1: - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." 
- ) - - if args.seed is not None: - set_seed(args.seed) - - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, torch_dtype=torch_dtype - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - with torch.autocast("cuda"): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg") - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - repo = Repository(args.output_dir, clone_from=repo_name) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load models and create wrapper for stable diffusion - if args.train_only_unet: - if os.path.exists(str(args.output_dir+"/text_encoder_trained")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained") - elif os.path.exists(str(args.output_dir+"/text_encoder")): - text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - else: - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae") - unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet") - if is_xformers_available(): - try: - print("Enabling memory efficient attention with xformers...") - unet.enable_xformers_memory_efficient_attention() - except Exception as e: - logger.warning( - f"Could not enable memory efficient attention. 
Make sure xformers is installed correctly and a GPU is available: {e}" - ) - vae.requires_grad_(False) - if not args.train_text_encoder: - text_encoder.requires_grad_(False) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - if args.train_text_encoder: - text_encoder.gradient_checkpointing_enable() - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters() - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler") - - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - args=args, - ) - - def collate_fn(examples): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn - ) - - # Scheduler and math around the number of training steps. 
- overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - weight_dtype = torch.float32 - if args.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif args.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu. - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - - if args.cache_latents: - latents_cache = [] - text_encoder_cache = [] - for batch in tqdm(train_dataloader, desc="Caching latents"): - with torch.no_grad(): - batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype) - batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True) - latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist) - if args.train_text_encoder: - text_encoder_cache.append(batch["input_ids"]) - else: - text_encoder_cache.append(text_encoder(batch["input_ids"])[0]) - train_dataset = LatentsDataset(latents_cache, text_encoder_cache) - train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True) - - del vae - #if not args.train_text_encoder: - # del text_encoder - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - def bar(prg): - br='|'+'█' * prg + ' ' * (25-prg)+'|' - return br - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) - global_step = 0 - - for epoch in range(args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder.train() - for step, batch in enumerate(train_dataloader): - with accelerator.accumulate(unet): - # Convert images to latent space - with torch.no_grad(): - if args.cache_latents: - latents_dist = batch[0][0] - else: - latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist - latents = latents_dist.sample() * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - if(args.cache_latents): - if args.train_text_encoder: - encoder_hidden_states = text_encoder(batch[0][1])[0] - else: - encoder_hidden_states = batch[0][1] - else: - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean() - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - fll=round((global_step*100)/args.max_train_steps) - fll=round(fll/4) - pr=bar(fll) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - progress_bar.set_description_str("Progress:"+pr) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30: - if accelerator.is_main_process: - print(" " +" Freezing the text_encoder ..."+" ") - frz_dir=args.output_dir + "/text_encoder_frozen" - if os.path.exists(frz_dir): - subprocess.call('rm -r '+ frz_dir, shell=True) - os.mkdir(frz_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(frz_dir) - - if args.save_n_steps >= 200: - if global_step < args.max_train_steps and global_step+1==i: - ckpt_name = "_step_" + str(global_step+1) - save_dir = Path(args.output_dir+ckpt_name) - save_dir=str(save_dir) - save_dir=save_dir.replace(" ", "_") - if not os.path.exists(save_dir): - os.mkdir(save_dir) - inst=save_dir[16:] - inst=inst.replace(" ", "_") - print(" SAVING CHECKPOINT: "+args.Session_dir+"/"+inst+".ckpt") - # Create the pipeline using the trained modules and save it. - if accelerator.is_main_process: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(save_dir) - frz_dir=args.output_dir + "/text_encoder_frozen" - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('rm -r '+save_dir+'/text_encoder/*.*', shell=True) - subprocess.call('cp -f '+frz_dir +'/*.* '+ save_dir+'/text_encoder', shell=True) - chkpth=args.Session_dir+"/"+inst+".ckpt" - subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True) - subprocess.call('rm -r '+ save_dir, shell=True) - i=i+args.save_n_steps - - accelerator.wait_for_everyone() - - # Create the pipeline using using the trained modules and save it. 
- if accelerator.is_main_process: - if args.dump_only_text_encoder: - txt_dir=args.output_dir + "/text_encoder_trained" - if not os.path.exists(txt_dir): - os.mkdir(txt_dir) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.text_encoder.save_pretrained(txt_dir) - - elif args.train_only_unet: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(args.output_dir) - txt_dir=args.output_dir + "/text_encoder_trained" - subprocess.call('rm -r '+txt_dir, shell=True) - - else: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - frz_dir=args.output_dir + "/text_encoder_frozen" - pipeline.save_pretrained(args.output_dir) - if args.train_text_encoder and os.path.exists(frz_dir): - subprocess.call('mv -f '+frz_dir +'/*.* '+ args.output_dir+'/text_encoder', shell=True) - subprocess.call('rm -r '+ frz_dir, shell=True) - - if args.push_to_hub: - repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True) - - accelerator.end_training() - del pipeline - torch.cuda.empty_cache() - gc.collect() -if __name__ == "__main__": - pass - #main() - diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/open_images_helper.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/open_images_helper.py deleted file mode 100644 index 8feb7c6e705fc165d2983303192aaa88f579b243..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/open_images_helper.py +++ /dev/null @@ -1,379 +0,0 @@ -open_images_unify_categories_for_coco = { - '/m/03bt1vf': '/m/01g317', - '/m/04yx4': '/m/01g317', - '/m/05r655': '/m/01g317', - '/m/01bl7v': '/m/01g317', - '/m/0cnyhnx': '/m/01xq0k1', - '/m/01226z': '/m/018xm', - '/m/05ctyq': '/m/018xm', - '/m/058qzx': '/m/04ctx', - '/m/06pcq': '/m/0l515', - '/m/03m3pdh': '/m/02crq1', - '/m/046dlr': '/m/01x3z', - '/m/0h8mzrc': '/m/01x3z', -} - - -top_300_classes_plus_coco_compatibility = [ - ('Man', 1060962), - ('Clothing', 986610), - ('Tree', 748162), - ('Woman', 611896), - ('Person', 610294), - ('Human face', 442948), - ('Girl', 175399), - ('Building', 162147), - ('Car', 159135), - ('Plant', 155704), - ('Human body', 137073), - ('Flower', 133128), - ('Window', 127485), - ('Human arm', 118380), - ('House', 114365), - ('Wheel', 111684), - ('Suit', 99054), - ('Human hair', 98089), - ('Human head', 92763), - ('Chair', 88624), - ('Boy', 79849), - ('Table', 73699), - ('Jeans', 57200), - ('Tire', 55725), - ('Skyscraper', 53321), - ('Food', 52400), - ('Footwear', 50335), - ('Dress', 50236), - ('Human leg', 47124), - ('Toy', 46636), - ('Tower', 45605), - ('Boat', 43486), - ('Land vehicle', 40541), - ('Bicycle wheel', 34646), - ('Palm tree', 33729), - ('Fashion accessory', 32914), - ('Glasses', 31940), - ('Bicycle', 31409), - ('Furniture', 30656), - ('Sculpture', 29643), - ('Bottle', 27558), - ('Dog', 26980), - ('Snack', 26796), - ('Human hand', 26664), - ('Bird', 25791), - ('Book', 25415), - ('Guitar', 24386), - ('Jacket', 23998), - ('Poster', 22192), - ('Dessert', 21284), - ('Baked goods', 20657), - ('Drink', 19754), - ('Flag', 
18588), - ('Houseplant', 18205), - ('Tableware', 17613), - ('Airplane', 17218), - ('Door', 17195), - ('Sports uniform', 17068), - ('Shelf', 16865), - ('Drum', 16612), - ('Vehicle', 16542), - ('Microphone', 15269), - ('Street light', 14957), - ('Cat', 14879), - ('Fruit', 13684), - ('Fast food', 13536), - ('Animal', 12932), - ('Vegetable', 12534), - ('Train', 12358), - ('Horse', 11948), - ('Flowerpot', 11728), - ('Motorcycle', 11621), - ('Fish', 11517), - ('Desk', 11405), - ('Helmet', 10996), - ('Truck', 10915), - ('Bus', 10695), - ('Hat', 10532), - ('Auto part', 10488), - ('Musical instrument', 10303), - ('Sunglasses', 10207), - ('Picture frame', 10096), - ('Sports equipment', 10015), - ('Shorts', 9999), - ('Wine glass', 9632), - ('Duck', 9242), - ('Wine', 9032), - ('Rose', 8781), - ('Tie', 8693), - ('Butterfly', 8436), - ('Beer', 7978), - ('Cabinetry', 7956), - ('Laptop', 7907), - ('Insect', 7497), - ('Goggles', 7363), - ('Shirt', 7098), - ('Dairy Product', 7021), - ('Marine invertebrates', 7014), - ('Cattle', 7006), - ('Trousers', 6903), - ('Van', 6843), - ('Billboard', 6777), - ('Balloon', 6367), - ('Human nose', 6103), - ('Tent', 6073), - ('Camera', 6014), - ('Doll', 6002), - ('Coat', 5951), - ('Mobile phone', 5758), - ('Swimwear', 5729), - ('Strawberry', 5691), - ('Stairs', 5643), - ('Goose', 5599), - ('Umbrella', 5536), - ('Cake', 5508), - ('Sun hat', 5475), - ('Bench', 5310), - ('Bookcase', 5163), - ('Bee', 5140), - ('Computer monitor', 5078), - ('Hiking equipment', 4983), - ('Office building', 4981), - ('Coffee cup', 4748), - ('Curtain', 4685), - ('Plate', 4651), - ('Box', 4621), - ('Tomato', 4595), - ('Coffee table', 4529), - ('Office supplies', 4473), - ('Maple', 4416), - ('Muffin', 4365), - ('Cocktail', 4234), - ('Castle', 4197), - ('Couch', 4134), - ('Pumpkin', 3983), - ('Computer keyboard', 3960), - ('Human mouth', 3926), - ('Christmas tree', 3893), - ('Mushroom', 3883), - ('Swimming pool', 3809), - ('Pastry', 3799), - ('Lavender (Plant)', 3769), - ('Football helmet', 3732), - ('Bread', 3648), - ('Traffic sign', 3628), - ('Common sunflower', 3597), - ('Television', 3550), - ('Bed', 3525), - ('Cookie', 3485), - ('Fountain', 3484), - ('Paddle', 3447), - ('Bicycle helmet', 3429), - ('Porch', 3420), - ('Deer', 3387), - ('Fedora', 3339), - ('Canoe', 3338), - ('Carnivore', 3266), - ('Bowl', 3202), - ('Human eye', 3166), - ('Ball', 3118), - ('Pillow', 3077), - ('Salad', 3061), - ('Beetle', 3060), - ('Orange', 3050), - ('Drawer', 2958), - ('Platter', 2937), - ('Elephant', 2921), - ('Seafood', 2921), - ('Monkey', 2915), - ('Countertop', 2879), - ('Watercraft', 2831), - ('Helicopter', 2805), - ('Kitchen appliance', 2797), - ('Personal flotation device', 2781), - ('Swan', 2739), - ('Lamp', 2711), - ('Boot', 2695), - ('Bronze sculpture', 2693), - ('Chicken', 2677), - ('Taxi', 2643), - ('Juice', 2615), - ('Cowboy hat', 2604), - ('Apple', 2600), - ('Tin can', 2590), - ('Necklace', 2564), - ('Ice cream', 2560), - ('Human beard', 2539), - ('Coin', 2536), - ('Candle', 2515), - ('Cart', 2512), - ('High heels', 2441), - ('Weapon', 2433), - ('Handbag', 2406), - ('Penguin', 2396), - ('Rifle', 2352), - ('Violin', 2336), - ('Skull', 2304), - ('Lantern', 2285), - ('Scarf', 2269), - ('Saucer', 2225), - ('Sheep', 2215), - ('Vase', 2189), - ('Lily', 2180), - ('Mug', 2154), - ('Parrot', 2140), - ('Human ear', 2137), - ('Sandal', 2115), - ('Lizard', 2100), - ('Kitchen & dining room table', 2063), - ('Spider', 1977), - ('Coffee', 1974), - ('Goat', 1926), - ('Squirrel', 1922), - ('Cello', 1913), - ('Sushi', 
1881), - ('Tortoise', 1876), - ('Pizza', 1870), - ('Studio couch', 1864), - ('Barrel', 1862), - ('Cosmetics', 1841), - ('Moths and butterflies', 1841), - ('Convenience store', 1817), - ('Watch', 1792), - ('Home appliance', 1786), - ('Harbor seal', 1780), - ('Luggage and bags', 1756), - ('Vehicle registration plate', 1754), - ('Shrimp', 1751), - ('Jellyfish', 1730), - ('French fries', 1723), - ('Egg (Food)', 1698), - ('Football', 1697), - ('Musical keyboard', 1683), - ('Falcon', 1674), - ('Candy', 1660), - ('Medical equipment', 1654), - ('Eagle', 1651), - ('Dinosaur', 1634), - ('Surfboard', 1630), - ('Tank', 1628), - ('Grape', 1624), - ('Lion', 1624), - ('Owl', 1622), - ('Ski', 1613), - ('Waste container', 1606), - ('Frog', 1591), - ('Sparrow', 1585), - ('Rabbit', 1581), - ('Pen', 1546), - ('Sea lion', 1537), - ('Spoon', 1521), - ('Sink', 1512), - ('Teddy bear', 1507), - ('Bull', 1495), - ('Sofa bed', 1490), - ('Dragonfly', 1479), - ('Brassiere', 1478), - ('Chest of drawers', 1472), - ('Aircraft', 1466), - ('Human foot', 1463), - ('Pig', 1455), - ('Fork', 1454), - ('Antelope', 1438), - ('Tripod', 1427), - ('Tool', 1424), - ('Cheese', 1422), - ('Lemon', 1397), - ('Hamburger', 1393), - ('Dolphin', 1390), - ('Mirror', 1390), - ('Marine mammal', 1387), - ('Giraffe', 1385), - ('Snake', 1368), - ('Gondola', 1364), - ('Wheelchair', 1360), - ('Piano', 1358), - ('Cupboard', 1348), - ('Banana', 1345), - ('Trumpet', 1335), - ('Lighthouse', 1333), - ('Invertebrate', 1317), - ('Carrot', 1268), - ('Sock', 1260), - ('Tiger', 1241), - ('Camel', 1224), - ('Parachute', 1224), - ('Bathroom accessory', 1223), - ('Earrings', 1221), - ('Headphones', 1218), - ('Skirt', 1198), - ('Skateboard', 1190), - ('Sandwich', 1148), - ('Saxophone', 1141), - ('Goldfish', 1136), - ('Stool', 1104), - ('Traffic light', 1097), - ('Shellfish', 1081), - ('Backpack', 1079), - ('Sea turtle', 1078), - ('Cucumber', 1075), - ('Tea', 1051), - ('Toilet', 1047), - ('Roller skates', 1040), - ('Mule', 1039), - ('Bust', 1031), - ('Broccoli', 1030), - ('Crab', 1020), - ('Oyster', 1019), - ('Cannon', 1012), - ('Zebra', 1012), - ('French horn', 1008), - ('Grapefruit', 998), - ('Whiteboard', 997), - ('Zucchini', 997), - ('Crocodile', 992), - - ('Clock', 960), - ('Wall clock', 958), - - ('Doughnut', 869), - ('Snail', 868), - - ('Baseball glove', 859), - - ('Panda', 830), - ('Tennis racket', 830), - - ('Pear', 652), - - ('Bagel', 617), - ('Oven', 616), - ('Ladybug', 615), - ('Shark', 615), - ('Polar bear', 614), - ('Ostrich', 609), - - ('Hot dog', 473), - ('Microwave oven', 467), - ('Fire hydrant', 20), - ('Stop sign', 20), - ('Parking meter', 20), - ('Bear', 20), - ('Flying disc', 20), - ('Snowboard', 20), - ('Tennis ball', 20), - ('Kite', 20), - ('Baseball bat', 20), - ('Kitchen knife', 20), - ('Knife', 20), - ('Submarine sandwich', 20), - ('Computer mouse', 20), - ('Remote control', 20), - ('Toaster', 20), - ('Sink', 20), - ('Refrigerator', 20), - ('Alarm clock', 20), - ('Wall clock', 20), - ('Scissors', 20), - ('Hair dryer', 20), - ('Toothbrush', 20), - ('Suitcase', 20) -] diff --git a/spaces/Qrstud/gpt/README.md b/spaces/Qrstud/gpt/README.md deleted file mode 100644 index e60635d86c3ec00b3dccddee02e30724705775d7..0000000000000000000000000000000000000000 --- a/spaces/Qrstud/gpt/README.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: Chatgpt Python -emoji: 📚 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: gpl-3.0 -python_version: 3.8.9 -tags: -- chatgpt -- gpt3 -- gpt-3 -- chatbot -- 
chat -- bot -- chatgpt3 -- gpt -duplicated_from: Damonho316/gpt ---- - -# 🤖 ChatGPT-Python 🐍 -[![Build Status](https://img.shields.io/github/stars/dylanmeca/ChatGPT-Python.svg)](https://github.com/dylanmeca/ChatGPT-Python) -[![License](https://img.shields.io/github/license/dylanmeca/ChatGPT-Python.svg)](https://github.com/dylanmeca/ChatGPT-Python/blob/main/LICENSE) -[![dylanmeca](https://img.shields.io/badge/author-dylanmeca-green.svg)](https://github.com/dylanmeca) -[![Python](https://img.shields.io/badge/language-Python%20-yellow.svg)](https://www.python.org) - -ChatGPT-Python is a software that allows you to use ChatGPT from python through a web interface, using the OpenAI API. \ No newline at end of file diff --git a/spaces/RMXK/RVC_HFF/Fixes/local_fixes.py b/spaces/RMXK/RVC_HFF/Fixes/local_fixes.py deleted file mode 100644 index 8a418076eee6f65fe06eb0f607061796b839c1ee..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/Fixes/local_fixes.py +++ /dev/null @@ -1,136 +0,0 @@ -import os -import sys -import time -import shutil -import requests -import zipfile - -def insert_new_line(file_name, line_to_find, text_to_insert): - lines = [] - with open(file_name, 'r', encoding='utf-8') as read_obj: - lines = read_obj.readlines() - already_exists = False - with open(file_name + '.tmp', 'w', encoding='utf-8') as write_obj: - for i in range(len(lines)): - write_obj.write(lines[i]) - if lines[i].strip() == line_to_find: - # If next line exists and starts with sys.path.append, skip - if i+1 < len(lines) and lines[i+1].strip().startswith("sys.path.append"): - print('It was already fixed! Skip adding a line...') - already_exists = True - break - else: - write_obj.write(text_to_insert + '\n') - # If no existing sys.path.append line was found, replace the original file - if not already_exists: - os.replace(file_name + '.tmp', file_name) - return True - else: - # If existing line was found, delete temporary file - os.remove(file_name + '.tmp') - return False - -def replace_in_file(file_name, old_text, new_text): - with open(file_name, 'r', encoding='utf-8') as file: - file_contents = file.read() - - if old_text in file_contents: - file_contents = file_contents.replace(old_text, new_text) - with open(file_name, 'w', encoding='utf-8') as file: - file.write(file_contents) - return True - - return False - -if __name__ == "__main__": - current_path = os.getcwd() - file_name = os.path.join(current_path, "infer", "modules", "train", "extract", "extract_f0_print.py") - line_to_find = 'import numpy as np, logging' - text_to_insert = "sys.path.append(r'" + current_path + "')" - - - success_1 = insert_new_line(file_name, line_to_find, text_to_insert) - if success_1: - print('The first operation was successful!') - else: - print('He skipped the first operation because it was already fixed!') - - file_name = 'infer-web.py' - old_text = 'with gr.Blocks(theme=gr.themes.Soft()) as app:' - new_text = 'with gr.Blocks() as app:' - - success_2 = replace_in_file(file_name, old_text, new_text) - if success_2: - print('The second operation was successful!') - else: - print('The second operation was omitted because it was already fixed!') - - print('Local corrections successful! You should now be able to infer and train locally in Applio RVC Fork.') - - time.sleep(5) - -def find_torchcrepe_directory(directory): - """ - Recursively searches for the topmost folder named 'torchcrepe' within a directory. - Returns the path of the directory found or None if none is found. 
- """ - for root, dirs, files in os.walk(directory): - if 'torchcrepe' in dirs: - return os.path.join(root, 'torchcrepe') - return None - -def download_and_extract_torchcrepe(): - url = 'https://github.com/maxrmorrison/torchcrepe/archive/refs/heads/master.zip' - temp_dir = 'temp_torchcrepe' - destination_dir = os.getcwd() - - try: - torchcrepe_dir_path = os.path.join(destination_dir, 'torchcrepe') - - if os.path.exists(torchcrepe_dir_path): - print("Skipping the torchcrepe download. The folder already exists.") - return - - # Download the file - print("Starting torchcrepe download...") - response = requests.get(url) - - # Raise an error if the GET request was unsuccessful - response.raise_for_status() - print("Download completed.") - - # Save the downloaded file - zip_file_path = os.path.join(temp_dir, 'master.zip') - os.makedirs(temp_dir, exist_ok=True) - with open(zip_file_path, 'wb') as file: - file.write(response.content) - print(f"Zip file saved to {zip_file_path}") - - # Extract the zip file - print("Extracting content...") - with zipfile.ZipFile(zip_file_path, 'r') as zip_file: - zip_file.extractall(temp_dir) - print("Extraction completed.") - - # Locate the torchcrepe folder and move it to the destination directory - torchcrepe_dir = find_torchcrepe_directory(temp_dir) - if torchcrepe_dir: - shutil.move(torchcrepe_dir, destination_dir) - print(f"Moved the torchcrepe directory to {destination_dir}!") - else: - print("The torchcrepe directory could not be located.") - - except Exception as e: - print("Torchcrepe not successfully downloaded", e) - - # Clean up temporary directory - if os.path.exists(temp_dir): - shutil.rmtree(temp_dir) - -# Run the function -download_and_extract_torchcrepe() - -temp_dir = 'temp_torchcrepe' - -if os.path.exists(temp_dir): - shutil.rmtree(temp_dir) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/inspect.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/inspect.py deleted file mode 100644 index a4e3599306e554cb62528cc6e807f319658d86bf..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/inspect.py +++ /dev/null @@ -1,97 +0,0 @@ -import logging -from optparse import Values -from typing import Any, Dict, List - -from pip._vendor.packaging.markers import default_environment -from pip._vendor.rich import print_json - -from pip import __version__ -from pip._internal.cli import cmdoptions -from pip._internal.cli.req_command import Command -from pip._internal.cli.status_codes import SUCCESS -from pip._internal.metadata import BaseDistribution, get_environment -from pip._internal.utils.compat import stdlib_pkgs -from pip._internal.utils.urls import path_to_url - -logger = logging.getLogger(__name__) - - -class InspectCommand(Command): - """ - Inspect the content of a Python environment and produce a report in JSON format. - """ - - ignore_require_venv = True - usage = """ - %prog [options]""" - - def add_options(self) -> None: - self.cmd_opts.add_option( - "--local", - action="store_true", - default=False, - help=( - "If in a virtualenv that has global access, do not list " - "globally-installed packages." 
- ), - ) - self.cmd_opts.add_option( - "--user", - dest="user", - action="store_true", - default=False, - help="Only output packages installed in user-site.", - ) - self.cmd_opts.add_option(cmdoptions.list_path()) - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - logger.warning( - "pip inspect is currently an experimental command. " - "The output format may change in a future release without prior warning." - ) - - cmdoptions.check_list_path_option(options) - dists = get_environment(options.path).iter_installed_distributions( - local_only=options.local, - user_only=options.user, - skip=set(stdlib_pkgs), - ) - output = { - "version": "0", - "pip_version": __version__, - "installed": [self._dist_to_dict(dist) for dist in dists], - "environment": default_environment(), - # TODO tags? scheme? - } - print_json(data=output) - return SUCCESS - - def _dist_to_dict(self, dist: BaseDistribution) -> Dict[str, Any]: - res: Dict[str, Any] = { - "metadata": dist.metadata_dict, - "metadata_location": dist.info_location, - } - # direct_url. Note that we don't have download_info (as in the installation - # report) since it is not recorded in installed metadata. - direct_url = dist.direct_url - if direct_url is not None: - res["direct_url"] = direct_url.to_dict() - else: - # Emulate direct_url for legacy editable installs. - editable_project_location = dist.editable_project_location - if editable_project_location is not None: - res["direct_url"] = { - "url": path_to_url(editable_project_location), - "dir_info": { - "editable": True, - }, - } - # installer - installer = dist.installer - if dist.installer: - res["installer"] = installer - # requested - if dist.installed_with_dist_info: - res["requested"] = dist.requested - return res diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/wheel.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/wheel.py deleted file mode 100644 index 1afbd562c65832b985e0e5f350876cd723aa3b72..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/wheel.py +++ /dev/null @@ -1,203 +0,0 @@ -import logging -import os -import shutil -from optparse import Values -from typing import List - -from pip._internal.cache import WheelCache -from pip._internal.cli import cmdoptions -from pip._internal.cli.req_command import RequirementCommand, with_cleanup -from pip._internal.cli.status_codes import SUCCESS -from pip._internal.exceptions import CommandError -from pip._internal.operations.build.build_tracker import get_build_tracker -from pip._internal.req.req_install import ( - InstallRequirement, - LegacySetupPyOptionsCheckMode, - check_legacy_setup_py_options, -) -from pip._internal.utils.deprecation import deprecated -from pip._internal.utils.misc import ensure_dir, normalize_path -from pip._internal.utils.temp_dir import TempDirectory -from pip._internal.wheel_builder import build, should_build_for_wheel_command - -logger = logging.getLogger(__name__) - - -class WheelCommand(RequirementCommand): - """ - Build Wheel archives for your requirements and dependencies. - - Wheel is a built-package format, and offers the advantage of not - recompiling your software during every install. 
For more details, see the - wheel docs: https://wheel.readthedocs.io/en/latest/ - - 'pip wheel' uses the build system interface as described here: - https://pip.pypa.io/en/stable/reference/build-system/ - - """ - - usage = """ - %prog [options] ... - %prog [options] -r ... - %prog [options] [-e] ... - %prog [options] [-e] ... - %prog [options] ...""" - - def add_options(self) -> None: - - self.cmd_opts.add_option( - "-w", - "--wheel-dir", - dest="wheel_dir", - metavar="dir", - default=os.curdir, - help=( - "Build wheels into , where the default is the " - "current working directory." - ), - ) - self.cmd_opts.add_option(cmdoptions.no_binary()) - self.cmd_opts.add_option(cmdoptions.only_binary()) - self.cmd_opts.add_option(cmdoptions.prefer_binary()) - self.cmd_opts.add_option(cmdoptions.no_build_isolation()) - self.cmd_opts.add_option(cmdoptions.use_pep517()) - self.cmd_opts.add_option(cmdoptions.no_use_pep517()) - self.cmd_opts.add_option(cmdoptions.check_build_deps()) - self.cmd_opts.add_option(cmdoptions.constraints()) - self.cmd_opts.add_option(cmdoptions.editable()) - self.cmd_opts.add_option(cmdoptions.requirements()) - self.cmd_opts.add_option(cmdoptions.src()) - self.cmd_opts.add_option(cmdoptions.ignore_requires_python()) - self.cmd_opts.add_option(cmdoptions.no_deps()) - self.cmd_opts.add_option(cmdoptions.progress_bar()) - - self.cmd_opts.add_option( - "--no-verify", - dest="no_verify", - action="store_true", - default=False, - help="Don't verify if built wheel is valid.", - ) - - self.cmd_opts.add_option(cmdoptions.config_settings()) - self.cmd_opts.add_option(cmdoptions.build_options()) - self.cmd_opts.add_option(cmdoptions.global_options()) - - self.cmd_opts.add_option( - "--pre", - action="store_true", - default=False, - help=( - "Include pre-release and development versions. By default, " - "pip only finds stable versions." - ), - ) - - self.cmd_opts.add_option(cmdoptions.require_hashes()) - - index_opts = cmdoptions.make_option_group( - cmdoptions.index_group, - self.parser, - ) - - self.parser.insert_option_group(0, index_opts) - self.parser.insert_option_group(0, self.cmd_opts) - - @with_cleanup - def run(self, options: Values, args: List[str]) -> int: - session = self.get_default_session(options) - - finder = self._build_package_finder(options, session) - wheel_cache = WheelCache(options.cache_dir, options.format_control) - - options.wheel_dir = normalize_path(options.wheel_dir) - ensure_dir(options.wheel_dir) - - build_tracker = self.enter_context(get_build_tracker()) - - directory = TempDirectory( - delete=not options.no_clean, - kind="wheel", - globally_managed=True, - ) - - reqs = self.get_requirements(args, options, finder, session) - check_legacy_setup_py_options( - options, reqs, LegacySetupPyOptionsCheckMode.WHEEL - ) - - if "no-binary-enable-wheel-cache" in options.features_enabled: - # TODO: remove format_control from WheelCache when the deprecation cycle - # is over - wheel_cache = WheelCache(options.cache_dir) - else: - if options.format_control.no_binary: - deprecated( - reason=( - "--no-binary currently disables reading from " - "the cache of locally built wheels. In the future " - "--no-binary will not influence the wheel cache." 
- ), - replacement="to use the --no-cache-dir option", - feature_flag="no-binary-enable-wheel-cache", - issue=11453, - gone_in="23.1", - ) - wheel_cache = WheelCache(options.cache_dir, options.format_control) - - preparer = self.make_requirement_preparer( - temp_build_dir=directory, - options=options, - build_tracker=build_tracker, - session=session, - finder=finder, - download_dir=options.wheel_dir, - use_user_site=False, - verbosity=self.verbosity, - ) - - resolver = self.make_resolver( - preparer=preparer, - finder=finder, - options=options, - wheel_cache=wheel_cache, - ignore_requires_python=options.ignore_requires_python, - use_pep517=options.use_pep517, - ) - - self.trace_basic_info(finder) - - requirement_set = resolver.resolve(reqs, check_supported_wheels=True) - - reqs_to_build: List[InstallRequirement] = [] - for req in requirement_set.requirements.values(): - if req.is_wheel: - preparer.save_linked_requirement(req) - elif should_build_for_wheel_command(req): - reqs_to_build.append(req) - - # build wheels - build_successes, build_failures = build( - reqs_to_build, - wheel_cache=wheel_cache, - verify=(not options.no_verify), - build_options=options.build_options or [], - global_options=options.global_options or [], - ) - for req in build_successes: - assert req.link and req.link.is_wheel - assert req.local_file_path - # copy from cache to target directory - try: - shutil.copy(req.local_file_path, options.wheel_dir) - except OSError as e: - logger.warning( - "Building wheel for %s failed: %s", - req.name, - e, - ) - build_failures.append(req) - if len(build_failures) != 0: - raise CommandError("Failed to build one or more wheels") - - return SUCCESS diff --git a/spaces/Realcat/image-matching-webui/third_party/r2d2/datasets/imgfolder.py b/spaces/Realcat/image-matching-webui/third_party/r2d2/datasets/imgfolder.py deleted file mode 100644 index 40168f00e8ad177f3d94f75578dba2e640944c4c..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/r2d2/datasets/imgfolder.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright 2019-present NAVER Corp. 
-# CC BY-NC-SA 3.0 -# Available only for non-commercial use - -import os, pdb - -from .dataset import Dataset -from .pair_dataset import SyntheticPairDataset - - -class ImgFolder(Dataset): - """load all images in a folder (no recursion).""" - - def __init__(self, root, imgs=None, exts=(".jpg", ".png", ".ppm")): - Dataset.__init__(self) - self.root = root - self.imgs = imgs or [f for f in os.listdir(root) if f.endswith(exts)] - self.nimg = len(self.imgs) - - def get_key(self, idx): - return self.imgs[idx] diff --git a/spaces/Ricecake123/RVC-demo/docs/Changelog_CN.md b/spaces/Ricecake123/RVC-demo/docs/Changelog_CN.md deleted file mode 100644 index eb67ba574ef10fb980292baec0cd0f9e7582f851..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/docs/Changelog_CN.md +++ /dev/null @@ -1,80 +0,0 @@ -### 20230618更新 -- v2增加32k和48k两个新预训练模型 -- 修复非f0模型推理报错 -- 对于超过一小时的训练集的索引建立环节,自动kmeans缩小特征处理以加速索引训练、加入和查询 -- 附送一个人声转吉他玩具仓库 -- 数据处理剔除异常值切片 -- onnx导出选项卡 - -失败的实验: -- ~~特征检索增加时序维度:寄,没啥效果~~ -- ~~特征检索增加PCAR降维可选项:寄,数据大用kmeans缩小数据量,数据小降维操作耗时比省下的匹配耗时还多~~ -- ~~支持onnx推理(附带仅推理的小压缩包):寄,生成nsf还是需要pytorch~~ -- ~~训练时在音高、gender、eq、噪声等方面对输入进行随机增强:寄,没啥效果~~ - -todolist: -- 接入小型声码器调研 -- 训练集音高识别支持crepe -- crepe的精度支持和RVC-config同步 -- 对接F0编辑器 - - -### 20230528更新 -- 增加v2的jupyter notebook,韩文changelog,增加一些环境依赖 -- 增加呼吸、清辅音、齿音保护模式 -- 支持crepe-full推理 -- UVR5人声伴奏分离加上3个去延迟模型和MDX-Net去混响模型,增加HP3人声提取模型 -- 索引名称增加版本和实验名称 -- 人声伴奏分离、推理批量导出增加音频导出格式选项 -- 废弃32k模型的训练 - -### 20230513更新 -- 清除一键包内部老版本runtime内残留的lib.infer_pack和uvr5_pack -- 修复训练集预处理伪多进程的bug -- 增加harvest识别音高可选通过中值滤波削弱哑音现象,可调整中值滤波半径 -- 导出音频增加后处理重采样 -- 训练n_cpu进程数从"仅调整f0提取"改为"调整数据预处理和f0提取" -- 自动检测logs文件夹下的index路径,提供下拉列表功能 -- tab页增加"常见问题解答"(也可参考github-rvc-wiki) -- 相同路径的输入音频推理增加了音高缓存(用途:使用harvest音高提取,整个pipeline会经历漫长且重复的音高提取过程,如果不使用缓存,实验不同音色、索引、音高中值滤波半径参数的用户在第一次测试后的等待结果会非常痛苦) - -### 20230514更新 -- 音量包络对齐输入混合(可以缓解“输入静音输出小幅度噪声”的问题。如果输入音频背景底噪大则不建议开启,默认不开启(值为1可视为不开启)) -- 支持按照指定频率保存提取的小模型(假如你想尝试不同epoch下的推理效果,但是不想保存所有大checkpoint并且每次都要ckpt手工处理提取小模型,这项功能会非常实用) -- 通过设置环境变量解决服务端开了系统全局代理导致浏览器连接错误的问题 -- 支持v2预训练模型(目前只公开了40k版本进行测试,另外2个采样率还没有训练完全) -- 推理前限制超过1的过大音量 -- 微调数据预处理参数 - - -### 20230409更新 -- 修正训练参数,提升显卡平均利用率,A100最高从25%提升至90%左右,V100:50%->90%左右,2060S:60%->85%左右,P40:25%->95%左右,训练速度显著提升 -- 修正参数:总batch_size改为每张卡的batch_size -- 修正total_epoch:最大限制100解锁至1000;默认10提升至默认20 -- 修复ckpt提取识别是否带音高错误导致推理异常的问题 -- 修复分布式训练每个rank都保存一次ckpt的问题 -- 特征提取进行nan特征过滤 -- 修复静音输入输出随机辅音or噪声的问题(老版模型需要重做训练集重训) - -### 20230416更新 -- 新增本地实时变声迷你GUI,双击go-realtime-gui.bat启动 -- 训练推理均对<50Hz的频段进行滤波过滤 -- 训练推理音高提取pyworld最低音高从默认80下降至50,50-80hz间的男声低音不会哑 -- WebUI支持根据系统区域变更语言(现支持en_US,ja_JP,zh_CN,zh_HK,zh_SG,zh_TW,不支持的默认en_US) -- 修正部分显卡识别(例如V100-16G识别失败,P4识别失败) - -### 20230428更新 -- 升级faiss索引设置,速度更快,质量更高 -- 取消total_npy依赖,后续分享模型不再需要填写total_npy -- 解锁16系限制。4G显存GPU给到4G的推理设置。 -- 修复部分音频格式下UVR5人声伴奏分离的bug -- 实时变声迷你gui增加对非40k与不懈怠音高模型的支持 - -### 后续计划: -功能: -- 支持多人训练选项卡(至多4人) - -底模: -- 收集呼吸wav加入训练集修正呼吸变声电音的问题 -- 我们正在训练增加了歌声训练集的底模,未来会公开 - diff --git a/spaces/Ritori/TTS_Yui/hifi-gan/README.md b/spaces/Ritori/TTS_Yui/hifi-gan/README.md deleted file mode 100644 index 4c63381f7c41874fe97c13f40b92d6bb9413e751..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/hifi-gan/README.md +++ /dev/null @@ -1,105 +0,0 @@ -# HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis - -### Jungil Kong, Jaehyeon Kim, Jaekyoung Bae - -In our [paper](https://arxiv.org/abs/2010.05646), -we proposed HiFi-GAN: a GAN-based model capable of generating high fidelity speech efficiently.
-We provide our implementation and pretrained models as open source in this repository. - -**Abstract :** -Several recent work on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms. -Although such methods improve the sampling efficiency and memory usage, -their sample quality has not yet reached that of autoregressive and flow-based generative models. -In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis. -As speech audio consists of sinusoidal signals with various periods, -we demonstrate that modeling periodic patterns of an audio is crucial for enhancing sample quality. -A subjective human evaluation (mean opinion score, MOS) of a single speaker dataset indicates that our proposed method -demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than -real-time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen -speakers and end-to-end speech synthesis. Finally, a small footprint version of HiFi-GAN generates samples 13.4 times -faster than real-time on CPU with comparable quality to an autoregressive counterpart. - -Visit our [demo website](https://jik876.github.io/hifi-gan-demo/) for audio samples. - - -## Pre-requisites -1. Python >= 3.6 -2. Clone this repository. -3. Install python requirements. Please refer [requirements.txt](requirements.txt) -4. Download and extract the [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/). -And move all wav files to `LJSpeech-1.1/wavs` - - -## Training -``` -python train.py --config config_v1.json -``` -To train V2 or V3 Generator, replace `config_v1.json` with `config_v2.json` or `config_v3.json`.
-Checkpoints and a copy of the configuration file are saved in the `cp_hifigan` directory by default.
-You can change the path by adding the `--checkpoint_path` option. - -Validation loss during training with the V1 generator.
-![validation loss](./validation_loss.png) - -## Pretrained Model -You can also use pretrained models we provide.
-[Download pretrained models](https://drive.google.com/drive/folders/1-eEYTB5Av9jNql0WGBlRoi-WH2J7bp5Y?usp=sharing)
-Details of each folder are as follows:
-
-|Folder Name|Generator|Dataset|Fine-Tuned|
-|------|---|---|---|
-|LJ_V1|V1|LJSpeech|No|
-|LJ_V2|V2|LJSpeech|No|
-|LJ_V3|V3|LJSpeech|No|
-|LJ_FT_T2_V1|V1|LJSpeech|Yes ([Tacotron2](https://github.com/NVIDIA/tacotron2))|
-|LJ_FT_T2_V2|V2|LJSpeech|Yes ([Tacotron2](https://github.com/NVIDIA/tacotron2))|
-|LJ_FT_T2_V3|V3|LJSpeech|Yes ([Tacotron2](https://github.com/NVIDIA/tacotron2))|
-|VCTK_V1|V1|VCTK|No|
-|VCTK_V2|V2|VCTK|No|
-|VCTK_V3|V3|VCTK|No|
-|UNIVERSAL_V1|V1|Universal|No|
-
-We provide the universal model with discriminator weights that can be used as a base for transfer learning to other datasets.
-
-## Fine-Tuning
-1. Generate mel-spectrograms in numpy format using [Tacotron2](https://github.com/NVIDIA/tacotron2) with teacher-forcing.
-The file name of the generated mel-spectrogram should match the audio file and the extension should be `.npy`.
-Example: - ``` - Audio File : LJ001-0001.wav - Mel-Spectrogram File : LJ001-0001.npy - ``` -2. Create `ft_dataset` folder and copy the generated mel-spectrogram files into it.
-3. Run the following command. - ``` - python train.py --fine_tuning True --config config_v1.json - ``` - For other command line options, please refer to the training section. - - -## Inference from wav file -1. Make `test_files` directory and copy wav files into the directory. -2. Run the following command. - ``` - python inference.py --checkpoint_file [generator checkpoint file path] - ``` -Generated wav files are saved in `generated_files` by default.
-You can change the path by adding the `--output_dir` option. - - -## Inference for end-to-end speech synthesis -1. Make a `test_mel_files` directory and copy the generated mel-spectrogram files into the directory.
-You can generate mel-spectrograms using [Tacotron2](https://github.com/NVIDIA/tacotron2), -[Glow-TTS](https://github.com/jaywalnut310/glow-tts) and so forth. -2. Run the following command. - ``` - python inference_e2e.py --checkpoint_file [generator checkpoint file path] - ``` -Generated wav files are saved in `generated_files_from_mel` by default.
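The fine-tuning steps above require each mel-spectrogram `.npy` file to carry the same name as its source wav, and end-to-end inference reads such files from `test_mel_files`. A minimal bookkeeping sketch is shown below; the `mels` iterable and the array layout it yields are illustrative assumptions, not part of this repository:

```python
import numpy as np
from pathlib import Path

def save_mels_for_hifigan(mels, wav_paths, out_dir="ft_dataset"):
    """Save each mel-spectrogram as .npy, named after the stem of its source wav."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for mel, wav in zip(mels, wav_paths):
        # e.g. LJ001-0001.wav -> ft_dataset/LJ001-0001.npy
        np.save(out / (Path(wav).stem + ".npy"), np.asarray(mel))

# For end-to-end inference the same helper could target test_mel_files instead:
# save_mels_for_hifigan(mels, wav_paths, out_dir="test_mel_files")
```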
-You can change the path by adding `--output_dir` option. - - -## Acknowledgements -We referred to [WaveGlow](https://github.com/NVIDIA/waveglow), [MelGAN](https://github.com/descriptinc/melgan-neurips) -and [Tacotron2](https://github.com/NVIDIA/tacotron2) to implement this. - diff --git a/spaces/RoAr777/fer/app.py b/spaces/RoAr777/fer/app.py deleted file mode 100644 index 6fc412b5fa7f6f469ab5316880c8ff527a0f908e..0000000000000000000000000000000000000000 --- a/spaces/RoAr777/fer/app.py +++ /dev/null @@ -1,48 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Created on Sun Dec 25 08:38:00 2022 - -@author: ROSHAN -""" - -import tensorflow as tf -import gradio as gr -import numpy as np -import cv2 -from PIL import Image as im -from matplotlib import pyplot as plt -cls=['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral'] -model = tf.keras.models.load_model("56fer.h5") -face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml') -def show(img): - img=img[:, :, ::-1].copy() - - gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - faces = face_cascade.detectMultiScale(gray, 1.5, 1) - r=[] - x=faces[0][0] - y=faces[0][1] - w=faces[0][2] - h=faces[0][3] - cv2.rectangle(img,(x,y),(x+w,y+h),(255,255,0),2) - r.append(img) - sharp_kernel = np.array([[0, -1, 0], - [-1, 5, -1], - [0, -1, 0]]) - sharp_img = cv2.filter2D(src=gray, ddepth=-1, kernel=sharp_kernel) - - crop_img = sharp_img[y:y+h, x:x+w] - - npa=np.array(crop_img)/255.0 - predictions = model.predict(np.resize(npa,(48,48)).reshape(-1,48,48,1)) - score =predictions[0] - score=tf.nn.softmax(predictions[0]) - plt.figure() - confidences = {cls[i]: float(score[i]) for i in range(len(cls))} - return confidences -demo = gr.Interface( - fn=show, - inputs="image", - outputs=gr.outputs.Label(num_top_classes=7), -) -demo.launch() \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/hungarian_assigner.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/hungarian_assigner.py deleted file mode 100644 index e10cc14afac4ddfcb9395c1a250ece1fbfe3263c..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/hungarian_assigner.py +++ /dev/null @@ -1,145 +0,0 @@ -import torch - -from ..builder import BBOX_ASSIGNERS -from ..match_costs import build_match_cost -from ..transforms import bbox_cxcywh_to_xyxy -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - -try: - from scipy.optimize import linear_sum_assignment -except ImportError: - linear_sum_assignment = None - - -@BBOX_ASSIGNERS.register_module() -class HungarianAssigner(BaseAssigner): - """Computes one-to-one matching between predictions and ground truth. - - This class computes an assignment between the targets and the predictions - based on the costs. The costs are weighted sum of three components: - classification cost, regression L1 cost and regression iou cost. The - targets don't include the no_object, so generally there are more - predictions than targets. After the one-to-one matching, the un-matched - are treated as backgrounds. Thus each query prediction will be assigned - with `0` or a positive integer indicating the ground truth index: - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - cls_weight (int | float, optional): The scale factor for classification - cost. Default 1.0. 
- bbox_weight (int | float, optional): The scale factor for regression - L1 cost. Default 1.0. - iou_weight (int | float, optional): The scale factor for regression - iou cost. Default 1.0. - iou_calculator (dict | optional): The config for the iou calculation. - Default type `BboxOverlaps2D`. - iou_mode (str | optional): "iou" (intersection over union), "iof" - (intersection over foreground), or "giou" (generalized - intersection over union). Default "giou". - """ - - def __init__(self, - cls_cost=dict(type='ClassificationCost', weight=1.), - reg_cost=dict(type='BBoxL1Cost', weight=1.0), - iou_cost=dict(type='IoUCost', iou_mode='giou', weight=1.0)): - self.cls_cost = build_match_cost(cls_cost) - self.reg_cost = build_match_cost(reg_cost) - self.iou_cost = build_match_cost(iou_cost) - - def assign(self, - bbox_pred, - cls_pred, - gt_bboxes, - gt_labels, - img_meta, - gt_bboxes_ignore=None, - eps=1e-7): - """Computes one-to-one matching based on the weighted costs. - - This method assign each query prediction to a ground truth or - background. The `assigned_gt_inds` with -1 means don't care, - 0 means negative sample, and positive number is the index (1-based) - of assigned gt. - The assignment is done in the following steps, the order matters. - - 1. assign every prediction to -1 - 2. compute the weighted costs - 3. do Hungarian matching on CPU based on the costs - 4. assign all to 0 (background) first, then for each matched pair - between predictions and gts, treat this prediction as foreground - and assign the corresponding gt index (plus 1) to it. - - Args: - bbox_pred (Tensor): Predicted boxes with normalized coordinates - (cx, cy, w, h), which are all in range [0, 1]. Shape - [num_query, 4]. - cls_pred (Tensor): Predicted classification logits, shape - [num_query, num_class]. - gt_bboxes (Tensor): Ground truth boxes with unnormalized - coordinates (x1, y1, x2, y2). Shape [num_gt, 4]. - gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). - img_meta (dict): Meta information for current image. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`. Default None. - eps (int | float, optional): A value added to the denominator for - numerical stability. Default 1e-7. - - Returns: - :obj:`AssignResult`: The assigned result. - """ - assert gt_bboxes_ignore is None, \ - 'Only case when gt_bboxes_ignore is None is supported.' - num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0) - - # 1. assign -1 by default - assigned_gt_inds = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - assigned_labels = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - if num_gts == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - img_h, img_w, _ = img_meta['img_shape'] - factor = gt_bboxes.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0) - - # 2. compute the weighted costs - # classification and bboxcost. - cls_cost = self.cls_cost(cls_pred, gt_labels) - # regression L1 cost - normalize_gt_bboxes = gt_bboxes / factor - reg_cost = self.reg_cost(bbox_pred, normalize_gt_bboxes) - # regression iou cost, defaultly giou is used in official DETR. - bboxes = bbox_cxcywh_to_xyxy(bbox_pred) * factor - iou_cost = self.iou_cost(bboxes, gt_bboxes) - # weighted sum of above three costs - cost = cls_cost + reg_cost + iou_cost - - # 3. 
do Hungarian matching on CPU using linear_sum_assignment - cost = cost.detach().cpu() - if linear_sum_assignment is None: - raise ImportError('Please run "pip install scipy" ' - 'to install scipy first.') - matched_row_inds, matched_col_inds = linear_sum_assignment(cost) - matched_row_inds = torch.from_numpy(matched_row_inds).to( - bbox_pred.device) - matched_col_inds = torch.from_numpy(matched_col_inds).to( - bbox_pred.device) - - # 4. assign backgrounds and foregrounds - # assign all indices to backgrounds first - assigned_gt_inds[:] = 0 - # assign foregrounds based on matching results - assigned_gt_inds[matched_row_inds] = matched_col_inds + 1 - assigned_labels[matched_row_inds] = gt_labels[matched_col_inds] - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_sampler.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_sampler.py deleted file mode 100644 index a802a74fd6c3610d9ae178e6201f47423eca7ad1..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_sampler.py +++ /dev/null @@ -1,177 +0,0 @@ -from typing import List - -import torch -from torch import nn as nn - -from annotator.uniformer.mmcv.runner import force_fp32 -from .furthest_point_sample import (furthest_point_sample, - furthest_point_sample_with_dist) - - -def calc_square_dist(point_feat_a, point_feat_b, norm=True): - """Calculating square distance between a and b. - - Args: - point_feat_a (Tensor): (B, N, C) Feature vector of each point. - point_feat_b (Tensor): (B, M, C) Feature vector of each point. - norm (Bool, optional): Whether to normalize the distance. - Default: True. - - Returns: - Tensor: (B, N, M) Distance between each pair points. - """ - num_channel = point_feat_a.shape[-1] - # [bs, n, 1] - a_square = torch.sum(point_feat_a.unsqueeze(dim=2).pow(2), dim=-1) - # [bs, 1, m] - b_square = torch.sum(point_feat_b.unsqueeze(dim=1).pow(2), dim=-1) - - corr_matrix = torch.matmul(point_feat_a, point_feat_b.transpose(1, 2)) - - dist = a_square + b_square - 2 * corr_matrix - if norm: - dist = torch.sqrt(dist) / num_channel - return dist - - -def get_sampler_cls(sampler_type): - """Get the type and mode of points sampler. - - Args: - sampler_type (str): The type of points sampler. - The valid value are "D-FPS", "F-FPS", or "FS". - - Returns: - class: Points sampler type. - """ - sampler_mappings = { - 'D-FPS': DFPSSampler, - 'F-FPS': FFPSSampler, - 'FS': FSSampler, - } - try: - return sampler_mappings[sampler_type] - except KeyError: - raise KeyError( - f'Supported `sampler_type` are {sampler_mappings.keys()}, but got \ - {sampler_type}') - - -class PointsSampler(nn.Module): - """Points sampling. - - Args: - num_point (list[int]): Number of sample points. - fps_mod_list (list[str], optional): Type of FPS method, valid mod - ['F-FPS', 'D-FPS', 'FS'], Default: ['D-FPS']. - F-FPS: using feature distances for FPS. - D-FPS: using Euclidean distances of points for FPS. - FS: using F-FPS and D-FPS simultaneously. - fps_sample_range_list (list[int], optional): - Range of points to apply FPS. Default: [-1]. - """ - - def __init__(self, - num_point: List[int], - fps_mod_list: List[str] = ['D-FPS'], - fps_sample_range_list: List[int] = [-1]): - super().__init__() - # FPS would be applied to different fps_mod in the list, - # so the length of the num_point should be equal to - # fps_mod_list and fps_sample_range_list. 
- assert len(num_point) == len(fps_mod_list) == len( - fps_sample_range_list) - self.num_point = num_point - self.fps_sample_range_list = fps_sample_range_list - self.samplers = nn.ModuleList() - for fps_mod in fps_mod_list: - self.samplers.append(get_sampler_cls(fps_mod)()) - self.fp16_enabled = False - - @force_fp32() - def forward(self, points_xyz, features): - """ - Args: - points_xyz (Tensor): (B, N, 3) xyz coordinates of the features. - features (Tensor): (B, C, N) Descriptors of the features. - - Returns: - Tensor: (B, npoint, sample_num) Indices of sampled points. - """ - indices = [] - last_fps_end_index = 0 - - for fps_sample_range, sampler, npoint in zip( - self.fps_sample_range_list, self.samplers, self.num_point): - assert fps_sample_range < points_xyz.shape[1] - - if fps_sample_range == -1: - sample_points_xyz = points_xyz[:, last_fps_end_index:] - if features is not None: - sample_features = features[:, :, last_fps_end_index:] - else: - sample_features = None - else: - sample_points_xyz = \ - points_xyz[:, last_fps_end_index:fps_sample_range] - if features is not None: - sample_features = features[:, :, last_fps_end_index: - fps_sample_range] - else: - sample_features = None - - fps_idx = sampler(sample_points_xyz.contiguous(), sample_features, - npoint) - - indices.append(fps_idx + last_fps_end_index) - last_fps_end_index += fps_sample_range - indices = torch.cat(indices, dim=1) - - return indices - - -class DFPSSampler(nn.Module): - """Using Euclidean distances of points for FPS.""" - - def __init__(self): - super().__init__() - - def forward(self, points, features, npoint): - """Sampling points with D-FPS.""" - fps_idx = furthest_point_sample(points.contiguous(), npoint) - return fps_idx - - -class FFPSSampler(nn.Module): - """Using feature distances for FPS.""" - - def __init__(self): - super().__init__() - - def forward(self, points, features, npoint): - """Sampling points with F-FPS.""" - assert features is not None, \ - 'feature input to FFPS_Sampler should not be None' - features_for_fps = torch.cat([points, features.transpose(1, 2)], dim=2) - features_dist = calc_square_dist( - features_for_fps, features_for_fps, norm=False) - fps_idx = furthest_point_sample_with_dist(features_dist, npoint) - return fps_idx - - -class FSSampler(nn.Module): - """Using F-FPS and D-FPS simultaneously.""" - - def __init__(self): - super().__init__() - - def forward(self, points, features, npoint): - """Sampling points with FS_Sampling.""" - assert features is not None, \ - 'feature input to FS_Sampler should not be None' - ffps_sampler = FFPSSampler() - dfps_sampler = DFPSSampler() - fps_idx_ffps = ffps_sampler(points, features, npoint) - fps_idx_dfps = dfps_sampler(points, features, npoint) - fps_idx = torch.cat([fps_idx_ffps, fps_idx_dfps], dim=1) - return fps_idx diff --git a/spaces/Ron0420/EfficientNetV2_Deepfakes_Video_Detector/README.md b/spaces/Ron0420/EfficientNetV2_Deepfakes_Video_Detector/README.md deleted file mode 100644 index 60c605b88ac603544946c4e6ef07ac4a97c85b27..0000000000000000000000000000000000000000 --- a/spaces/Ron0420/EfficientNetV2_Deepfakes_Video_Detector/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Deepfakes_Video_Detector -emoji: 🔥 -colorFrom: blue -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, 
gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/SantoshKumar/SD-H5-AR-VR-IOT/README.md b/spaces/SantoshKumar/SD-H5-AR-VR-IOT/README.md deleted file mode 100644 index cf3f8d98b20b52d747b638f3351abd6d208f489c..0000000000000000000000000000000000000000 --- a/spaces/SantoshKumar/SD-H5-AR-VR-IOT/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: SD H5 AR VR IOT -emoji: 💩 -colorFrom: purple -colorTo: purple -sdk: static -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets.py b/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets.py deleted file mode 100644 index 5da3948c2f2e9edcc3cdac49bdf9f738e403de40..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets.py +++ /dev/null @@ -1,123 +0,0 @@ -import layers -import torch -import torch.nn.functional as F -from torch import nn - -from . import spec_utils - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 16) - self.stg1_high_band_net = BaseASPPNet(2, 16) - - self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(8, 16) - - self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(16, 32) - - self.out = nn.Conv2d(32, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(16, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(16, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = 
self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/SeyedAli/Image-Similarity/src/model/similarity_interface.py b/spaces/SeyedAli/Image-Similarity/src/model/similarity_interface.py deleted file mode 100644 index 318cdd972c2d2f758bd9b3dfdbb92cc9dfb28bee..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Image-Similarity/src/model/similarity_interface.py +++ /dev/null @@ -1,3 +0,0 @@ -class SimilarityInterface: - def extract_feature(img): - return [] \ No newline at end of file diff --git a/spaces/Snowling/White-box-Cartoonization/wbc/guided_filter.py b/spaces/Snowling/White-box-Cartoonization/wbc/guided_filter.py deleted file mode 100644 index fd019d145efc7f308cd96de90f4e7b648f6820b4..0000000000000000000000000000000000000000 --- a/spaces/Snowling/White-box-Cartoonization/wbc/guided_filter.py +++ /dev/null @@ -1,87 +0,0 @@ -import tensorflow as tf -import numpy as np - - - - -def tf_box_filter(x, r): - k_size = int(2*r+1) - ch = x.get_shape().as_list()[-1] - weight = 1/(k_size**2) - box_kernel = weight*np.ones((k_size, k_size, ch, 1)) - box_kernel = np.array(box_kernel).astype(np.float32) - output = tf.nn.depthwise_conv2d(x, box_kernel, [1, 1, 1, 1], 'SAME') - return output - - - -def guided_filter(x, y, r, eps=1e-2): - - x_shape = tf.shape(x) - #y_shape = tf.shape(y) - - N = tf_box_filter(tf.ones((1, x_shape[1], x_shape[2], 1), dtype=x.dtype), r) - - mean_x = tf_box_filter(x, r) / N - mean_y = tf_box_filter(y, r) / N - cov_xy = tf_box_filter(x * y, r) / N - mean_x * mean_y - var_x = tf_box_filter(x * x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf_box_filter(A, r) / N - mean_b = tf_box_filter(b, r) / N - - output = mean_A * x + mean_b - - return output - - - -def fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-8): - - #assert lr_x.shape.ndims == 4 and lr_y.shape.ndims == 4 and hr_x.shape.ndims == 4 - - lr_x_shape = tf.shape(lr_x) - #lr_y_shape = tf.shape(lr_y) - hr_x_shape = tf.shape(hr_x) - - N = tf_box_filter(tf.ones((1, lr_x_shape[1], lr_x_shape[2], 1), dtype=lr_x.dtype), r) - - mean_x = tf_box_filter(lr_x, r) / N - mean_y = tf_box_filter(lr_y, r) / N - cov_xy = tf_box_filter(lr_x * lr_y, r) / N - mean_x * mean_y - var_x = tf_box_filter(lr_x * lr_x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf.image.resize_images(A, hr_x_shape[1: 3]) - mean_b = tf.image.resize_images(b, hr_x_shape[1: 3]) - - output = mean_A * hr_x + mean_b - - return output 
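# The two filters defined above fit and apply a per-pixel affine model:
# guided_filter derives A = cov_xy / (var_x + eps) and b = mean_y - A * mean_x
# from box-filtered statistics of the guide x and target y, smooths A and b with
# the same box filter, and returns mean_A * x + mean_b.
# fast_guided_filter estimates A and b on the low-resolution pair (lr_x, lr_y),
# bilinearly resizes them to the high-resolution grid, and applies them to hr_x.
# Inputs are NHWC float tensors; the call below is an assumed usage sketch with
# illustrative argument values, not code taken from this script:
#   smoothed = fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-8)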
- - -if __name__ == '__main__': - import cv2 - from tqdm import tqdm - - input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - #input_superpixel = tf.placeholder(tf.float32, [16, 256, 256, 3]) - output = guided_filter(input_photo, input_photo, 5, eps=1) - image = cv2.imread('output_figure1/cartoon2.jpg') - image = image/127.5 - 1 - image = np.expand_dims(image, axis=0) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - sess = tf.Session(config=config) - sess.run(tf.global_variables_initializer()) - - out = sess.run(output, feed_dict={input_photo: image}) - out = (np.squeeze(out)+1)*127.5 - out = np.clip(out, 0, 255).astype(np.uint8) - cv2.imwrite('output_figure1/cartoon2_filter.jpg', out) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/capture.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/capture.py deleted file mode 100644 index 97b6336688b058e689ac6aa410c51a041377014b..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/capture.py +++ /dev/null @@ -1,170 +0,0 @@ -# encoding: utf-8 -"""IO capturing utilities.""" - -# Copyright (c) IPython Development Team. -# Distributed under the terms of the Modified BSD License. - - -import sys -from io import StringIO - -#----------------------------------------------------------------------------- -# Classes and functions -#----------------------------------------------------------------------------- - - -class RichOutput(object): - def __init__(self, data=None, metadata=None, transient=None, update=False): - self.data = data or {} - self.metadata = metadata or {} - self.transient = transient or {} - self.update = update - - def display(self): - from IPython.display import publish_display_data - publish_display_data(data=self.data, metadata=self.metadata, - transient=self.transient, update=self.update) - - def _repr_mime_(self, mime): - if mime not in self.data: - return - data = self.data[mime] - if mime in self.metadata: - return data, self.metadata[mime] - else: - return data - - def _repr_mimebundle_(self, include=None, exclude=None): - return self.data, self.metadata - - def _repr_html_(self): - return self._repr_mime_("text/html") - - def _repr_latex_(self): - return self._repr_mime_("text/latex") - - def _repr_json_(self): - return self._repr_mime_("application/json") - - def _repr_javascript_(self): - return self._repr_mime_("application/javascript") - - def _repr_png_(self): - return self._repr_mime_("image/png") - - def _repr_jpeg_(self): - return self._repr_mime_("image/jpeg") - - def _repr_svg_(self): - return self._repr_mime_("image/svg+xml") - - -class CapturedIO(object): - """Simple object for containing captured stdout/err and rich display StringIO objects - - Each instance `c` has three attributes: - - - ``c.stdout`` : standard output as a string - - ``c.stderr`` : standard error as a string - - ``c.outputs``: a list of rich display outputs - - Additionally, there's a ``c.show()`` method which will print all of the - above in the same order, and can be invoked simply via ``c()``. 
- """ - - def __init__(self, stdout, stderr, outputs=None): - self._stdout = stdout - self._stderr = stderr - if outputs is None: - outputs = [] - self._outputs = outputs - - def __str__(self): - return self.stdout - - @property - def stdout(self): - "Captured standard output" - if not self._stdout: - return '' - return self._stdout.getvalue() - - @property - def stderr(self): - "Captured standard error" - if not self._stderr: - return '' - return self._stderr.getvalue() - - @property - def outputs(self): - """A list of the captured rich display outputs, if any. - - If you have a CapturedIO object ``c``, these can be displayed in IPython - using:: - - from IPython.display import display - for o in c.outputs: - display(o) - """ - return [ RichOutput(**kargs) for kargs in self._outputs ] - - def show(self): - """write my output to sys.stdout/err as appropriate""" - sys.stdout.write(self.stdout) - sys.stderr.write(self.stderr) - sys.stdout.flush() - sys.stderr.flush() - for kargs in self._outputs: - RichOutput(**kargs).display() - - __call__ = show - - -class capture_output(object): - """context manager for capturing stdout/err""" - stdout = True - stderr = True - display = True - - def __init__(self, stdout=True, stderr=True, display=True): - self.stdout = stdout - self.stderr = stderr - self.display = display - self.shell = None - - def __enter__(self): - from IPython.core.getipython import get_ipython - from IPython.core.displaypub import CapturingDisplayPublisher - from IPython.core.displayhook import CapturingDisplayHook - - self.sys_stdout = sys.stdout - self.sys_stderr = sys.stderr - - if self.display: - self.shell = get_ipython() - if self.shell is None: - self.save_display_pub = None - self.display = False - - stdout = stderr = outputs = None - if self.stdout: - stdout = sys.stdout = StringIO() - if self.stderr: - stderr = sys.stderr = StringIO() - if self.display: - self.save_display_pub = self.shell.display_pub - self.shell.display_pub = CapturingDisplayPublisher() - outputs = self.shell.display_pub.outputs - self.save_display_hook = sys.displayhook - sys.displayhook = CapturingDisplayHook(shell=self.shell, - outputs=outputs) - - return CapturedIO(stdout, stderr, outputs) - - def __exit__(self, exc_type, exc_value, traceback): - sys.stdout = self.sys_stdout - sys.stderr = self.sys_stderr - if self.display and self.shell: - self.shell.display_pub = self.save_display_pub - sys.displayhook = self.save_display_hook diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_decorators.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_decorators.py deleted file mode 100644 index 38e871deb7e27ba896d15304378708f50e5ea0ba..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_decorators.py +++ /dev/null @@ -1,10 +0,0 @@ -from IPython.utils import decorators - -def test_flag_calls(): - @decorators.flag_calls - def f(): - pass - - assert not f.called - f() - assert f.called \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_streams.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_streams.py deleted file mode 100644 index 4fa7ccc9ffe0e750a1b5a4164970ed4de9c93b2b..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_streams.py +++ /dev/null @@ -1,203 +0,0 @@ -from __future__ import annotations - -from abc import 
abstractmethod -from typing import Any, Callable, Generic, TypeVar, Union - -from .._core._exceptions import EndOfStream -from .._core._typedattr import TypedAttributeProvider -from ._resources import AsyncResource -from ._tasks import TaskGroup - -T_Item = TypeVar("T_Item") -T_co = TypeVar("T_co", covariant=True) -T_contra = TypeVar("T_contra", contravariant=True) - - -class UnreliableObjectReceiveStream( - Generic[T_co], AsyncResource, TypedAttributeProvider -): - """ - An interface for receiving objects. - - This interface makes no guarantees that the received messages arrive in the order in which they - were sent, or that no messages are missed. - - Asynchronously iterating over objects of this type will yield objects matching the given type - parameter. - """ - - def __aiter__(self) -> UnreliableObjectReceiveStream[T_co]: - return self - - async def __anext__(self) -> T_co: - try: - return await self.receive() - except EndOfStream: - raise StopAsyncIteration - - @abstractmethod - async def receive(self) -> T_co: - """ - Receive the next item. - - :raises ~anyio.ClosedResourceError: if the receive stream has been explicitly - closed - :raises ~anyio.EndOfStream: if this stream has been closed from the other end - :raises ~anyio.BrokenResourceError: if this stream has been rendered unusable - due to external causes - """ - - -class UnreliableObjectSendStream( - Generic[T_contra], AsyncResource, TypedAttributeProvider -): - """ - An interface for sending objects. - - This interface makes no guarantees that the messages sent will reach the recipient(s) in the - same order in which they were sent, or at all. - """ - - @abstractmethod - async def send(self, item: T_contra) -> None: - """ - Send an item to the peer(s). - - :param item: the item to send - :raises ~anyio.ClosedResourceError: if the send stream has been explicitly - closed - :raises ~anyio.BrokenResourceError: if this stream has been rendered unusable - due to external causes - """ - - -class UnreliableObjectStream( - UnreliableObjectReceiveStream[T_Item], UnreliableObjectSendStream[T_Item] -): - """ - A bidirectional message stream which does not guarantee the order or reliability of message - delivery. - """ - - -class ObjectReceiveStream(UnreliableObjectReceiveStream[T_co]): - """ - A receive message stream which guarantees that messages are received in the same order in - which they were sent, and that no messages are missed. - """ - - -class ObjectSendStream(UnreliableObjectSendStream[T_contra]): - """ - A send message stream which guarantees that messages are delivered in the same order in which - they were sent, without missing any messages in the middle. - """ - - -class ObjectStream( - ObjectReceiveStream[T_Item], - ObjectSendStream[T_Item], - UnreliableObjectStream[T_Item], -): - """ - A bidirectional message stream which guarantees the order and reliability of message delivery. - """ - - @abstractmethod - async def send_eof(self) -> None: - """ - Send an end-of-file indication to the peer. - - You should not try to send any further data to this stream after calling this method. - This method is idempotent (does nothing on successive calls). - """ - - -class ByteReceiveStream(AsyncResource, TypedAttributeProvider): - """ - An interface for receiving bytes from a single peer. - - Iterating this byte stream will yield a byte string of arbitrary length, but no more than - 65536 bytes. 
- """ - - def __aiter__(self) -> ByteReceiveStream: - return self - - async def __anext__(self) -> bytes: - try: - return await self.receive() - except EndOfStream: - raise StopAsyncIteration - - @abstractmethod - async def receive(self, max_bytes: int = 65536) -> bytes: - """ - Receive at most ``max_bytes`` bytes from the peer. - - .. note:: Implementors of this interface should not return an empty :class:`bytes` object, - and users should ignore them. - - :param max_bytes: maximum number of bytes to receive - :return: the received bytes - :raises ~anyio.EndOfStream: if this stream has been closed from the other end - """ - - -class ByteSendStream(AsyncResource, TypedAttributeProvider): - """An interface for sending bytes to a single peer.""" - - @abstractmethod - async def send(self, item: bytes) -> None: - """ - Send the given bytes to the peer. - - :param item: the bytes to send - """ - - -class ByteStream(ByteReceiveStream, ByteSendStream): - """A bidirectional byte stream.""" - - @abstractmethod - async def send_eof(self) -> None: - """ - Send an end-of-file indication to the peer. - - You should not try to send any further data to this stream after calling this method. - This method is idempotent (does nothing on successive calls). - """ - - -#: Type alias for all unreliable bytes-oriented receive streams. -AnyUnreliableByteReceiveStream = Union[ - UnreliableObjectReceiveStream[bytes], ByteReceiveStream -] -#: Type alias for all unreliable bytes-oriented send streams. -AnyUnreliableByteSendStream = Union[UnreliableObjectSendStream[bytes], ByteSendStream] -#: Type alias for all unreliable bytes-oriented streams. -AnyUnreliableByteStream = Union[UnreliableObjectStream[bytes], ByteStream] -#: Type alias for all bytes-oriented receive streams. -AnyByteReceiveStream = Union[ObjectReceiveStream[bytes], ByteReceiveStream] -#: Type alias for all bytes-oriented send streams. -AnyByteSendStream = Union[ObjectSendStream[bytes], ByteSendStream] -#: Type alias for all bytes-oriented streams. -AnyByteStream = Union[ObjectStream[bytes], ByteStream] - - -class Listener(Generic[T_co], AsyncResource, TypedAttributeProvider): - """An interface for objects that let you accept incoming connections.""" - - @abstractmethod - async def serve( - self, - handler: Callable[[T_co], Any], - task_group: TaskGroup | None = None, - ) -> None: - """ - Accept incoming connections as they come in and start tasks to handle them. - - :param handler: a callable that will be used to handle each accepted connection - :param task_group: the task group that will be used to start tasks for handling each - accepted connection (if omitted, an ad-hoc task group will be created) - """ diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/vision.cpp b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/vision.cpp deleted file mode 100644 index 4a08821e0121a77556aa7a263ec8ebfa928b13b6..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/vision.cpp +++ /dev/null @@ -1,21 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. 
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -/*! -* Copyright (c) Facebook, Inc. and its affiliates. -* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR -*/ - -#include "ms_deform_attn.h" - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("ms_deform_attn_forward", &ms_deform_attn_forward, "ms_deform_attn_forward"); - m.def("ms_deform_attn_backward", &ms_deform_attn_backward, "ms_deform_attn_backward"); -} diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/fcn_hr18.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/fcn_hr18.py deleted file mode 100644 index c3e299bc89ada56ca14bbffcbdb08a586b8ed9e9..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/fcn_hr18.py +++ /dev/null @@ -1,52 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://msra/hrnetv2_w18', - backbone=dict( - type='HRNet', - norm_cfg=norm_cfg, - norm_eval=False, - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(18, 36)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(18, 36, 72)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(18, 36, 72, 144)))), - decode_head=dict( - type='FCNHead', - in_channels=[18, 36, 72, 144], - in_index=(0, 1, 2, 3), - channels=sum([18, 36, 72, 144]), - input_transform='resize_concat', - kernel_size=1, - num_convs=1, - concat_input=False, - dropout_ratio=-1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/TH5314/newbing/src/components/tone-selector.tsx b/spaces/TH5314/newbing/src/components/tone-selector.tsx deleted file mode 100644 index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/components/tone-selector.tsx +++ /dev/null @@ -1,43 +0,0 @@ -import React from 'react' -import { BingConversationStyle } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' - -type ToneItem = { - type: BingConversationStyle, - name: string -} - -const ToneList: ToneItem[] = [ - { name: '有创造力', type: BingConversationStyle.Creative }, - { name: '更平衡', type: BingConversationStyle.Balanced }, - { name: '更精确', type: BingConversationStyle.Precise } -] - -interface ToneSelectorProps { - type: BingConversationStyle | '' - onChange?: (type: BingConversationStyle) => void -} - -export function ToneSelector({ type, onChange }: ToneSelectorProps) { - return ( -
-    <div>
-      <div>选择对话样式</div>
-      <div>
-        {
-          ToneList.map(tone => (
-            <button key={tone.type} className={cn({ selected: type === tone.type })} onClick={() => onChange?.(tone.type)}>
-              {tone.name}
-            </button>
-          ))
-        }
-      </div>
-    </div>
- ) -} diff --git a/spaces/Taithrah/Minimal/app.py b/spaces/Taithrah/Minimal/app.py deleted file mode 100644 index 0fd61cfee77ee26b8a016a8cc6a851866f3386ad..0000000000000000000000000000000000000000 --- a/spaces/Taithrah/Minimal/app.py +++ /dev/null @@ -1,146 +0,0 @@ -import time - -import gradio as gr -from gradio.themes.utils.theme_dropdown import create_theme_dropdown - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='Taithrah/Minimal') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `Minimal` - To use this theme, set `theme='Taithrah/Minimal'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/header-image.jpg", - label="Image", - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio-static-files.s3.us-west-2.amazonaws.com/header-image.jpgjpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - 
"https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/install_lib.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/install_lib.py deleted file mode 100644 index 2e9d8757a582b1dcdb47a34c35c6cfb3ed23ba90..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/install_lib.py +++ /dev/null @@ -1,122 +0,0 @@ -import os -import sys -from itertools import product, starmap -import distutils.command.install_lib as orig - - -class install_lib(orig.install_lib): - """Don't add compiled flags to filenames of non-Python files""" - - def run(self): - self.build() - outfiles = self.install() - if outfiles is not None: - # always compile, in case we have any extension stubs to deal with - self.byte_compile(outfiles) - - def get_exclusions(self): - """ - Return a collections.Sized collections.Container of paths to be - excluded for single_version_externally_managed installations. - """ - all_packages = ( - pkg - for ns_pkg in self._get_SVEM_NSPs() - for pkg in self._all_packages(ns_pkg) - ) - - excl_specs = product(all_packages, self._gen_exclusion_paths()) - return set(starmap(self._exclude_pkg_path, excl_specs)) - - def _exclude_pkg_path(self, pkg, exclusion_path): - """ - Given a package name and exclusion path within that package, - compute the full exclusion path. - """ - parts = pkg.split('.') + [exclusion_path] - return os.path.join(self.install_dir, *parts) - - @staticmethod - def _all_packages(pkg_name): - """ - >>> list(install_lib._all_packages('foo.bar.baz')) - ['foo.bar.baz', 'foo.bar', 'foo'] - """ - while pkg_name: - yield pkg_name - pkg_name, sep, child = pkg_name.rpartition('.') - - def _get_SVEM_NSPs(self): - """ - Get namespace packages (list) but only for - single_version_externally_managed installations and empty otherwise. - """ - # TODO: is it necessary to short-circuit here? i.e. what's the cost - # if get_finalized_command is called even when namespace_packages is - # False? - if not self.distribution.namespace_packages: - return [] - - install_cmd = self.get_finalized_command('install') - svem = install_cmd.single_version_externally_managed - - return self.distribution.namespace_packages if svem else [] - - @staticmethod - def _gen_exclusion_paths(): - """ - Generate file paths to be excluded for namespace packages (bytecode - cache files). - """ - # always exclude the package module itself - yield '__init__.py' - - yield '__init__.pyc' - yield '__init__.pyo' - - if not hasattr(sys, 'implementation'): - return - - base = os.path.join( - '__pycache__', '__init__.' 
+ sys.implementation.cache_tag) - yield base + '.pyc' - yield base + '.pyo' - yield base + '.opt-1.pyc' - yield base + '.opt-2.pyc' - - def copy_tree( - self, infile, outfile, - preserve_mode=1, preserve_times=1, preserve_symlinks=0, level=1 - ): - assert preserve_mode and preserve_times and not preserve_symlinks - exclude = self.get_exclusions() - - if not exclude: - return orig.install_lib.copy_tree(self, infile, outfile) - - # Exclude namespace package __init__.py* files from the output - - from setuptools.archive_util import unpack_directory - from distutils import log - - outfiles = [] - - def pf(src, dst): - if dst in exclude: - log.warn("Skipping installation of %s (namespace package)", - dst) - return False - - log.info("copying %s -> %s", src, os.path.dirname(dst)) - outfiles.append(dst) - return dst - - unpack_directory(infile, outfile, pf) - return outfiles - - def get_outputs(self): - outputs = orig.install_lib.get_outputs(self) - exclude = self.get_exclusions() - if exclude: - return [f for f in outputs if f not in exclude] - return outputs diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/tags.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/tags.py deleted file mode 100644 index b9094d798e6f0e4c78be3bd6137201e21bf2b12c..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/tags.py +++ /dev/null @@ -1,151 +0,0 @@ -from __future__ import annotations - -import itertools -import os -from collections.abc import Iterable - -from ..wheelfile import WheelFile -from .pack import read_tags, set_build_number - - -def _compute_tags(original_tags: Iterable[str], new_tags: str | None) -> set[str]: - """Add or replace tags. Supports dot-separated tags""" - if new_tags is None: - return set(original_tags) - - if new_tags.startswith("+"): - return {*original_tags, *new_tags[1:].split(".")} - - if new_tags.startswith("-"): - return set(original_tags) - set(new_tags[1:].split(".")) - - return set(new_tags.split(".")) - - -def tags( - wheel: str, - python_tags: str | None = None, - abi_tags: str | None = None, - platform_tags: str | None = None, - build_tag: str | None = None, - remove: bool = False, -) -> str: - """Change the tags on a wheel file. - - The tags are left unchanged if they are not specified. To specify "none", - use ["none"]. To append to the previous tags, a tag should start with a - "+". If a tag starts with "-", it will be removed from existing tags. - Processing is done left to right. 
- - :param wheel: The paths to the wheels - :param python_tags: The Python tags to set - :param abi_tags: The ABI tags to set - :param platform_tags: The platform tags to set - :param build_tag: The build tag to set - :param remove: Remove the original wheel - """ - with WheelFile(wheel, "r") as f: - assert f.filename, f"{f.filename} must be available" - - wheel_info = f.read(f.dist_info_path + "/WHEEL") - - original_wheel_name = os.path.basename(f.filename) - namever = f.parsed_filename.group("namever") - build = f.parsed_filename.group("build") - original_python_tags = f.parsed_filename.group("pyver").split(".") - original_abi_tags = f.parsed_filename.group("abi").split(".") - original_plat_tags = f.parsed_filename.group("plat").split(".") - - tags, existing_build_tag = read_tags(wheel_info) - - impls = {tag.split("-")[0] for tag in tags} - abivers = {tag.split("-")[1] for tag in tags} - platforms = {tag.split("-")[2] for tag in tags} - - if impls != set(original_python_tags): - msg = f"Wheel internal tags {impls!r} != filename tags {original_python_tags!r}" - raise AssertionError(msg) - - if abivers != set(original_abi_tags): - msg = f"Wheel internal tags {abivers!r} != filename tags {original_abi_tags!r}" - raise AssertionError(msg) - - if platforms != set(original_plat_tags): - msg = ( - f"Wheel internal tags {platforms!r} != filename tags {original_plat_tags!r}" - ) - raise AssertionError(msg) - - if existing_build_tag != build: - msg = ( - f"Incorrect filename '{build}' " - f"& *.dist-info/WHEEL '{existing_build_tag}' build numbers" - ) - raise AssertionError(msg) - - # Start changing as needed - if build_tag is not None: - build = build_tag - - final_python_tags = sorted(_compute_tags(original_python_tags, python_tags)) - final_abi_tags = sorted(_compute_tags(original_abi_tags, abi_tags)) - final_plat_tags = sorted(_compute_tags(original_plat_tags, platform_tags)) - - final_tags = [ - namever, - ".".join(final_python_tags), - ".".join(final_abi_tags), - ".".join(final_plat_tags), - ] - if build: - final_tags.insert(1, build) - - final_wheel_name = "-".join(final_tags) + ".whl" - - if original_wheel_name != final_wheel_name: - tags = [ - f"{a}-{b}-{c}" - for a, b, c in itertools.product( - final_python_tags, final_abi_tags, final_plat_tags - ) - ] - - original_wheel_path = os.path.join( - os.path.dirname(f.filename), original_wheel_name - ) - final_wheel_path = os.path.join(os.path.dirname(f.filename), final_wheel_name) - - with WheelFile(original_wheel_path, "r") as fin, WheelFile( - final_wheel_path, "w" - ) as fout: - fout.comment = fin.comment # preserve the comment - for item in fin.infolist(): - if item.filename == f.dist_info_path + "/RECORD": - continue - if item.filename == f.dist_info_path + "/WHEEL": - content = fin.read(item) - content = set_tags(content, tags) - content = set_build_number(content, build) - fout.writestr(item, content) - else: - fout.writestr(item, fin.read(item)) - - if remove: - os.remove(original_wheel_path) - - return final_wheel_name - - -def set_tags(in_string: bytes, tags: Iterable[str]) -> bytes: - """Set the tags in the .dist-info/WHEEL file contents. - - :param in_string: The string to modify. - :param tags: The tags to set. 
- """ - - lines = [line for line in in_string.splitlines() if not line.startswith(b"Tag:")] - for tag in tags: - lines.append(b"Tag: " + tag.encode("ascii")) - in_string = b"\r\n".join(lines) + b"\r\n" - - return in_string diff --git a/spaces/Terminus0501/vits-uma-genshin-honkai/attentions.py b/spaces/Terminus0501/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/Terminus0501/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = 
commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/ThankGod/face-id/README.md b/spaces/ThankGod/face-id/README.md deleted file mode 100644 index 74d466d728d38b212137c33043333a453a8cd0be..0000000000000000000000000000000000000000 --- a/spaces/ThankGod/face-id/README.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: face id demo app -emoji: 🤓 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false ---- - -[Try Demo Image classification Here](https://huggingface.co/spaces/ThankGod/image-classifier) - -## Credits -- Hugging face 🤗 for hosting this demo. -- Hugging face transformer model for image classification transformer model -- Gradio for the beautiful visualization dashboards. 
- -## References -- https://gradio.app/ -- https://huggingface.co/ diff --git a/spaces/Tirendaz/pytorch_cat_vs_dog/README.md b/spaces/Tirendaz/pytorch_cat_vs_dog/README.md deleted file mode 100644 index 7999d003ab79d64284e87c5f275769c29066bd1b..0000000000000000000000000000000000000000 --- a/spaces/Tirendaz/pytorch_cat_vs_dog/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Pytorch Cat Vs Dog -emoji: 🏢 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/VIPLab/Caption-Anything/caption_anything/text_refiner/text_refiner.py b/spaces/VIPLab/Caption-Anything/caption_anything/text_refiner/text_refiner.py deleted file mode 100644 index 3af34334eb24df07a22278f7e9956f9097be04b3..0000000000000000000000000000000000000000 --- a/spaces/VIPLab/Caption-Anything/caption_anything/text_refiner/text_refiner.py +++ /dev/null @@ -1,86 +0,0 @@ -from langchain.llms.openai import OpenAI -import torch -from PIL import Image, ImageDraw, ImageOps -from transformers import pipeline, BlipProcessor, BlipForConditionalGeneration, BlipForQuestionAnswering -import pdb - -class TextRefiner: - def __init__(self, device, api_key=""): - print(f"Initializing TextRefiner to {device}") - self.llm = OpenAI(model_name="gpt-3.5-turbo", temperature=0, openai_api_key=api_key) - self.prompt_tag = { - "imagination": {"True": "could", - "False": "could not"} - } - self.short_prompts = { - "length": "around {length} words", - "sentiment": "of {sentiment} sentiment", - "language": "in {language}", - } - - self.long_prompts = { - "imagination": "The new sentence could extend the original description by using your imagination to create additional details, or think about what might have happened before or after the scene in the image, but should not conflict with the original sentence", - } - - self.wiki_prompts = "I want you to act as a Wikipedia page. I will give you a sentence and you will parse the single main object in the sentence and provide a summary of that object in the format of a Wikipedia page. Your summary should be informative and factual, covering the most important aspects of the object. Start your summary with an introductory paragraph that gives an overview of the object. The overall length of the response should be around 100 words. You should not describe the parsing process and only provide the final summary. The sentence is \"{query}\"." - - self.control_prompts = "As a text reviser, you will convert an image description into a new sentence or long paragraph. The new text is {prompts}. {long_prompts} The sentence is \"{query}\" (give me the revised sentence only)" - - def parse(self, response): - out = response.strip() - return out - - def parse2(self, response): - out = response.strip() - return out - - def prepare_input(self, query, short_prompts, long_prompts): - input = self.control_prompts.format(**{'prompts': ', '.join(short_prompts), 'long_prompts': '. 
'.join(long_prompts), 'query': query}) - print('prompt: ', input) - return input - - def inference(self, query: str, controls: dict, context: list=[], enable_wiki=False): - """ - query: the caption of the region of interest, generated by captioner - controls: a dict of control singals, e.g., {"length": 5, "sentiment": "positive"} - """ - prompts = [] - long_prompts = [] - for control, value in controls.items(): - # if control in self.prompt_tag: - # value = self.prompt_tag[control][value] - if control in self.short_prompts: - prompts.append(self.short_prompts[control].format(**{control: value})) - else: - if value in [True, "True", "true"]: - long_prompts.append(self.long_prompts[control]) - input = self.prepare_input(query, prompts, long_prompts) - response = self.llm(input) - response = self.parse(response) - - response_wiki = "" - if enable_wiki: - tmp_configs = {"query": query} - prompt_wiki = self.wiki_prompts.format(**tmp_configs) - response_wiki = self.llm(prompt_wiki) - response_wiki = self.parse2(response_wiki) - out = { - 'raw_caption': query, - 'caption': response, - 'wiki': response_wiki - } - print(out) - return out - -if __name__ == "__main__": - model = TextRefiner(device='cpu') - controls = { - "length": "30", - "sentiment": "negative", - # "imagination": "True", - "imagination": "False", - "language": "English", - } - # model.inference(query='a dog is sitting on a brown bench', controls=controls) - model.inference(query='a cat is sleeping', controls=controls) - \ No newline at end of file diff --git a/spaces/XGBooster/WhisperingDiffusion/README.md b/spaces/XGBooster/WhisperingDiffusion/README.md deleted file mode 100644 index 3917873db2d9d69ce1a7220c51dc38f70fc46b92..0000000000000000000000000000000000000000 --- a/spaces/XGBooster/WhisperingDiffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: WhisperingDiffusion -emoji: ⚡ -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/XzJosh/Ava-Bert-VITS2/modules.py b/spaces/XzJosh/Ava-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Ava-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = 
p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = 
torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): 
- x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/XzJosh/Bekki-Bert-VITS2/text/chinese.py b/spaces/XzJosh/Bekki-Bert-VITS2/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Bekki-Bert-VITS2/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - 
-} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. - phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if __name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, 
word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." -# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/XzJosh/Jiaran-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/XzJosh/Jiaran-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md deleted file mode 100644 index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jiaran-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -language: -- zh -tags: -- bert -license: "apache-2.0" ---- - -# Please use 'Bert' related functions to load this model! - -## Chinese BERT with Whole Word Masking -For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. - -**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** -Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu - -This repository is developed based on:https://github.com/google-research/bert - -You may also interested in, -- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm -- Chinese MacBERT: https://github.com/ymcui/MacBERT -- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA -- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet -- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer - -More resources by HFL: https://github.com/ymcui/HFL-Anthology - -## Citation -If you find the technical report or resource is useful, please cite the following technical report in your paper. -- Primary: https://arxiv.org/abs/2004.13922 -``` -@inproceedings{cui-etal-2020-revisiting, - title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", - author = "Cui, Yiming and - Che, Wanxiang and - Liu, Ting and - Qin, Bing and - Wang, Shijin and - Hu, Guoping", - booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", - month = nov, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", - pages = "657--668", -} -``` -- Secondary: https://arxiv.org/abs/1906.08101 -``` -@article{chinese-bert-wwm, - title={Pre-Training with Whole Word Masking for Chinese BERT}, - author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, - journal={arXiv preprint arXiv:1906.08101}, - year={2019} - } -``` \ No newline at end of file diff --git a/spaces/XzJosh/LAPLACE-Bert-VITS2/monotonic_align/__init__.py b/spaces/XzJosh/LAPLACE-Bert-VITS2/monotonic_align/__init__.py deleted file mode 100644 index 75603d26cf2b8d6196f5a68a89f9e49d8e519bc8..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/LAPLACE-Bert-VITS2/monotonic_align/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - -def maximum_path(neg_cent, mask): - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/Yilin98/Whisper-Small-Swedish/README.md 
b/spaces/Yilin98/Whisper-Small-Swedish/README.md deleted file mode 100644 index 6724dad17fc072cf51e792034768532bf749be11..0000000000000000000000000000000000000000 --- a/spaces/Yilin98/Whisper-Small-Swedish/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Whisper Small Swedish -emoji: 😻 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/models/yolo.py b/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/models/yolo.py deleted file mode 100644 index fa05fcf9a8d9c747bccb6474a7dafd0ee7fd7810..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/models/yolo.py +++ /dev/null @@ -1,357 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -YOLO-specific modules - -Usage: - $ python models/yolo.py --cfg yolov5s.yaml -""" - -import argparse -import contextlib -import os -import platform -import sys -from copy import deepcopy -from pathlib import Path - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -if platform.system() != 'Windows': - ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -from models.common import * -from models.experimental import * -from utils.autoanchor import check_anchor_order -from utils.general import LOGGER, check_version, check_yaml, make_divisible, print_args -from utils.plots import feature_visualization -from utils.torch_utils import (fuse_conv_and_bn, initialize_weights, model_info, profile, scale_img, select_device, - time_sync) - -try: - import thop # for FLOPs computation -except ImportError: - thop = None - - -class Detect(nn.Module): - stride = None # strides computed during build - dynamic = False # force grid reconstruction - export = False # export mode - - def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer - super().__init__() - self.nc = nc # number of classes - self.no = nc + 5 # number of outputs per anchor - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.empty(1)] * self.nl # init grid - self.anchor_grid = [torch.empty(1)] * self.nl # init anchor grid - self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - self.inplace = inplace # use inplace ops (e.g. 
slice assignment) - - def forward(self, x): - z = [] # inference output - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i) - - y = x[i].sigmoid() - if self.inplace: - y[..., 0:2] = (y[..., 0:2] * 2 + self.grid[i]) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953 - xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0 - xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy - wh = (wh * 2) ** 2 * self.anchor_grid[i] # wh - y = torch.cat((xy, wh, conf), 4) - z.append(y.view(bs, -1, self.no)) - - return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x) - - def _make_grid(self, nx=20, ny=20, i=0, torch_1_10=check_version(torch.__version__, '1.10.0')): - d = self.anchors[i].device - t = self.anchors[i].dtype - shape = 1, self.na, ny, nx, 2 # grid shape - y, x = torch.arange(ny, device=d, dtype=t), torch.arange(nx, device=d, dtype=t) - yv, xv = torch.meshgrid(y, x, indexing='ij') if torch_1_10 else torch.meshgrid(y, x) # torch>=0.7 compatibility - grid = torch.stack((xv, yv), 2).expand(shape) - 0.5 # add grid offset, i.e. y = 2.0 * x - 0.5 - anchor_grid = (self.anchors[i] * self.stride[i]).view((1, self.na, 1, 1, 2)).expand(shape) - return grid, anchor_grid - - -class BaseModel(nn.Module): - # YOLOv5 base model - def forward(self, x, profile=False, visualize=False): - return self._forward_once(x, profile, visualize) # single-scale inference, train - - def _forward_once(self, x, profile=False, visualize=False): - y, dt = [], [] # outputs - for m in self.model: - if m.f != -1: # if not from previous layer - x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - if profile: - self._profile_one_layer(m, x, dt) - x = m(x) # run - y.append(x if m.i in self.save else None) # save output - if visualize: - feature_visualization(x, m.type, m.i, save_dir=visualize) - return x - - def _profile_one_layer(self, m, x, dt): - c = m == self.model[-1] # is final layer, copy input as inplace fix - o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs - t = time_sync() - for _ in range(10): - m(x.copy() if c else x) - dt.append((time_sync() - t) * 100) - if m == self.model[0]: - LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} module") - LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}') - if c: - LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total") - - def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers - LOGGER.info('Fusing layers... 
') - for m in self.model.modules(): - if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'): - m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - delattr(m, 'bn') # remove batchnorm - m.forward = m.forward_fuse # update forward - self.info() - return self - - def info(self, verbose=False, img_size=640): # print model information - model_info(self, verbose, img_size) - - def _apply(self, fn): - # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - self = super()._apply(fn) - m = self.model[-1] # Detect() - if isinstance(m, Detect): - m.stride = fn(m.stride) - m.grid = list(map(fn, m.grid)) - if isinstance(m.anchor_grid, list): - m.anchor_grid = list(map(fn, m.anchor_grid)) - return self - - -class DetectionModel(BaseModel): - # YOLOv5 detection model - def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes - super().__init__() - if isinstance(cfg, dict): - self.yaml = cfg # model dict - else: # is *.yaml - import yaml # for torch hub - self.yaml_file = Path(cfg).name - with open(cfg, encoding='ascii', errors='ignore') as f: - self.yaml = yaml.safe_load(f) # model dict - - # Define model - ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels - if nc and nc != self.yaml['nc']: - LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}") - self.yaml['nc'] = nc # override yaml value - if anchors: - LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}') - self.yaml['anchors'] = round(anchors) # override yaml value - self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist - self.names = [str(i) for i in range(self.yaml['nc'])] # default names - self.inplace = self.yaml.get('inplace', True) - - # Build strides, anchors - m = self.model[-1] # Detect() - if isinstance(m, Detect): - s = 256 # 2x min stride - m.inplace = self.inplace - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.empty(1, ch, s, s))]) # forward - check_anchor_order(m) # must be in pixel-space (not grid-space) - m.anchors /= m.stride.view(-1, 1, 1) - self.stride = m.stride - self._initialize_biases() # only run once - - # Init weights, biases - initialize_weights(self) - self.info() - LOGGER.info('') - - def forward(self, x, augment=False, profile=False, visualize=False): - if augment: - return self._forward_augment(x) # augmented inference, None - return self._forward_once(x, profile, visualize) # single-scale inference, train - - def _forward_augment(self, x): - img_size = x.shape[-2:] # height, width - s = [1, 0.83, 0.67] # scales - f = [None, 3, None] # flips (2-ud, 3-lr) - y = [] # outputs - for si, fi in zip(s, f): - xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max())) - yi = self._forward_once(xi)[0] # forward - # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save - yi = self._descale_pred(yi, fi, si, img_size) - y.append(yi) - y = self._clip_augmented(y) # clip augmented tails - return torch.cat(y, 1), None # augmented inference, train - - def _descale_pred(self, p, flips, scale, img_size): - # de-scale predictions following augmented inference (inverse operation) - if self.inplace: - p[..., :4] /= scale # de-scale - if flips == 2: - p[..., 1] = img_size[0] - p[..., 1] # de-flip ud - elif flips == 3: - p[..., 0] = img_size[1] - p[..., 0] # de-flip lr - else: - x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale - if flips == 2: - y = 
img_size[0] - y # de-flip ud - elif flips == 3: - x = img_size[1] - x # de-flip lr - p = torch.cat((x, y, wh, p[..., 4:]), -1) - return p - - def _clip_augmented(self, y): - # Clip YOLOv5 augmented inference tails - nl = self.model[-1].nl # number of detection layers (P3-P5) - g = sum(4 ** x for x in range(nl)) # grid points - e = 1 # exclude layer count - i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices - y[0] = y[0][:, :-i] # large - i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices - y[-1] = y[-1][:, i:] # small - return y - - def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - m = self.model[-1] # Detect() module - for mi, s in zip(m.m, m.stride): # from - b = mi.bias.view(m.na, -1).detach() # conv.bias(255) to (3,85) - b[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - - -Model = DetectionModel # retain YOLOv5 'Model' class for backwards compatibility - - -class ClassificationModel(BaseModel): - # YOLOv5 classification model - def __init__(self, cfg=None, model=None, nc=1000, cutoff=10): # yaml, model, number of classes, cutoff index - super().__init__() - self._from_detection_model(model, nc, cutoff) if model is not None else self._from_yaml(cfg) - - def _from_detection_model(self, model, nc=1000, cutoff=10): - # Create a YOLOv5 classification model from a YOLOv5 detection model - if isinstance(model, DetectMultiBackend): - model = model.model # unwrap DetectMultiBackend - model.model = model.model[:cutoff] # backbone - m = model.model[-1] # last layer - ch = m.conv.in_channels if hasattr(m, 'conv') else m.cv1.conv.in_channels # ch into module - c = Classify(ch, nc) # Classify() - c.i, c.f, c.type = m.i, m.f, 'models.common.Classify' # index, from, type - model.model[-1] = c # replace - self.model = model.model - self.stride = model.stride - self.save = [] - self.nc = nc - - def _from_yaml(self, cfg): - # Create a YOLOv5 classification model from a *.yaml file - self.model = None - - -def parse_model(d, ch): # model_dict, input_channels(3) - LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}") - anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors - no = na * (nc + 5) # number of outputs = anchors * (classes + 5) - - layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args - m = eval(m) if isinstance(m, str) else m # eval strings - for j, a in enumerate(args): - with contextlib.suppress(NameError): - args[j] = eval(a) if isinstance(a, str) else a # eval strings - - n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain - if m in (Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv, - BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x): - c1, c2 = ch[f], args[0] - if c2 != no: # if not output - c2 = make_divisible(c2 * gw, 8) - - args = [c1, c2, *args[1:]] - if m in [BottleneckCSP, C3, C3TR, C3Ghost, C3x]: - args.insert(2, n) 
# number of repeats - n = 1 - elif m is nn.BatchNorm2d: - args = [ch[f]] - elif m is Concat: - c2 = sum(ch[x] for x in f) - elif m is Detect: - args.append([ch[x] for x in f]) - if isinstance(args[1], int): # number of anchors - args[1] = [list(range(args[1] * 2))] * len(f) - elif m is Contract: - c2 = ch[f] * args[0] ** 2 - elif m is Expand: - c2 = ch[f] // args[0] ** 2 - else: - c2 = ch[f] - - m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module - t = str(m)[8:-2].replace('__main__.', '') # module type - np = sum(x.numel() for x in m_.parameters()) # number params - m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print - save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - layers.append(m_) - if i == 0: - ch = [] - ch.append(c2) - return nn.Sequential(*layers), sorted(save) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') - parser.add_argument('--batch-size', type=int, default=1, help='total batch size for all GPUs') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--profile', action='store_true', help='profile model speed') - parser.add_argument('--line-profile', action='store_true', help='profile model speed layer by layer') - parser.add_argument('--test', action='store_true', help='test all yolo*.yaml') - opt = parser.parse_args() - opt.cfg = check_yaml(opt.cfg) # check YAML - print_args(vars(opt)) - device = select_device(opt.device) - - # Create model - im = torch.rand(opt.batch_size, 3, 640, 640).to(device) - model = Model(opt.cfg).to(device) - - # Options - if opt.line_profile: # profile layer by layer - model(im, profile=True) - - elif opt.profile: # profile forward-backward - results = profile(input=im, ops=[model], n=3) - - elif opt.test: # test all models - for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'): - try: - _ = Model(cfg) - except Exception as e: - print(f'Error in {cfg}: {e}') - - else: # report fused model summary - model.fuse() diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/iter_based_runner.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/iter_based_runner.py deleted file mode 100644 index 1df4de8c0285669dec9b014dfd1f3dd1600f0831..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/iter_based_runner.py +++ /dev/null @@ -1,273 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os.path as osp -import platform -import shutil -import time -import warnings - -import torch -from torch.optim import Optimizer - -import annotator.uniformer.mmcv as mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .hooks import IterTimerHook -from .utils import get_host_info - - -class IterLoader: - - def __init__(self, dataloader): - self._dataloader = dataloader - self.iter_loader = iter(self._dataloader) - self._epoch = 0 - - @property - def epoch(self): - return self._epoch - - def __next__(self): - try: - data = next(self.iter_loader) - except StopIteration: - self._epoch += 1 - if hasattr(self._dataloader.sampler, 'set_epoch'): - self._dataloader.sampler.set_epoch(self._epoch) - time.sleep(2) # Prevent possible deadlock during epoch transition - self.iter_loader = iter(self._dataloader) - data = next(self.iter_loader) - - return data - - def __len__(self): - return len(self._dataloader) - - -@RUNNERS.register_module() -class IterBasedRunner(BaseRunner): - """Iteration-based Runner. - - This runner train models iteration by iteration. - """ - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._epoch = data_loader.epoch - data_batch = next(data_loader) - self.call_hook('before_train_iter') - outputs = self.model.train_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.train_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_train_iter') - self._inner_iter += 1 - self._iter += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - data_batch = next(data_loader) - self.call_hook('before_val_iter') - outputs = self.model.val_step(data_batch, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.val_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_val_iter') - self._inner_iter += 1 - - def run(self, data_loaders, workflow, max_iters=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, iters) to specify the - running order and iterations. E.g, [('train', 10000), - ('val', 1000)] means running 10000 iterations for training and - 1000 iterations for validation, iteratively. 
- """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_iters is not None: - warnings.warn( - 'setting max_iters in run is deprecated, ' - 'please set max_iters in runner_config', DeprecationWarning) - self._max_iters = max_iters - assert self._max_iters is not None, ( - 'max_iters must be specified during instantiation') - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d iters', workflow, - self._max_iters) - self.call_hook('before_run') - - iter_loaders = [IterLoader(x) for x in data_loaders] - - self.call_hook('before_epoch') - - while self.iter < self._max_iters: - for i, flow in enumerate(workflow): - self._inner_iter = 0 - mode, iters = flow - if not isinstance(mode, str) or not hasattr(self, mode): - raise ValueError( - 'runner has no method named "{}" to run a workflow'. - format(mode)) - iter_runner = getattr(self, mode) - for _ in range(iters): - if mode == 'train' and self.iter >= self._max_iters: - break - iter_runner(iter_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_epoch') - self.call_hook('after_run') - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - """Resume model from checkpoint. - - Args: - checkpoint (str): Checkpoint to resume from. - resume_optimizer (bool, optional): Whether resume the optimizer(s) - if the checkpoint file includes optimizer(s). Default to True. - map_location (str, optional): Same as :func:`torch.load`. - Default to 'default'. - """ - if map_location == 'default': - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - self._inner_iter = checkpoint['meta']['iter'] - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}') - - def save_checkpoint(self, - out_dir, - filename_tmpl='iter_{}.pth', - meta=None, - save_optimizer=True, - create_symlink=True): - """Save checkpoint to file. - - Args: - out_dir (str): Directory to save checkpoint files. - filename_tmpl (str, optional): Checkpoint file template. - Defaults to 'iter_{}.pth'. - meta (dict, optional): Metadata to be saved in checkpoint. - Defaults to None. - save_optimizer (bool, optional): Whether save optimizer. - Defaults to True. - create_symlink (bool, optional): Whether create symlink to the - latest checkpoint file. Defaults to True. 
- """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. - # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.iter + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - def register_training_hooks(self, - lr_config, - optimizer_config=None, - checkpoint_config=None, - log_config=None, - momentum_config=None, - custom_hooks_config=None): - """Register default hooks for iter-based training. - - Checkpoint hook, optimizer stepper hook and logger hooks will be set to - `by_epoch=False` by default. - - Default hooks include: - - +----------------------+-------------------------+ - | Hooks | Priority | - +======================+=========================+ - | LrUpdaterHook | VERY_HIGH (10) | - +----------------------+-------------------------+ - | MomentumUpdaterHook | HIGH (30) | - +----------------------+-------------------------+ - | OptimizerStepperHook | ABOVE_NORMAL (40) | - +----------------------+-------------------------+ - | CheckpointSaverHook | NORMAL (50) | - +----------------------+-------------------------+ - | IterTimerHook | LOW (70) | - +----------------------+-------------------------+ - | LoggerHook(s) | VERY_LOW (90) | - +----------------------+-------------------------+ - | CustomHook(s) | defaults to NORMAL (50) | - +----------------------+-------------------------+ - - If custom hooks have same priority with default hooks, custom hooks - will be triggered after default hooks. - """ - if checkpoint_config is not None: - checkpoint_config.setdefault('by_epoch', False) - if lr_config is not None: - lr_config.setdefault('by_epoch', False) - if log_config is not None: - for info in log_config['hooks']: - info.setdefault('by_epoch', False) - super(IterBasedRunner, self).register_training_hooks( - lr_config=lr_config, - momentum_config=momentum_config, - optimizer_config=optimizer_config, - checkpoint_config=checkpoint_config, - log_config=log_config, - timer_config=IterTimerHook(), - custom_hooks_config=custom_hooks_config) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/reppoints_detector.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/reppoints_detector.py deleted file mode 100644 index a5f6be31e14488e4b8a006b7142a82c872388d82..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/reppoints_detector.py +++ /dev/null @@ -1,22 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class RepPointsDetector(SingleStageDetector): - """RepPoints: Point Set Representation for Object Detection. 
- - This detector is the implementation of: - - RepPoints detector (https://arxiv.org/pdf/1904.11490) - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(RepPointsDetector, - self).__init__(backbone, neck, bbox_head, train_cfg, test_cfg, - pretrained) diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/config.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/config.py deleted file mode 100644 index 091d790e963959c326917688ee267e6a4ec136d1..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/config.py +++ /dev/null @@ -1,17 +0,0 @@ -import os - -SMPL_DATA_PATH = "./body_models/smpl" - -SMPL_KINTREE_PATH = os.path.join(SMPL_DATA_PATH, "kintree_table.pkl") -SMPL_MODEL_PATH = os.path.join(SMPL_DATA_PATH, "SMPL_NEUTRAL.pkl") -JOINT_REGRESSOR_TRAIN_EXTRA = os.path.join(SMPL_DATA_PATH, 'J_regressor_extra.npy') - -ROT_CONVENTION_TO_ROT_NUMBER = { - 'legacy': 23, - 'no_hands': 21, - 'full_hands': 51, - 'mitten_hands': 33, -} - -GENDERS = ['neutral', 'male', 'female'] -NUM_BETAS = 10 \ No newline at end of file diff --git a/spaces/agueroooooooooo/Transport_Mode_Detector/data_loader.py b/spaces/agueroooooooooo/Transport_Mode_Detector/data_loader.py deleted file mode 100644 index 936b6edb989de7d7e5922fd1dc21745b1780658e..0000000000000000000000000000000000000000 --- a/spaces/agueroooooooooo/Transport_Mode_Detector/data_loader.py +++ /dev/null @@ -1,155 +0,0 @@ -import random -from operator import itemgetter - -from data_enrich import DataEnrich - - -class DataLoader: - - label_mapping = { - 'car': 0, - 'walk': 1, - 'bus': 2, - 'train': 3, - 'subway': 4, - 'bike': 5, - 'run': 6, - 'boat': 7, - 'airplane': 8, - 'motorcycle': 9, - 'taxi': 10 - } - - fields_to_feed = ["dist", "speed", "accel", "timedelta", "jerk", "bearing", "bearing_rate"] - labels_to_remove = ["boat", "motorcycle", "airplane", "run", "bike"] - - - def __init__(self, test_ratio=0.2, val_ratio=0.1, batchsize=4, read_from_pickle=True): - de = DataEnrich() - self._raw = de.get_enriched_data(read_from_pickle) - self._test_ratio = test_ratio - self._val_ratio = val_ratio - self._batchsize = batchsize - - def _remove_traj_containing_labels(self): - cleaned = [] - for elem in self._raw: - if len(elem) == 0: - continue - if all(x not in list(elem["label"]) for x in self.labels_to_remove): - cleaned.append(elem) - self._raw = cleaned - - def _merge_labels(self, target_label, label_to_remove): - for elem in self._raw: - if label_to_remove in list(elem["label"]): - elem["label"] = elem["label"].replace(to_replace=label_to_remove, value=target_label) - - def _labels_to_int_repr(self): - for elem in self._raw: - elem["label"] = elem["label"].apply(lambda x: self.label_mapping[x]) - - def _get_split_indices(self, traj): - train_size = int((1 - self._test_ratio) * len(traj)) - val_size = len(traj) - int((1 - self._val_ratio) * len(traj)) - - indices = [x for x in range(len(traj))] - - indices_for_training = random.sample(indices, train_size) - indices_for_validation = random.sample(indices_for_training, val_size) - indices_for_training = set(indices_for_training) - set(indices_for_validation) - indices_for_testing = set(indices) - indices_for_training - indices_for_testing = list(indices_for_testing) - - return list(indices_for_training), list(indices_for_testing), list(indices_for_validation) - - def _set_splitted_data(self, traj, labels): - - i_train, i_test, i_val = self._get_split_indices(traj) - - 
random.shuffle(i_train) - - self.test_data = list(itemgetter(*i_test)(traj)) - self.val_data = list(itemgetter(*i_val)(traj)) - self.train_data = list(itemgetter(*i_train)(traj)) - self.test_labels = list(itemgetter(*i_test)(labels)) - self.val_labels = list(itemgetter(*i_val)(labels)) - self.train_labels = list(itemgetter(*i_train)(labels)) - - def _split_too_long_traj(self, traj, labels, max_points): - if len(traj) > max_points*2: - splitted_traj, splitted_labels = [],[] - num_subsets = len(traj) // max_points - print("Splitting trajectory with length ", len(traj), "in ", num_subsets, "trajectories") - for i in range(num_subsets): - end_pointer = len(traj)-1 if ((i+1)*max_points)+max_points > len(traj) else (i*max_points)+max_points - traj_subset = traj[i*max_points:end_pointer] - labels_subset = labels[i*max_points:end_pointer] - assert len(traj_subset) == len(labels_subset) - splitted_traj.append(traj_subset) - splitted_labels.append(labels_subset) - return splitted_traj, splitted_labels - return [traj], [labels] - - def prepare_data(self): - trajs = [] - labels = [] - - self._remove_traj_containing_labels() - self._merge_labels("car", "taxi") - self._labels_to_int_repr() - - for elem in self._raw: - assert len(elem) > 0 - data_ = elem[self.fields_to_feed].values.tolist() - label_ = elem["label"].values.tolist() - data_, label_ = self._split_too_long_traj(data_, label_, 350) - trajs.extend(data_) - labels.extend(label_) - - self._set_splitted_data(trajs, labels) - - def batches(self): - for i in range(0, len(self.train_data), self._batchsize): - - if len(self.train_data[i:i + self._batchsize]) < self._batchsize: - break # drop last incomplete batch - - labels_sorted = sorted(self.train_labels[i:i + self._batchsize:], key=len, reverse=True) - train_sorted = sorted(self.train_data[i:i + self._batchsize:], key=len, reverse=True) - for p in range(len(labels_sorted)): - assert len(labels_sorted[p]) == len(train_sorted[p]) - yield train_sorted, labels_sorted - - def val_batches(self): - for i in range(0, len(self.val_data), self._batchsize): - - if len(self.val_data[i:i + self._batchsize]) < self._batchsize: - break # drop last incomplete batch - - labels_sorted = sorted(self.val_labels[i:i + self._batchsize:], key=len, reverse=True) - val_sorted = sorted(self.val_data[i:i + self._batchsize:], key=len, reverse=True) - for p in range(len(labels_sorted)): - assert len(labels_sorted[p]) == len(val_sorted[p]) - yield val_sorted, labels_sorted - - def test_batches(self): - for i in range(0, len(self.test_data), self._batchsize): - - if len(self.test_data[i:i + self._batchsize]) < self._batchsize: - break # drop last incomplete batch - - labels_sorted = sorted(self.test_labels[i:i + self._batchsize:], key=len, reverse=True) - test_sorted = sorted(self.test_data[i:i + self._batchsize:], key=len, reverse=True) - for p in range(len(labels_sorted)): - assert len(labels_sorted[p]) == len(test_sorted[p]) - yield test_sorted, labels_sorted - - def get_train_size(self): - return len(self.train_data) - - def get_val_size(self): - return len(self.val_data) - - def get_test_size(self): - return len(self.test_data) \ No newline at end of file diff --git a/spaces/aijack/jojo/e4e/models/discriminator.py b/spaces/aijack/jojo/e4e/models/discriminator.py deleted file mode 100644 index 16bf3722c7f2e35cdc9bd177a33ed0975e67200d..0000000000000000000000000000000000000000 --- a/spaces/aijack/jojo/e4e/models/discriminator.py +++ /dev/null @@ -1,20 +0,0 @@ -from torch import nn - - -class 
LatentCodesDiscriminator(nn.Module): - def __init__(self, style_dim, n_mlp): - super().__init__() - - self.style_dim = style_dim - - layers = [] - for i in range(n_mlp-1): - layers.append( - nn.Linear(style_dim, style_dim) - ) - layers.append(nn.LeakyReLU(0.2)) - layers.append(nn.Linear(512, 1)) - self.mlp = nn.Sequential(*layers) - - def forward(self, w): - return self.mlp(w) diff --git a/spaces/akhaliq/lama/saicinpainting/training/modules/spatial_transform.py b/spaces/akhaliq/lama/saicinpainting/training/modules/spatial_transform.py deleted file mode 100644 index 2de024ba08c549605a08b64d096f1f0db7b7722a..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/saicinpainting/training/modules/spatial_transform.py +++ /dev/null @@ -1,49 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from kornia.geometry.transform import rotate - - -class LearnableSpatialTransformWrapper(nn.Module): - def __init__(self, impl, pad_coef=0.5, angle_init_range=80, train_angle=True): - super().__init__() - self.impl = impl - self.angle = torch.rand(1) * angle_init_range - if train_angle: - self.angle = nn.Parameter(self.angle, requires_grad=True) - self.pad_coef = pad_coef - - def forward(self, x): - if torch.is_tensor(x): - return self.inverse_transform(self.impl(self.transform(x)), x) - elif isinstance(x, tuple): - x_trans = tuple(self.transform(elem) for elem in x) - y_trans = self.impl(x_trans) - return tuple(self.inverse_transform(elem, orig_x) for elem, orig_x in zip(y_trans, x)) - else: - raise ValueError(f'Unexpected input type {type(x)}') - - def transform(self, x): - height, width = x.shape[2:] - pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef) - x_padded = F.pad(x, [pad_w, pad_w, pad_h, pad_h], mode='reflect') - x_padded_rotated = rotate(x_padded, angle=self.angle.to(x_padded)) - return x_padded_rotated - - def inverse_transform(self, y_padded_rotated, orig_x): - height, width = orig_x.shape[2:] - pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef) - - y_padded = rotate(y_padded_rotated, angle=-self.angle.to(y_padded_rotated)) - y_height, y_width = y_padded.shape[2:] - y = y_padded[:, :, pad_h : y_height - pad_h, pad_w : y_width - pad_w] - return y - - -if __name__ == '__main__': - layer = LearnableSpatialTransformWrapper(nn.Identity()) - x = torch.arange(2* 3 * 15 * 15).view(2, 3, 15, 15).float() - y = layer(x) - assert x.shape == y.shape - assert torch.allclose(x[:, :, 1:, 1:][:, :, :-1, :-1], y[:, :, 1:, 1:][:, :, :-1, :-1]) - print('all ok') diff --git a/spaces/akhaliq/webui-orangemixs/app.py b/spaces/akhaliq/webui-orangemixs/app.py deleted file mode 100644 index 1cc31288c1f4279e7e87097a28d1853c41b787cf..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/webui-orangemixs/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q 
https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py") -os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -if "IS_SHARED_UI" in os.environ: - os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/") - - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - - os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O 
/home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - - # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - os.system(f"wget -q https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_sfw.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/AbyssOrangeMix2_sfw.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt") - - 
#os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - - os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt") - os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml") - - os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test") - \ No newline at end of file diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/treewalkers/etree.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/treewalkers/etree.py deleted file mode 100644 index 837b27ec486924eb9ccef53c6a5d578bd787aefd..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/treewalkers/etree.py +++ /dev/null @@ -1,131 +0,0 @@ -from __future__ import absolute_import, division, unicode_literals - -from collections import OrderedDict -import re - -from pip._vendor.six import string_types - -from . import base -from .._utils import moduleFactoryFactory - -tag_regexp = re.compile("{([^}]*)}(.*)") - - -def getETreeBuilder(ElementTreeImplementation): - ElementTree = ElementTreeImplementation - ElementTreeCommentType = ElementTree.Comment("asd").tag - - class TreeWalker(base.NonRecursiveTreeWalker): # pylint:disable=unused-variable - """Given the particular ElementTree representation, this implementation, - to avoid using recursion, returns "nodes" as tuples with the following - content: - - 1. The current element - - 2. The index of the element relative to its parent - - 3. A stack of ancestor elements - - 4. 
A flag "text", "tail" or None to indicate if the current node is a - text node; either the text or tail of the current element (1) - """ - def getNodeDetails(self, node): - if isinstance(node, tuple): # It might be the root Element - elt, _, _, flag = node - if flag in ("text", "tail"): - return base.TEXT, getattr(elt, flag) - else: - node = elt - - if not(hasattr(node, "tag")): - node = node.getroot() - - if node.tag in ("DOCUMENT_ROOT", "DOCUMENT_FRAGMENT"): - return (base.DOCUMENT,) - - elif node.tag == "": - return (base.DOCTYPE, node.text, - node.get("publicId"), node.get("systemId")) - - elif node.tag == ElementTreeCommentType: - return base.COMMENT, node.text - - else: - assert isinstance(node.tag, string_types), type(node.tag) - # This is assumed to be an ordinary element - match = tag_regexp.match(node.tag) - if match: - namespace, tag = match.groups() - else: - namespace = None - tag = node.tag - attrs = OrderedDict() - for name, value in list(node.attrib.items()): - match = tag_regexp.match(name) - if match: - attrs[(match.group(1), match.group(2))] = value - else: - attrs[(None, name)] = value - return (base.ELEMENT, namespace, tag, - attrs, len(node) or node.text) - - def getFirstChild(self, node): - if isinstance(node, tuple): - element, key, parents, flag = node - else: - element, key, parents, flag = node, None, [], None - - if flag in ("text", "tail"): - return None - else: - if element.text: - return element, key, parents, "text" - elif len(element): - parents.append(element) - return element[0], 0, parents, None - else: - return None - - def getNextSibling(self, node): - if isinstance(node, tuple): - element, key, parents, flag = node - else: - return None - - if flag == "text": - if len(element): - parents.append(element) - return element[0], 0, parents, None - else: - return None - else: - if element.tail and flag != "tail": - return element, key, parents, "tail" - elif key < len(parents[-1]) - 1: - return parents[-1][key + 1], key + 1, parents, None - else: - return None - - def getParentNode(self, node): - if isinstance(node, tuple): - element, key, parents, flag = node - else: - return None - - if flag == "text": - if not parents: - return element - else: - return element, key, parents, None - else: - parent = parents.pop() - if not parents: - return parent - else: - assert list(parents[-1]).count(parent) == 1 - return parent, list(parents[-1]).index(parent), parents, None - - return locals() - - -getETreeModule = moduleFactoryFactory(getETreeBuilder) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/modeline.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/modeline.py deleted file mode 100644 index 047d86d6be6a25ae26fba26d959bf2a1609f7dc8..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/modeline.py +++ /dev/null @@ -1,43 +0,0 @@ -""" - pygments.modeline - ~~~~~~~~~~~~~~~~~ - - A simple modeline parser (based on pymodeline). - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re - -__all__ = ['get_filetype_from_buffer'] - - -modeline_re = re.compile(r''' - (?: vi | vim | ex ) (?: [<=>]? \d* )? 
: - .* (?: ft | filetype | syn | syntax ) = ( [^:\s]+ ) -''', re.VERBOSE) - - -def get_filetype_from_line(l): - m = modeline_re.search(l) - if m: - return m.group(1) - - -def get_filetype_from_buffer(buf, max_lines=5): - """ - Scan the buffer for modelines and return filetype if one is found. - """ - lines = buf.splitlines() - for l in lines[-1:-max_lines-1:-1]: - ret = get_filetype_from_line(l) - if ret: - return ret - for i in range(max_lines, -1, -1): - if i < len(lines): - ret = get_filetype_from_line(lines[i]) - if ret: - return ret - - return None diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_stack.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_stack.py deleted file mode 100644 index 194564e761ddae165b39ef6598877e2e3820af0a..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_stack.py +++ /dev/null @@ -1,16 +0,0 @@ -from typing import List, TypeVar - -T = TypeVar("T") - - -class Stack(List[T]): - """A small shim over builtin list.""" - - @property - def top(self) -> T: - """Get top of stack.""" - return self[-1] - - def push(self, item: T) -> None: - """Push an item on to the stack (append in stack nomenclature).""" - self.append(item) diff --git a/spaces/ali-ghamdan/deoldify/fastai/torch_core.py b/spaces/ali-ghamdan/deoldify/fastai/torch_core.py deleted file mode 100644 index 6b089e09e4e08c2b6d50b70ef3223fadae2f48cb..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/deoldify/fastai/torch_core.py +++ /dev/null @@ -1,430 +0,0 @@ -"Utility functions to help deal with tensors" -from .imports.torch import * -from .core import * -from collections import OrderedDict -from torch.nn.parallel import DistributedDataParallel - -AffineMatrix = Tensor -BoolOrTensor = Union[bool,Tensor] -FloatOrTensor = Union[float,Tensor] -IntOrTensor = Union[int,Tensor] -ItemsList = Collection[Union[Tensor,ItemBase,'ItemsList',float,int]] -LambdaFunc = Callable[[Tensor],Tensor] -LayerFunc = Callable[[nn.Module],None] -ModuleList = Collection[nn.Module] -NPArray = np.ndarray -OptOptimizer = Optional[optim.Optimizer] -ParamList = Collection[nn.Parameter] -Rank0Tensor = NewType('OneEltTensor', Tensor) -SplitFunc = Callable[[nn.Module], List[nn.Module]] -SplitFuncOrIdxList = Union[Callable, Collection[ModuleList]] -TensorOrNumber = Union[Tensor,Number] -TensorOrNumList = Collection[TensorOrNumber] -TensorImage = Tensor -TensorImageSize = Tuple[int,int,int] -Tensors = Union[Tensor, Collection['Tensors']] -Weights = Dict[str,Tensor] - -AffineFunc = Callable[[KWArgs], AffineMatrix] -HookFunc = Callable[[nn.Module, Tensors, Tensors], Any] -LogitTensorImage = TensorImage -LossFunction = Callable[[Tensor, Tensor], Rank0Tensor] -MetricFunc = Callable[[Tensor,Tensor],TensorOrNumber] -MetricFuncList = Collection[MetricFunc] -MetricsList = Collection[TensorOrNumber] -OptLossFunc = Optional[LossFunction] -OptMetrics = Optional[MetricsList] -OptSplitFunc = Optional[SplitFunc] -PixelFunc = Callable[[TensorImage, ArgStar, KWArgs], TensorImage] - -LightingFunc = Callable[[LogitTensorImage, ArgStar, KWArgs], LogitTensorImage] - -fastai_types = { - AnnealFunc:'AnnealFunc', ArgStar:'ArgStar', BatchSamples:'BatchSamples', - FilePathList:'FilePathList', Floats:'Floats', ImgLabel:'ImgLabel', ImgLabels:'ImgLabels', KeyFunc:'KeyFunc', - KWArgs:'KWArgs', ListOrItem:'ListOrItem', ListRules:'ListRules', ListSizes:'ListSizes', - NPArrayableList:'NPArrayableList', 
NPArrayList:'NPArrayList', NPArrayMask:'NPArrayMask', NPImage:'NPImage', - OptDataFrame:'OptDataFrame', OptListOrItem:'OptListOrItem', OptRange:'OptRange', OptStrTuple:'OptStrTuple', - OptStats:'OptStats', PathOrStr:'PathOrStr', PBar:'PBar', Point:'Point', Points:'Points', Sizes:'Sizes', - SplitArrayList:'SplitArrayList', StartOptEnd:'StartOptEnd', StrList:'StrList', Tokens:'Tokens', - OptStrList:'OptStrList', AffineMatrix:'AffineMatrix', BoolOrTensor:'BoolOrTensor', FloatOrTensor:'FloatOrTensor', - IntOrTensor:'IntOrTensor', ItemsList:'ItemsList', LambdaFunc:'LambdaFunc', - LayerFunc:'LayerFunc', ModuleList:'ModuleList', OptOptimizer:'OptOptimizer', ParamList:'ParamList', - Rank0Tensor:'Rank0Tensor', SplitFunc:'SplitFunc', SplitFuncOrIdxList:'SplitFuncOrIdxList', - TensorOrNumber:'TensorOrNumber', TensorOrNumList:'TensorOrNumList', TensorImage:'TensorImage', - TensorImageSize:'TensorImageSize', Tensors:'Tensors', Weights:'Weights', AffineFunc:'AffineFunc', - HookFunc:'HookFunc', LogitTensorImage:'LogitTensorImage', LossFunction:'LossFunction', MetricFunc:'MetricFunc', - MetricFuncList:'MetricFuncList', MetricsList:'MetricsList', OptLossFunc:'OptLossFunc', OptMetrics:'OptMetrics', - OptSplitFunc:'OptSplitFunc', PixelFunc:'PixelFunc', LightingFunc:'LightingFunc', IntsOrStrs:'IntsOrStrs', - PathLikeOrBinaryStream:'PathLikeOrBinaryStream' -} - -bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d) -bias_types = (nn.Linear, nn.Conv1d, nn.Conv2d, nn.Conv3d, nn.ConvTranspose1d, nn.ConvTranspose2d, nn.ConvTranspose3d) -def is_pool_type(l:Callable): return re.search(r'Pool[123]d$', l.__class__.__name__) -no_wd_types = bn_types + (nn.LayerNorm,) -defaults.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') -AdamW = partial(optim.Adam, betas=(0.9,0.99)) - -#Monkey-patch `torch.cuda.set_device` so that it updates `defaults.device` -_old_torch_cuda_set_device = torch.cuda.set_device -def _new_torch_cuda_set_device(device): - _old_torch_cuda_set_device(device) - defaults.device = torch.device('cuda', device) if isinstance(device, int) else device -torch.cuda.set_device = _new_torch_cuda_set_device - -def tensor(x:Any, *rest)->Tensor: - "Like `torch.as_tensor`, but handle lists too, and can pass multiple vector elements directly." - if len(rest): x = (x,)+rest - # XXX: Pytorch bug in dataloader using num_workers>0; TODO: create repro and report - if is_listy(x) and len(x)==0: return tensor(0) - res = torch.tensor(x) if is_listy(x) else as_tensor(x) - if res.dtype is torch.int32: - warn('Tensor is int32: upgrading to int64; for better performance use int64 input') - return res.long() - return res - -class Module(nn.Module, metaclass=PrePostInitMeta): - "Same as `nn.Module`, but no need for subclasses to call `super().__init__`" - def __pre_init__(self): super().__init__() - def __init__(self): pass - -def np_address(x:np.ndarray)->int: - "Address of `x` in memory." - return x.__array_interface__['data'][0] - -def to_detach(b:Tensors, cpu:bool=True): - "Recursively detach lists of tensors in `b `; put them on the CPU if `cpu=True`." - def _inner(x, cpu=True): - if not isinstance(x,Tensor): return x - x = x.detach() - return x.cpu() if cpu else x - return recurse(_inner, b, cpu=cpu) - -def to_data(b:ItemsList): - "Recursively map lists of items in `b ` to their wrapped data." - return recurse(lambda x: x.data if isinstance(x,ItemBase) else x, b) - -def to_cpu(b:ItemsList): - "Recursively map lists of tensors in `b ` to the cpu." 
- return recurse(lambda x: x.cpu() if isinstance(x,Tensor) else x, b) - -def to_half(b:Collection[Tensor])->Collection[Tensor]: - "Recursively map lists of tensors in `b ` to FP16." - return recurse(lambda x: x.half() if x.dtype not in [torch.int64, torch.int32, torch.int16] else x, b) - -def to_float(b:Collection[Tensor])->Collection[Tensor]: - "Recursively map lists of tensors in `b ` to FP16." - return recurse(lambda x: x.float() if x.dtype not in [torch.int64, torch.int32, torch.int16] else x, b) - -def to_device(b:Tensors, device:torch.device): - "Recursively put `b` on `device`." - device = ifnone(device, defaults.device) - return recurse(lambda x: x.to(device, non_blocking=True), b) - -def data_collate(batch:ItemsList)->Tensor: - "Convert `batch` items to tensor data." - return torch.utils.data.dataloader.default_collate(to_data(batch)) - -def requires_grad(m:nn.Module, b:Optional[bool]=None)->Optional[bool]: - "If `b` is not set return `requires_grad` of first param, else set `requires_grad` on all params as `b`" - ps = list(m.parameters()) - if not ps: return None - if b is None: return ps[0].requires_grad - for p in ps: p.requires_grad=b - -def trainable_params(m:nn.Module)->ParamList: - "Return list of trainable params in `m`." - res = filter(lambda p: p.requires_grad, m.parameters()) - return res - -def children(m:nn.Module)->ModuleList: - "Get children of `m`." - return list(m.children()) - -def num_children(m:nn.Module)->int: - "Get number of children modules in `m`." - return len(children(m)) - -def range_children(m:nn.Module)->Iterator[int]: - "Return iterator of len of children of `m`." - return range(num_children(m)) - -class ParameterModule(Module): - "Register a lone parameter `p` in a module." - def __init__(self, p:nn.Parameter): self.val = p - def forward(self, x): return x - -def children_and_parameters(m:nn.Module): - "Return the children of `m` and its direct parameters not registered in modules." - children = list(m.children()) - children_p = sum([[id(p) for p in c.parameters()] for c in m.children()],[]) - for p in m.parameters(): - if id(p) not in children_p: children.append(ParameterModule(p)) - return children - -def flatten_model(m:nn.Module): - if num_children(m): - mapped = map(flatten_model,children_and_parameters(m)) - return sum(mapped,[]) - else: - return [m] - -#flatten_model = lambda m: sum(map(flatten_model,children_and_parameters(m)),[]) if num_children(m) else [m] - -def first_layer(m:nn.Module)->nn.Module: - "Retrieve first layer in a module `m`." - return flatten_model(m)[0] - -def last_layer(m:nn.Module)->nn.Module: - "Retrieve last layer in a module `m`." - return flatten_model(m)[-1] - -def split_model_idx(model:nn.Module, idxs:Collection[int])->ModuleList: - "Split `model` according to the indexes in `idxs`." - layers = flatten_model(model) - if idxs[0] != 0: idxs = [0] + idxs - if idxs[-1] != len(layers): idxs.append(len(layers)) - return [nn.Sequential(*layers[i:j]) for i,j in zip(idxs[:-1],idxs[1:])] - -def split_model(model:nn.Module=None, splits:Collection[Union[nn.Module,ModuleList]]=None): - "Split `model` according to the layers in `splits`." 
- splits = listify(splits) - if isinstance(splits[0], nn.Module): - layers = flatten_model(model) - idxs = [layers.index(first_layer(s)) for s in splits] - return split_model_idx(model, idxs) - return [nn.Sequential(*s) for s in splits] - -def get_param_groups(layer_groups:Collection[nn.Module])->List[List[nn.Parameter]]: - return [sum([list(trainable_params(c)) for c in l.children()], []) for l in layer_groups] - -def split_no_wd_params(layer_groups:Collection[nn.Module])->List[List[nn.Parameter]]: - "Separate the parameters in `layer_groups` between `no_wd_types` and bias (`bias_types`) from the rest." - split_params = [] - for l in layer_groups: - l1,l2 = [],[] - for c in l.children(): - if isinstance(c, no_wd_types): l2 += list(trainable_params(c)) - elif isinstance(c, bias_types): - bias = c.bias if hasattr(c, 'bias') else None - l1 += [p for p in trainable_params(c) if not (p is bias)] - if bias is not None: l2.append(bias) - else: l1 += list(trainable_params(c)) - #Since we scan the children separately, we might get duplicates (tied weights). We need to preserve the order - #for the optimizer load of state_dict - l1,l2 = uniqueify(l1),uniqueify(l2) - split_params += [l1, l2] - return split_params - -def set_bn_eval(m:nn.Module)->None: - "Set bn layers in eval mode for all recursive children of `m`." - for l in m.children(): - if isinstance(l, bn_types) and not next(l.parameters()).requires_grad: - l.eval() - set_bn_eval(l) - -def batch_to_half(b:Collection[Tensor])->Collection[Tensor]: - "Set the input of batch `b` to half precision." - return [to_half(b[0]), b[1]] - -def bn2float(module:nn.Module)->nn.Module: - "If `module` is batchnorm don't use half precision." - if isinstance(module, torch.nn.modules.batchnorm._BatchNorm): module.float() - for child in module.children(): bn2float(child) - return module - -def model2half(model:nn.Module)->nn.Module: - "Convert `model` to half precision except the batchnorm layers." - return bn2float(model.half()) - -def init_default(m:nn.Module, func:LayerFunc=nn.init.kaiming_normal_)->nn.Module: - "Initialize `m` weights with `func` and set `bias` to 0." - if func: - if hasattr(m, 'weight'): func(m.weight) - if hasattr(m, 'bias') and hasattr(m.bias, 'data'): m.bias.data.fill_(0.) - return m - -def cond_init(m:nn.Module, init_func:LayerFunc): - "Initialize the non-batchnorm layers of `m` with `init_func`." - if (not isinstance(m, bn_types)) and requires_grad(m): init_default(m, init_func) - -def apply_leaf(m:nn.Module, f:LayerFunc): - "Apply `f` to children of `m`." - c = children(m) - if isinstance(m, nn.Module): f(m) - for l in c: apply_leaf(l,f) - -def apply_init(m, init_func:LayerFunc): - "Initialize all non-batchnorm layers of `m` with `init_func`." - apply_leaf(m, partial(cond_init, init_func=init_func)) - -def in_channels(m:nn.Module) -> List[int]: - "Return the shape of the first weight layer in `m`." - for l in flatten_model(m): - if hasattr(l, 'weight'): return l.weight.shape[1] - raise Exception('No weight layer') - -class ModelOnCPU(): - "A context manager to evaluate `model` on the CPU inside." - def __init__(self, model:nn.Module): self.model = model - def __enter__(self): - self.device = one_param(self.model).device - return self.model.cpu() - def __exit__(self, type, value, traceback): - self.model = self.model.to(self.device) - -class NoneReduceOnCPU(): - "A context manager to evaluate `loss_func` with none reduce and weights on the CPU inside." 
- def __init__(self, loss_func:LossFunction): - self.loss_func,self.device,self.old_red = loss_func,None,None - - def __enter__(self): - if hasattr(self.loss_func, 'weight') and self.loss_func.weight is not None: - self.device = self.loss_func.weight.device - self.loss_func.weight = self.loss_func.weight.cpu() - if hasattr(self.loss_func, 'reduction'): - self.old_red = getattr(self.loss_func, 'reduction') - setattr(self.loss_func, 'reduction', 'none') - return self.loss_func - else: return partial(self.loss_func, reduction='none') - - def __exit__(self, type, value, traceback): - if self.device is not None: self.loss_func.weight = self.loss_func.weight.to(self.device) - if self.old_red is not None: setattr(self.loss_func, 'reduction', self.old_red) - -def model_type(dtype): - "Return the torch type corresponding to `dtype`." - return (torch.float32 if np.issubdtype(dtype, np.floating) else - torch.int64 if np.issubdtype(dtype, np.integer) - else None) - -def np2model_tensor(a): - "Tranform numpy array `a` to a tensor of the same type." - dtype = model_type(a.dtype) - res = as_tensor(a) - if not dtype: return res - return res.type(dtype) - -def _pca(x, k=2): - "Compute PCA of `x` with `k` dimensions." - x = x-torch.mean(x,0) - U,S,V = torch.svd(x.t()) - return torch.mm(x,U[:,:k]) -torch.Tensor.pca = _pca - -def trange_of(x): - "Create a tensor from `range_of(x)`." - return torch.arange(len(x)) - -def to_np(x): - "Convert a tensor to a numpy array." - return x.data.cpu().numpy() - -# monkey patching to allow matplotlib to plot tensors -def tensor__array__(self, dtype=None): - res = to_np(self) - if dtype is None: return res - else: return res.astype(dtype, copy=False) -Tensor.__array__ = tensor__array__ -Tensor.ndim = property(lambda x: len(x.shape)) - -def grab_idx(x,i,batch_first:bool=True): - "Grab the `i`-th batch in `x`, `batch_first` stating the batch dimension." - if batch_first: return ([o[i].cpu() for o in x] if is_listy(x) else x[i].cpu()) - else: return ([o[:,i].cpu() for o in x] if is_listy(x) else x[:,i].cpu()) - -def logit(x:Tensor)->Tensor: - "Logit of `x`, clamped to avoid inf." - x = x.clamp(1e-7, 1-1e-7) - return -(1/x-1).log() - -def logit_(x:Tensor)->Tensor: - "Inplace logit of `x`, clamped to avoid inf" - x.clamp_(1e-7, 1-1e-7) - return (x.reciprocal_().sub_(1)).log_().neg_() - -def set_all_seed(seed:int)->None: - "Sets the seeds for all pseudo random generators in fastai lib" - np.random.seed(seed) - torch.manual_seed(seed) - random.seed(seed) - -def uniform(low:Number, high:Number=None, size:Optional[List[int]]=None)->FloatOrTensor: - "Draw 1 or shape=`size` random floats from uniform dist: min=`low`, max=`high`." - if high is None: high=low - return random.uniform(low,high) if size is None else torch.FloatTensor(*listify(size)).uniform_(low,high) - -def log_uniform(low, high, size:Optional[List[int]]=None)->FloatOrTensor: - "Draw 1 or shape=`size` random floats from uniform dist: min=log(`low`), max=log(`high`)." - res = uniform(log(low), log(high), size) - return exp(res) if size is None else res.exp_() - -def rand_bool(p:float, size:Optional[List[int]]=None)->BoolOrTensor: - "Draw 1 or shape=`size` random booleans (`True` occuring with probability `p`)." - return uniform(0,1,size)
<p - -def randint(low:int, high:int, size:Optional[List[int]]=None)->
IntOrTensor: - "Generate int or tensor `size` of ints between `low` and `high` (included)." - return random.randint(low,high) if size is None else torch.randint(low,high+1,size) - -def one_param(m: nn.Module)->Tensor: - "Return the first parameter of `m`." - return next(m.parameters()) - -def try_int(o:Any)->Any: - "Try to convert `o` to int, default to `o` if not possible." - # NB: single-item rank-1 array/tensor can be converted to int, but we don't want to do this - if isinstance(o, (np.ndarray,Tensor)): return o if o.ndim else int(o) - if isinstance(o, collections.abc.Sized) or getattr(o,'__array_interface__',False): return o - try: return int(o) - except: return o - -def get_model(model:nn.Module): - "Return the model maybe wrapped inside `model`." - return model.module if isinstance(model, (DistributedDataParallel, nn.DataParallel)) else model - -def flatten_check(out:Tensor, targ:Tensor) -> Tensor: - "Check that `out` and `targ` have the same number of elements and flatten them." - out,targ = out.contiguous().view(-1),targ.contiguous().view(-1) - assert len(out) == len(targ), f"Expected output and target to have the same number of elements but got {len(out)} and {len(targ)}." - return out,targ - -#Monkey-patch nn.DataParallel.reset -def _data_parallel_reset(self): - if hasattr(self.module, 'reset'): self.module.reset() -nn.DataParallel.reset = _data_parallel_reset - -def remove_module_load(state_dict): - """create new OrderedDict that does not contain `module.`""" - new_state_dict = OrderedDict() - for k, v in state_dict.items(): new_state_dict[k[7:]] = v - return new_state_dict - -def num_distrib(): - "Return the number of processes in distributed training (if applicable)." - return int(os.environ.get('WORLD_SIZE', 0)) - -def rank_distrib(): - "Return the distributed rank of this process (if applicable)." - return int(os.environ.get('RANK', 0)) - -def add_metrics(last_metrics:Collection[Rank0Tensor], mets:Union[Rank0Tensor, Collection[Rank0Tensor]]): - "Return a dictionary for updating `last_metrics` with `mets`." - last_metrics,mets = listify(last_metrics),listify(mets) - return {'last_metrics': last_metrics + mets} - -def try_save(state:Dict, path:Path=None, file:PathLikeOrBinaryStream=None): - target = open(path/file, 'wb') if is_pathlike(file) else file - try: torch.save(state, target) - except OSError as e: - raise Exception(f"{e}\n Can't write {path/file}. 
Pass an absolute writable pathlib obj `fname`.") - -def np_func(f): - "Convert a function taking and returning numpy arrays to one taking and returning tensors" - def _inner(*args, **kwargs): - nargs = [to_np(arg) if isinstance(arg,Tensor) else arg for arg in args] - return tensor(f(*nargs, **kwargs)) - functools.update_wrapper(_inner, f) - return _inner - diff --git a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/UBAR_interact.py b/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/UBAR_interact.py deleted file mode 100644 index fb47a767d4e2949fee60d4c3e41ea3f559108184..0000000000000000000000000000000000000000 --- a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/UBAR_interact.py +++ /dev/null @@ -1,475 +0,0 @@ -import sys -import torch -import random -import string - -# import bcolors -from omegaconf import OmegaConf -from transformers import GPT2LMHeadModel, GPT2Tokenizer - -from src.crazyneuraluser.UBAR_code.config import global_config as cfg -from src.crazyneuraluser.UBAR_code.reader import MultiWozReader -from src.crazyneuraluser.UBAR_code.db_ops import MultiWozDB - -from typing import List - - -class bcolors: - HEADER = "\033[95m" - OKBLUE = "\033[94m" - OKCYAN = "\033[96m" - GREEN = "\033[92m" - YELLOW = "\033[93m" - RED = "\033[91m" - ENDC = "\033[0m" - BOLD = "\033[1m" - UNDERLINE = "\033[4m" - - -class UbarSystemModel: # may inherit convlab or not, just like andy's - def __init__(self, name: str, checkpoint_path: str, model_config_path: str): - - self.tokenizer = GPT2Tokenizer.from_pretrained("alistairmcleay/UBAR-distilgpt2") - self.model = GPT2LMHeadModel.from_pretrained("alistairmcleay/UBAR-distilgpt2") - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.name = name - self.turn_domain = ["general"] # returns a list of one string that is the domain e.g. 'taxi' - # (this is because of the way the db_ops.py deals with the domain. It should really be a string.) - - self.ubar_status = {"dialogue_terminate": False} - - self.print_intermediary_info = False - - self.config = OmegaConf.load(model_config_path) - self.previous_turn = {"user": [], "bspn": [], "aspn": [], "db": []} - - # NB: best to use corpus goals to guide interactions - baselines/simulate_agent.py allows that. - - # initialize multiwoz reader and db_ops - self.reader = MultiWozReader(self.tokenizer) - self.db = MultiWozDB(self.config.dbs_path) - - def lexicalize_sys_response(self, sys_response, domain_hits, decoded_belief_state_subseq) -> str: - lexicalized_sys_response = "" - - # Track entities already filled e.g. if there are 3 restaurants track which have already been added to a slot - max_idx_of_added_entities = -1 - - # Fill slots with values from the DB (lexicalization) - for token in sys_response.split(): - token = token.strip(" .,;:") - if token.startswith("["): # It is a slot to be filled - - # Note in hotel there is specific price data too but to simplify things - # we just use the price range (e.g. 
moderate) - db_price_key = "price" - # if domain is restaurant then use "pricerange" - if self.turn_domain[0] == "restaurant": - db_price_key = "pricerange" - - slots_to_db_keys_map = { - "[value_price]": db_price_key, - "[value_pricerange]": db_price_key, - "[value_food]": "food", - "[value_area]": "area", - "[value_type]": "type", - "[value_phone]": "phone", - "[value_address]": "address", - "[value_leave]": "leave", - "[value_postcode]": "postcode", - "[value_id]": "id", - "[value_arrive]": "arrive", - "[value_stars]": "stars", - "[value_day]": "day", - "[value_destination]": "destination", - "[value_car]": "taxi_types", - "[value_departure]": "departure", - "[value_people]": "people", - "[value_stay]": "stay", - "[value_department]": "department", - "[value_time]": "time", - "[value_name]": "name", - "[value_reference]": "reference", - } - # Hospital domain is a strange outlier data structure - if self.turn_domain == ["hospital"] and token == "[value_address]": - token = "1 Addenbrooks Street" - elif self.turn_domain == ["hospital"] and token == "[value_postcode]": - token = "CB11QD" - - # So does taxi - elif self.turn_domain == ["taxi"] and token == "[value_phone]" and domain_hits != []: - token = domain_hits[0]["taxi_phone"] - - # Deal with value_name differently because there can be multiple - elif token == "[value_name]" and domain_hits != []: - token = domain_hits[max_idx_of_added_entities + 1]["name"] - max_idx_of_added_entities += 1 - - # This slot tells the user how many db hits there were matching their constraints - elif token == "[value_choice]" and domain_hits != []: - token = len(domain_hits) - - # Randomly generate the reference - elif token == "[value_reference]" and domain_hits != []: - token = "".join(random.choices(string.ascii_uppercase, k=10)) - - else: - # First check can we fill the token from the db results - db_success = False - if domain_hits != []: - for slot, db_key in slots_to_db_keys_map.items(): - if token == slot and db_key in domain_hits[0]: - token = domain_hits[0][db_key] - db_success = True - - # If we cannot, then try to fill it from the belief state by looking for a match - # in the belief state and then if there is a match adding the next token. - # This is not perfect as some are more than one word but its probably good enough. - if not db_success: - # The DB doesn't contain a postcode for the police station so fill it here - if token == "[value_postcode]" and self.turn_domain == ["police"]: - token = "CB11QD" - continue - decoded_belief_states = decoded_belief_state_subseq.split() - for idx, belief_state_slot in enumerate(decoded_belief_states): - if token in slots_to_db_keys_map.keys(): - if slots_to_db_keys_map[token] == belief_state_slot: - curr_slot_resp = "" - # We dont know the length of the value we need to extract from the belief state - for belief_state_token in decoded_belief_states[idx + 1 :]: - if ( - belief_state_token not in slots_to_db_keys_map.values() - and belief_state_token != "" - ): - curr_slot_resp += belief_state_token + " " - else: - break - token = curr_slot_resp[:-1] - continue - - # Otherwise just leave the slot as it is as we have failed to fill it - - lexicalized_sys_response += str(token) - lexicalized_sys_response += " " - - return lexicalized_sys_response - - def set_turn_domain(self, belief_span_ids_subseq, sys_act_span_ids_subseq=None) -> None: - """ - IMPORTANT: use_system_act is not None when actually querying the DB to - lexicalise the system response. 
When it is None the Belief state NOT the system act is used to determine - the domain. In self.response() the DB is queried twice. The first time is using the Belief state as the system - act has not yet been generated, and it is only used to find out if there are matches in the DB for the current - domain + constraints. Then, after the system act is generated, we call the DB to actually get the results to - lexicalise the system response. It is much more important that the domain is correct for the second call, and - the system act is much more accurate at determining the domain. - """ - - if sys_act_span_ids_subseq is None: - decoded_belief_state_subseq = self.tokenizer.decode(belief_span_ids_subseq[1:-1]) - decoded_prev_belief_state_subseq = self.tokenizer.decode(self.previous_turn["bspn"][1:-1]) - - # If it is the first turn and the belief state is empty then set the domain to general - if self.previous_turn["bspn"] == [] and len(belief_span_ids_subseq) == 2: - self.turn_domain = ["general"] - return - - # If the belief state doesn't change then keep the same domain - if belief_span_ids_subseq == self.previous_turn["bspn"]: - return - - # The domain has changed, get the new one (from the right) - else: - # remove substring from string - if decoded_prev_belief_state_subseq in decoded_belief_state_subseq: - decoded_new_tokens = decoded_belief_state_subseq.replace("decoded_prev_belief_state_subseq", "") - most_recent_domain_in_belief_state = [ - [token.strip("[]") for token in decoded_new_tokens.split() if token.startswith("[")][-1] - ] - self.turn_domain = most_recent_domain_in_belief_state - else: - # Sometimes the previous belief state is not in the current belief state as - # the output changes very slightly (say by one word) - in this case just keep the same domain - # TODO: Could probably handle this better. - if self.print_intermediary_info: - print( - bcolors.YELLOW - + "!Previous belief state not in current belief state! 
Details below:" - + bcolors.ENDC - ) - print("Previous Belief State: " + decoded_prev_belief_state_subseq) - print("Current Belief State: " + decoded_belief_state_subseq) - - else: - decoded_sys_act_subseq = self.tokenizer.decode(sys_act_span_ids_subseq[1:-1]) - - most_recent_domain_in_sys_act = [ - [token.strip("[]") for token in decoded_sys_act_subseq.split() if token.startswith("[")][0] - ] - self.turn_domain = most_recent_domain_in_sys_act - - def get_domain_hits(self, decoded_belief_state_subseq) -> dict: - # Get hits from db based on belief state, unless its a general turn (no hits then) - constraint_dict = self.reader.bspan_to_constraint_dict(decoded_belief_state_subseq) - query_turn_domain = self.turn_domain[0] # db.queryJsons needs a string not a list (single domain) - # If the constraint dict doesn't contain any constraints for the current domain then pass an empty dict - if query_turn_domain in constraint_dict: - domain_hits = self.db.queryJsons(query_turn_domain, constraint_dict[query_turn_domain]) - else: - domain_hits = self.db.queryJsons(query_turn_domain, {}) - - return domain_hits - - def print_turn_intermediate_info(self, generated_subseq_ids_map) -> None: - print(bcolors.OKCYAN + "Turn domain: " + bcolors.ENDC + "[" + str(self.turn_domain[0]) + "]") - - belief_state = self.tokenizer.decode(generated_subseq_ids_map["bspn"]) - print(bcolors.OKCYAN + "Belief state: " + bcolors.ENDC + belief_state) - - db_output = self.tokenizer.decode(generated_subseq_ids_map["db"]) - print(bcolors.OKCYAN + "DB Output: " + bcolors.ENDC + db_output) - - sys_act = self.tokenizer.decode(generated_subseq_ids_map["aspn"]) - print(bcolors.OKCYAN + "System Act: " + bcolors.ENDC + sys_act) - - def _init_ubar_status(self) -> dict: - return {"dialogue_terminate": False} - - def init_session(self): - self.ubar_status = self._init_ubar_status() - self.previous_turn = {"user": [], "bspn": [], "aspn": [], "db": []} - self.turn_domain = ["general"] - - def is_terminated(self) -> bool: - """This should tell an external client whether the user model considers they have completed the task.""" - # return False - return self.ubar_status["dialogue_terminate"] - - def _activate_dialogue_terminate(self) -> None: - """Turn on the ubar status about dialogue termination""" - self.ubar_status["dialogue_terminate"] = True - - def add_torch_input_eval(self, inputs): - # inputs: context - inputs["context_tensor"] = torch.tensor([inputs["context"]]).to(self.device) - return inputs - - def prepare_input_for_model(self, user_utterance: str, turn_id: int) -> torch.Tensor: - # TODO: CONVERT DIALOGUE HISTORY TO TOKEN IDS - - tokenised_user_utterance = self.tokenizer.encode(" " + user_utterance + " ") - # In this application turn always only contains ["user"], not ["bspn", "aspn", "db"] etc. 
- turn = {"user": tokenised_user_utterance} - - first_turn = turn_id == 0 - inputs = self.reader.convert_turn_eval(turn, self.previous_turn, first_turn) - inputs = self.add_torch_input_eval(inputs) - - return inputs - - def decode_generated_bspn(self, generated) -> List[int]: - eos_b_id = self.tokenizer.encode([""])[0] - if eos_b_id in generated: - eos_b_idx = generated.index(eos_b_id) - else: - eos_b_idx = len(generated) - 1 - return generated[: eos_b_idx + 1] - - def decode_grenerated_act_resp(self, generated) -> dict: - """ - decode generated - return decoded['resp'] ('bspn', 'aspn') - """ - decoded = {} - eos_a_id = self.tokenizer.encode([""])[0] - eos_r_id = self.tokenizer.encode([""])[0] - # eos_b_id = self.tokenizer.encode([""])[0] - - # eos_r may not exists if gpt2 generated repetitive words. - if eos_r_id in generated: - eos_r_idx = generated.index(eos_r_id) - else: - eos_r_idx = len(generated) - 1 - - if cfg.use_true_curr_aspn: # only predict resp - decoded["resp"] = generated[: eos_r_idx + 1] - else: # predicted aspn, resp - eos_a_idx = generated.index(eos_a_id) - decoded["aspn"] = generated[: eos_a_idx + 1] - decoded["resp"] = generated[eos_a_idx + 1 : eos_r_idx + 1] - return decoded - - def generate_ids_subseq_map(self, inputs): - - context_input_subseq = inputs["context"] - # decoded_context_input_subseq = self.tokenizer.decode(context_input_subseq) - # Check if model has put duplicate tags in the context and if so remove one of the duplicates - # Yes this is kind of hacky, but UBAR seems to learn to duplicate certain tags - I don't know why - # Also instead of decoding and encoding here tags could be checked with their ids - but time is short... - # cleaned_decoded_list = [] - # prev_token = "" - # for token in decoded_context_input_subseq.split(): - # if token.startswith("<") and token.endswith(">"): # It is a tag - # if token == prev_token: # It is a duplicate tag - # continue - # cleaned_decoded_list.append(token) - # prev_token = token - # decoded_context_input_subseq = " ".join(cleaned_decoded_list) - # context_input_subseq = self.tokenizer.encode(decoded_context_input_subseq) - - context_input_subeq_tensor = inputs["context_tensor"] - - # TODO: FIND OUT BY COMPARING WITH MODEL.VALIDATE() how to calculate context_length - context_length = len(context_input_subseq) - - belief_state_ids = self.model.generate( - input_ids=context_input_subeq_tensor, - max_length=context_length + 60, - temperature=0.7, - top_p=1, - num_beams=1, - pad_token_id=self.tokenizer.eos_token_id, - eos_token_id=self.tokenizer.encode([""])[0], - ) - gen_belief_state_token_ids = belief_state_ids[0].cpu().numpy().tolist() # type: list[int] - belief_span_ids_subseq = self.decode_generated_bspn( - gen_belief_state_token_ids[context_length - 1 :] - ) # type: list[int] - - self.set_turn_domain(belief_span_ids_subseq) - - db_result = self.reader.bspan_to_DBpointer( - self.tokenizer.decode(belief_span_ids_subseq), self.turn_domain - ) # type: str - db_ids_subseq = self.tokenizer.convert_tokens_to_ids( - self.tokenizer.tokenize(" " + db_result + " ") - ) + self.tokenizer.encode([""]) - - # TODO: context_input_subseq is already a tensor but the other two subseqs aren't - why? 
- act_response_gen_input_subseq = context_input_subseq + belief_span_ids_subseq + db_ids_subseq - act_response_gen_input_subseq_tensor = torch.tensor([act_response_gen_input_subseq]).to(self.device) - context_length = len(act_response_gen_input_subseq) - - outputs_db = self.model.generate( - input_ids=act_response_gen_input_subseq_tensor, - max_length=context_length + 80, - temperature=0.7, - top_p=1, - num_beams=1, - pad_token_id=self.tokenizer.eos_token_id, - eos_token_id=self.tokenizer.encode([""])[0], - ) - generated_act_resp_token_ids = outputs_db[0].cpu().numpy().tolist() # type: list[int] - generated_act_resp_token_ids = generated_act_resp_token_ids[context_length - 1 :] - - try: - generated_subseq_ids_map = self.decode_grenerated_act_resp(generated_act_resp_token_ids) - # TODO: IF YOU WANT Option b) then you just read the ['resp'] key and convert to string using huggingface; - # that would be sys_response; Obviously, this applies to Option a as well - generated_subseq_ids_map["bspn"] = belief_span_ids_subseq - # TODO: Option a) STORE THESE MAPPINGS IN SELF.CONTEXT IF YOU WANT TO HAVE - # {U_1, BS_1, DB_1, A_1, R_1, U_2, BS_2... history} - - generated_subseq_ids_map["db"] = db_ids_subseq - generated_subseq_ids_map["labels"] = context_input_subseq - - except ValueError: - generated_subseq_ids_map = {"resp": [], "bspn": [], "aspn": [], "db": [], "labels": []} - - # IMPORTANT: this is how all of the previous state is updated (appended) after each turn - # Update self.previous_turn to track state to be fed into GPT2 - for k, v in generated_subseq_ids_map.items(): - self.previous_turn[k] = v - - if self.print_intermediary_info: - self.print_turn_intermediate_info(generated_subseq_ids_map) - - return generated_subseq_ids_map - - def response(self, usr_utterance: str, turn_id: int) -> str: - - if usr_utterance == "Goodbye": - self._activate_dialogue_terminate() - return "Session Terminated by User" - - inputs = self.prepare_input_for_model(usr_utterance, turn_id) - - generated_subseq_ids_map = self.generate_ids_subseq_map(inputs) - belief_span_ids_subseq = generated_subseq_ids_map["bspn"] - - sys_response = self.tokenizer.decode(generated_subseq_ids_map["resp"][1:-1]) - - prev_turn_domain = self.turn_domain - sys_act_span_ids_subseq = generated_subseq_ids_map["aspn"] - self.set_turn_domain(belief_span_ids_subseq, sys_act_span_ids_subseq) - - if self.turn_domain != ["general"]: - # If the domain changes when reading the system response, then we need to re-do the generation process - # for both the belief state and the system action and response. We do this because self.get_domain_hits() - # will break if the domain is different when querying the DB for the second time here than when it was - # originally queried above, due to the constraint dict it uses that is generated from the belief state - # How can the belief state domain and the system act domain be different? Bunch of things, for example: - # When asking for the police the belief state may be empty (so 'general' domain) - # but then the system action will have [police]. 
- if prev_turn_domain != self.turn_domain: - if self.print_intermediary_info: - print( - bcolors.RED - + "Domain changed from {} to {}".format(prev_turn_domain, self.turn_domain) - + bcolors.RED - ) - generated_subseq_ids_map = self.generate_ids_subseq_map(inputs) - sys_response = self.tokenizer.decode(generated_subseq_ids_map["resp"][1:-1]) - - decoded_belief_state_subseq = self.tokenizer.decode(belief_span_ids_subseq) - domain_hits = self.get_domain_hits(decoded_belief_state_subseq) - # print(bcolors.UNDERLINE + "Domain hits: \n" + bcolors.ENDC, domain_hits) # for debugging - - sys_response = self.lexicalize_sys_response(sys_response, domain_hits, decoded_belief_state_subseq) - - return sys_response - - -def interact(checkpoint_path): - sys_model = UbarSystemModel("UBAR_sys_model", checkpoint_path, "scripts/UBAR_code/interaction/config.yaml") - # TODO: Fix this hardcoded variable (should be in config) - sys_model.print_intermediary_info = True - - for dial_id in range(1, 11): - print(f"In dialogue {dial_id}") - - # Reset state after each dialog - sys_model.init_session() - - user_utt = input(bcolors.GREEN + "Enter user response here: " + bcolors.ENDC) - - for turn_id in range(100): - try: - sys_response = sys_model.response(user_utt, turn_id) - # There are a lot of edge case bugs that are possible that could break the current turn. If so, continue - # to ensure a large run across the dataset isn't ruined by a single bad turn. - except Exception() as e: - print(bcolors.RED + "Exception: {}".format(e) + bcolors.ENDC) - continue - - if sys_model.is_terminated(): - print(bcolors.RED + sys_response + bcolors.ENDC) - print(bcolors.RED + "---" * 30 + bcolors.ENDC) - break - - print(bcolors.YELLOW + "System: " + bcolors.ENDC + sys_response) - print("---" * 30) - - # next turn materials - user_utt = input(bcolors.GREEN + "Enter user response here: " + bcolors.ENDC) - - -if __name__ == "__main__": - if len(sys.argv) == 1: - print("Wrong argument!") - print("Usage: python UBAR_interact.py checkpoint_path") - sys.exit(1) - - checkpoint_path = sys.argv[1] - interact(checkpoint_path) diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_many.c b/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_many.c deleted file mode 100644 index 76cc0434c62ad01f56ee8583ae0f6dc2752e119e..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_many.c +++ /dev/null @@ -1,210 +0,0 @@ -/** @file patest_many.c - @ingroup test_src - @brief Start and stop the PortAudio Driver multiple times. - @author Phil Burk http://www.softsynth.com -*/ -/* - * $Id$ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. 
- * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include -#include -#include -#include "portaudio.h" -#define NUM_SECONDS (1) -#define SAMPLE_RATE (44100) -#ifndef M_PI -#define M_PI (3.14159265) -#endif -#define TABLE_SIZE (200) -typedef struct -{ - short sine[TABLE_SIZE]; - int left_phase; - int right_phase; - unsigned int sampsToGo; -} -paTestData; -PaError TestOnce( void ); -static int patest1Callback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ); - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int patest1Callback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - paTestData *data = (paTestData*)userData; - short *out = (short*)outputBuffer; - unsigned int i; - int finished = 0; - (void) inputBuffer; /* Prevent "unused variable" warnings. */ - - if( data->sampsToGo < framesPerBuffer ) - { - /* final buffer... */ - - for( i=0; isampsToGo; i++ ) - { - *out++ = data->sine[data->left_phase]; /* left */ - *out++ = data->sine[data->right_phase]; /* right */ - data->left_phase += 1; - if( data->left_phase >= TABLE_SIZE ) data->left_phase -= TABLE_SIZE; - data->right_phase += 3; /* higher pitch so we can distinguish left and right. */ - if( data->right_phase >= TABLE_SIZE ) data->right_phase -= TABLE_SIZE; - } - /* zero remainder of final buffer */ - for( ; isine[data->left_phase]; /* left */ - *out++ = data->sine[data->right_phase]; /* right */ - data->left_phase += 1; - if( data->left_phase >= TABLE_SIZE ) data->left_phase -= TABLE_SIZE; - data->right_phase += 3; /* higher pitch so we can distinguish left and right. 
*/ - if( data->right_phase >= TABLE_SIZE ) data->right_phase -= TABLE_SIZE; - } - data->sampsToGo -= framesPerBuffer; - } - return finished; -} -/*******************************************************************/ -#ifdef MACINTOSH -int main(void); -int main(void) -{ - int i; - PaError err; - int numLoops = 10; - printf("Loop %d times.\n", numLoops ); - for( i=0; i 1 ) - { - numLoops = atoi(argv[1]); - } - for( i=0; idefaultLowOutputLatency; - outputParameters.hostApiSpecificStreamInfo = NULL; - err = Pa_OpenStream( - &stream, - NULL, /* no input */ - &outputParameters, - SAMPLE_RATE, - 1024, /* frames per buffer */ - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - patest1Callback, - &data ); - if( err != paNoError ) goto error; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - printf("Waiting for sound to finish.\n"); - Pa_Sleep(1000); - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; - Pa_Terminate(); - return paNoError; -error: - Pa_Terminate(); - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return err; -} diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/update_gitrevision.sh b/spaces/amarchheda/ChordDuplicate/portaudio/update_gitrevision.sh deleted file mode 100644 index 80d4f3b12d70dda17af34b07fe5e0dc99722afaa..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/update_gitrevision.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash -# -# Write the Git commit SHA to an include file. -# This should be run before compiling code on Linux or Macintosh. -# -revision_filename=src/common/pa_gitrevision.h - -# Run git first to make sure it is installed before corrupting the -# include file. -git rev-parse HEAD - -# Update the include file with the current Git revision. -echo -n "#define PA_GIT_REVISION " > ${revision_filename} -git rev-parse HEAD >> ${revision_filename} - -echo ${revision_filename} now contains -cat ${revision_filename} diff --git a/spaces/amsterdamNLP/CLIP-attention-rollout/clip_grounding/datasets/png.py b/spaces/amsterdamNLP/CLIP-attention-rollout/clip_grounding/datasets/png.py deleted file mode 100644 index ee17deb2effe8c558e373764b5c9c75e3399c155..0000000000000000000000000000000000000000 --- a/spaces/amsterdamNLP/CLIP-attention-rollout/clip_grounding/datasets/png.py +++ /dev/null @@ -1,231 +0,0 @@ -""" -Dataset object for Panoptic Narrative Grounding. - -Paper: https://openaccess.thecvf.com/content/ICCV2021/papers/Gonzalez_Panoptic_Narrative_Grounding_ICCV_2021_paper.pdf -""" - -import os -from os.path import join, isdir, exists - -import torch -from torch.utils.data import Dataset -import cv2 -from PIL import Image -from skimage import io -import numpy as np -import textwrap -import matplotlib.pyplot as plt -from matplotlib import transforms -from imgaug.augmentables.segmaps import SegmentationMapsOnImage -import matplotlib.colors as mc - -from clip_grounding.utils.io import load_json -from clip_grounding.datasets.png_utils import show_image_and_caption - - -class PNG(Dataset): - """Panoptic Narrative Grounding.""" - - def __init__(self, dataset_root, split) -> None: - """ - Initializer. 
- - Args: - dataset_root (str): path to the folder containing PNG dataset - split (str): MS-COCO split such as train2017/val2017 - """ - super().__init__() - - assert isdir(dataset_root) - self.dataset_root = dataset_root - - assert split in ["val2017"], f"Split {split} not supported. "\ - "Currently, only supports split `val2017`." - self.split = split - - self.ann_dir = join(self.dataset_root, "annotations") - # feat_dir = join(self.dataset_root, "features") - - panoptic = load_json(join(self.ann_dir, "panoptic_{:s}.json".format(split))) - images = panoptic["images"] - self.images_info = {i["id"]: i for i in images} - panoptic_anns = panoptic["annotations"] - self.panoptic_anns = {int(a["image_id"]): a for a in panoptic_anns} - - # self.panoptic_pred_path = join( - # feat_dir, split, "panoptic_seg_predictions" - # ) - # assert isdir(self.panoptic_pred_path) - - panoptic_narratives_path = join(self.dataset_root, "annotations", f"png_coco_{split}.json") - self.panoptic_narratives = load_json(panoptic_narratives_path) - - def __len__(self): - return len(self.panoptic_narratives) - - def get_image_path(self, image_id: str): - image_path = join(self.dataset_root, "images", self.split, f"{image_id.zfill(12)}.jpg") - return image_path - - def __getitem__(self, idx: int): - narr = self.panoptic_narratives[idx] - - image_id = narr["image_id"] - image_path = self.get_image_path(image_id) - assert exists(image_path) - - image = Image.open(image_path) - caption = narr["caption"] - - # show_single_image(image, title=caption, titlesize=12) - - segments = narr["segments"] - - image_id = int(narr["image_id"]) - panoptic_ann = self.panoptic_anns[image_id] - panoptic_ann = self.panoptic_anns[image_id] - segment_infos = {} - for s in panoptic_ann["segments_info"]: - idi = s["id"] - segment_infos[idi] = s - - image_info = self.images_info[image_id] - panoptic_segm = io.imread( - join( - self.ann_dir, - "panoptic_segmentation", - self.split, - "{:012d}.png".format(image_id), - ) - ) - panoptic_segm = ( - panoptic_segm[:, :, 0] - + panoptic_segm[:, :, 1] * 256 - + panoptic_segm[:, :, 2] * 256 ** 2 - ) - - panoptic_ann = self.panoptic_anns[image_id] - # panoptic_pred = io.imread( - # join(self.panoptic_pred_path, "{:012d}.png".format(image_id)) - # )[:, :, 0] - - - # # select a single utterance to visualize - # segment = segments[7] - # segment_ids = segment["segment_ids"] - # segment_mask = np.zeros((image_info["height"], image_info["width"])) - # for segment_id in segment_ids: - # segment_id = int(segment_id) - # segment_mask[panoptic_segm == segment_id] = 1. - - utterances = [s["utterance"] for s in segments] - outputs = [] - for i, segment in enumerate(segments): - - # create segmentation mask on image - segment_ids = segment["segment_ids"] - - # if no annotation for this word, skip - if not len(segment_ids): - continue - - segment_mask = np.zeros((image_info["height"], image_info["width"])) - for segment_id in segment_ids: - segment_id = int(segment_id) - segment_mask[panoptic_segm == segment_id] = 1. - - # store the outputs - text_mask = np.zeros(len(utterances)) - text_mask[i] = 1. 
- segment_data = dict( - image=image, - text=utterances, - image_mask=segment_mask, - text_mask=text_mask, - full_caption=caption, - ) - outputs.append(segment_data) - - # # visualize segmentation mask with associated text - # segment_color = "red" - # segmap = SegmentationMapsOnImage( - # segment_mask.astype(np.uint8), shape=segment_mask.shape, - # ) - # image_with_segmap = segmap.draw_on_image(np.asarray(image), colors=[0, COLORS[segment_color]])[0] - # image_with_segmap = Image.fromarray(image_with_segmap) - - # colors = ["black" for _ in range(len(utterances))] - # colors[i] = segment_color - # show_image_and_caption(image_with_segmap, utterances, colors) - - return outputs - - -def overlay_segmask_on_image(image, image_mask, segment_color="red"): - segmap = SegmentationMapsOnImage( - image_mask.astype(np.uint8), shape=image_mask.shape, - ) - rgb_color = mc.to_rgb(segment_color) - rgb_color = 255 * np.array(rgb_color) - image_with_segmap = segmap.draw_on_image(np.asarray(image), colors=[0, rgb_color])[0] - image_with_segmap = Image.fromarray(image_with_segmap) - return image_with_segmap - - -def get_text_colors(text, text_mask, segment_color="red"): - colors = ["black" for _ in range(len(text))] - colors[text_mask.nonzero()[0][0]] = segment_color - return colors - - -def overlay_relevance_map_on_image(image, heatmap): - width, height = image.size - - # resize the heatmap to image size - heatmap = cv2.resize(heatmap, (width, height)) - heatmap = np.uint8(255 * heatmap) - heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET) - heatmap = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB) - - # create overlapped super image - img = np.asarray(image) - super_img = heatmap * 0.4 + img * 0.6 - super_img = np.uint8(super_img) - super_img = Image.fromarray(super_img) - - return super_img - - -def visualize_item(image, text, image_mask, text_mask, segment_color="red"): - - segmap = SegmentationMapsOnImage( - image_mask.astype(np.uint8), shape=image_mask.shape, - ) - rgb_color = mc.to_rgb(segment_color) - rgb_color = 255 * np.array(rgb_color) - image_with_segmap = segmap.draw_on_image(np.asarray(image), colors=[0, rgb_color])[0] - image_with_segmap = Image.fromarray(image_with_segmap) - - colors = ["black" for _ in range(len(text))] - - text_idx = text_mask.argmax() - colors[text_idx] = segment_color - show_image_and_caption(image_with_segmap, text, colors) - - - -if __name__ == "__main__": - from clip_grounding.utils.paths import REPO_PATH, DATASET_ROOTS - - PNG_ROOT = DATASET_ROOTS["PNG"] - dataset = PNG(dataset_root=PNG_ROOT, split="val2017") - - item = dataset[0] - sub_item = item[1] - visualize_item( - image=sub_item["image"], - text=sub_item["text"], - image_mask=sub_item["image_mask"], - text_mask=sub_item["text_mask"], - segment_color="red", - ) diff --git a/spaces/antonovmaxim/text-generation-webui-space/extensions/api/streaming_api.py b/spaces/antonovmaxim/text-generation-webui-space/extensions/api/streaming_api.py deleted file mode 100644 index e50dfa2266594f9edc7fb2b6f8659f275236279f..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/extensions/api/streaming_api.py +++ /dev/null @@ -1,78 +0,0 @@ -import asyncio -import json -from threading import Thread - -from websockets.server import serve - -from extensions.api.util import build_parameters, try_start_cloudflared -from modules import shared -from modules.text_generation import generate_reply - -PATH = '/api/v1/stream' - - -async def _handle_connection(websocket, path): - - if path != PATH: - 
print(f'Streaming api: unknown path: {path}') - return - - async for message in websocket: - message = json.loads(message) - - prompt = message['prompt'] - generate_params = build_parameters(message) - stopping_strings = generate_params.pop('stopping_strings') - generate_params['stream'] = True - - generator = generate_reply( - prompt, generate_params, stopping_strings=stopping_strings, is_chat=False) - - # As we stream, only send the new bytes. - skip_index = 0 - message_num = 0 - - for a in generator: - to_send = a[skip_index:] - await websocket.send(json.dumps({ - 'event': 'text_stream', - 'message_num': message_num, - 'text': to_send - })) - - await asyncio.sleep(0) - - skip_index += len(to_send) - message_num += 1 - - await websocket.send(json.dumps({ - 'event': 'stream_end', - 'message_num': message_num - })) - - -async def _run(host: str, port: int): - async with serve(_handle_connection, host, port, ping_interval=None): - await asyncio.Future() # run forever - - -def _run_server(port: int, share: bool = False): - address = '0.0.0.0' if shared.args.listen else '127.0.0.1' - - def on_start(public_url: str): - public_url = public_url.replace('https://', 'wss://') - print(f'Starting streaming server at public url {public_url}{PATH}') - - if share: - try: - try_start_cloudflared(port, max_attempts=3, on_start=on_start) - except Exception as e: - print(e) - else: - print(f'Starting streaming server at ws://{address}:{port}{PATH}') - - asyncio.run(_run(host=address, port=port)) - - -def start_server(port: int, share: bool = False): - Thread(target=_run_server, args=[port, share], daemon=True).start() diff --git a/spaces/apsys/hetfit/docs/main.html b/spaces/apsys/hetfit/docs/main.html deleted file mode 100644 index 21de8388271812186b81e701b752277016e70ae6..0000000000000000000000000000000000000000 --- a/spaces/apsys/hetfit/docs/main.html +++ /dev/null @@ -1,106 +0,0 @@ - - - - - main.py - - - -

-
-
-

main.py

-
-
-
-
-
- # -
- -
-
-
import streamlit as st
-
-from nets.envs import SCI
-
-
-st.set_page_config(
-        page_title="HET_sci",
-        menu_items={
-            'About':'https://advpropsys.github.io'
-        }
-)
-
-st.title('HETfit_scientific')
-st.markdown("#### Imagine a package which was engineered primarly for data driven plasma physics devices design, mainly hall effect thrusters, yup that's it"
-            "\n### :orange[Don't be scared away though, it has much simpler interface than anything you ever used for such designs]")
-st.markdown('### Main concepts:')
-st.markdown( "- Each observational/design session is called an **environment**, for now it can be either RCI or SCI (Real or scaled interface)"
-            "\n In this overview we will only touch SCI, since RCI is using PINNs which are different topic"
-            "\n- You specify most of the run parameters on this object init, :orange[**including generation of new samples**] via GAN"
-            "\n- You may want to generate new features, do it !"
-            "\n- Want to select best features for more effctive work? Done!"
-            "\n- Compile environment with your model of choice, can be ***any*** torch model or sklearn one"
-            "\n- Train !"
-            "\n- Plot, inference, save, export to jit/onnx, measure performance - **they all are one liners** "
-            )
-st.markdown('### tl;dr \n- Create environment'
-            '\n```run = SCI(*args,**kwargs)```'
-            '\n - Generate features ```run.feature_gen()``` '
-            '\n - Select features ```run.feature_importance()```'
-            '\n - Compile env ```run.compile()```'
-            '\n - Train model in env ```run.train()```'
-            '\n - Inference, plot, performance, ex. ```run.plot3d()```'
-            '\n #### And yes, it all will work even without any additional arguments from the user besides column indexes (a condensed sketch follows right after this block)'
-            )
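# A minimal end-to-end sketch of the tl;dr above, assuming the same SCI defaults and
# column indexes (1: P, 3: d, 7: T) used in the demo below; it only restates calls that
# already appear on this page, condensed into one block.
from nets.envs import SCI

sketch = SCI()                 # create an environment with default settings
sketch.feature_gen()           # generate new samples/features (GAN-backed)
sketch.compile(idx=(1, 3, 7))  # pick columns P, d and T and attach the default model
sketch.train(epochs=10)        # train inside the environment
sketch.plot3d()                # inference + 3D plot
sketch.performance()           # quick performance report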
-st.write('Comparison with *arXiv:2206.04440v3*')
-col1, col2 = st.columns(2)
-col1.metric('Geometry accuracy on domain',value='83%',delta='15%')
-col2.metric('$d \mapsto h$ prediction',value='98%',delta='14%')
-
-st.header('Example:')
-
-st.markdown('Remember the indexes and column names in this example: $P$ - 1, $d$ - 3, $h$ - 3, $m_a$ - 6, $T$ - 7')
-st.code('run = SCI(*args,**kwargs)')
-
-run = SCI()
-st.code('run.feature_gen()')
-run.feature_gen()
-st.write('New features: (indexes 0:22 are original samples, the rest are GAN generated)',run.df.iloc[1:,9:].astype(float))
-st.write('Most of the real dataset is from *doi:10.2514/1.B37424*, hence the results mostly agree with it in the specifics')
-st.code('run.feature_importance(run.df.iloc[1:,1:7].astype(float),run.df.iloc[1:,7]) # Clear and easy example')
-
-st.write(run.feature_importance(run.df.iloc[1:,1:6].astype(float),run.df.iloc[1:,6]))
-st.markdown(' As we can see, only $h$ and $d$ passed for the $m_a$ model; not only was this linear dependency proven experimentally, but now we got it from a data-driven source')
-st.code('run.compile(idx=(1,3,7))')
-run.compile(idx=(1,3,7))
-st.code('run.train(epochs=10)')
-run.train(epochs=10)
-st.code('run.plot3d()')
-st.write(run.plot3d())
-st.code('run.performance()')
-st.write(run.performance())
-
-st.write('Try it out yourself! Select a column from 1 to 10')
-number = st.number_input('Here',min_value=1, max_value=10, step=1)
-
-if number:
-    st.code(f'run.compile(idx=(1,3,{number}))')
-    run.compile(idx=(1,3,number))
-    st.code('run.train(epochs=10)')
-    run.train(epochs=10)
-    st.code('run.plot3d()')
-    st.write(run.plot3d())
-
-
-
-st.markdown('In this intro we covered the simplest user flow of the HETFit package; the resulting data can be used to leverage PINN and analytical models of Hall effect thrusters'
-            '\n #### :orange[To cite, please contact the author at https://github.com/advpropsys]')
-
-
-
-
-
-
- diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/vits.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/vits.md deleted file mode 100644 index 0c303f7a957f1a27be9028c1f596368919303ecd..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/vits.md +++ /dev/null @@ -1,38 +0,0 @@ -# VITS - -VITS (Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech -) is an End-to-End (encoder -> vocoder together) TTS model that takes advantage of SOTA DL techniques like GANs, VAE, -Normalizing Flows. It does not require external alignment annotations and learns the text-to-audio alignment -using MAS, as explained in the paper. The model architecture is a combination of GlowTTS encoder and HiFiGAN vocoder. -It is a feed-forward model with x67.12 real-time factor on a GPU. - -🐸 YourTTS is a multi-speaker and multi-lingual TTS model that can perform voice conversion and zero-shot speaker adaptation. -It can also learn a new language or voice with a ~ 1 minute long audio clip. This is a big open gate for training -TTS models in low-resources languages. 🐸 YourTTS uses VITS as the backbone architecture coupled with a speaker encoder model. - -## Important resources & papers -- 🐸 YourTTS: https://arxiv.org/abs/2112.02418 -- VITS: https://arxiv.org/pdf/2106.06103.pdf -- Neural Spline Flows: https://arxiv.org/abs/1906.04032 -- Variational Autoencoder: https://arxiv.org/pdf/1312.6114.pdf -- Generative Adversarial Networks: https://arxiv.org/abs/1406.2661 -- HiFiGAN: https://arxiv.org/abs/2010.05646 -- Normalizing Flows: https://blog.evjang.com/2018/01/nf1.html - -## VitsConfig -```{eval-rst} -.. autoclass:: TTS.tts.configs.vits_config.VitsConfig - :members: -``` - -## VitsArgs -```{eval-rst} -.. autoclass:: TTS.tts.models.vits.VitsArgs - :members: -``` - -## Vits Model -```{eval-rst} -.. autoclass:: TTS.tts.models.vits.Vits - :members: -``` diff --git a/spaces/auto-academic/auto-draft/latex_templates/ICLR2022/conclusion.tex b/spaces/auto-academic/auto-draft/latex_templates/ICLR2022/conclusion.tex deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/avans06/whisper-webui-translate/LICENSE.md b/spaces/avans06/whisper-webui-translate/LICENSE.md deleted file mode 100644 index f5f4b8b5ecd27c09e4ef16e9662bcb7bb2bfc76f..0000000000000000000000000000000000000000 --- a/spaces/avans06/whisper-webui-translate/LICENSE.md +++ /dev/null @@ -1,195 +0,0 @@ -Apache License -============== - -_Version 2.0, January 2004_ -_<>_ - -### Terms and Conditions for use, reproduction, and distribution - -#### 1. Definitions - -“License” shall mean the terms and conditions for use, reproduction, and -distribution as defined by Sections 1 through 9 of this document. - -“Licensor” shall mean the copyright owner or entity authorized by the copyright -owner that is granting the License. - -“Legal Entity” shall mean the union of the acting entity and all other entities -that control, are controlled by, or are under common control with that entity. -For the purposes of this definition, “control” means **(i)** the power, direct or -indirect, to cause the direction or management of such entity, whether by -contract or otherwise, or **(ii)** ownership of fifty percent (50%) or more of the -outstanding shares, or **(iii)** beneficial ownership of such entity. 
- -“You” (or “Your”) shall mean an individual or Legal Entity exercising -permissions granted by this License. - -“Source” form shall mean the preferred form for making modifications, including -but not limited to software source code, documentation source, and configuration -files. - -“Object” form shall mean any form resulting from mechanical transformation or -translation of a Source form, including but not limited to compiled object code, -generated documentation, and conversions to other media types. - -“Work” shall mean the work of authorship, whether in Source or Object form, made -available under the License, as indicated by a copyright notice that is included -in or attached to the work (an example is provided in the Appendix below). - -“Derivative Works” shall mean any work, whether in Source or Object form, that -is based on (or derived from) the Work and for which the editorial revisions, -annotations, elaborations, or other modifications represent, as a whole, an -original work of authorship. For the purposes of this License, Derivative Works -shall not include works that remain separable from, or merely link (or bind by -name) to the interfaces of, the Work and Derivative Works thereof. - -“Contribution” shall mean any work of authorship, including the original version -of the Work and any modifications or additions to that Work or Derivative Works -thereof, that is intentionally submitted to Licensor for inclusion in the Work -by the copyright owner or by an individual or Legal Entity authorized to submit -on behalf of the copyright owner. For the purposes of this definition, -“submitted” means any form of electronic, verbal, or written communication sent -to the Licensor or its representatives, including but not limited to -communication on electronic mailing lists, source code control systems, and -issue tracking systems that are managed by, or on behalf of, the Licensor for -the purpose of discussing and improving the Work, but excluding communication -that is conspicuously marked or otherwise designated in writing by the copyright -owner as “Not a Contribution.” - -“Contributor” shall mean Licensor and any individual or Legal Entity on behalf -of whom a Contribution has been received by Licensor and subsequently -incorporated within the Work. - -#### 2. Grant of Copyright License - -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable copyright license to reproduce, prepare Derivative Works of, -publicly display, publicly perform, sublicense, and distribute the Work and such -Derivative Works in Source or Object form. - -#### 3. Grant of Patent License - -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable (except as stated in this section) patent license to make, have -made, use, offer to sell, sell, import, and otherwise transfer the Work, where -such license applies only to those patent claims licensable by such Contributor -that are necessarily infringed by their Contribution(s) alone or by combination -of their Contribution(s) with the Work to which such Contribution(s) was -submitted. 
If You institute patent litigation against any entity (including a -cross-claim or counterclaim in a lawsuit) alleging that the Work or a -Contribution incorporated within the Work constitutes direct or contributory -patent infringement, then any patent licenses granted to You under this License -for that Work shall terminate as of the date such litigation is filed. - -#### 4. Redistribution - -You may reproduce and distribute copies of the Work or Derivative Works thereof -in any medium, with or without modifications, and in Source or Object form, -provided that You meet the following conditions: - -* **(a)** You must give any other recipients of the Work or Derivative Works a copy of -this License; and -* **(b)** You must cause any modified files to carry prominent notices stating that You -changed the files; and -* **(c)** You must retain, in the Source form of any Derivative Works that You distribute, -all copyright, patent, trademark, and attribution notices from the Source form -of the Work, excluding those notices that do not pertain to any part of the -Derivative Works; and -* **(d)** If the Work includes a “NOTICE” text file as part of its distribution, then any -Derivative Works that You distribute must include a readable copy of the -attribution notices contained within such NOTICE file, excluding those notices -that do not pertain to any part of the Derivative Works, in at least one of the -following places: within a NOTICE text file distributed as part of the -Derivative Works; within the Source form or documentation, if provided along -with the Derivative Works; or, within a display generated by the Derivative -Works, if and wherever such third-party notices normally appear. The contents of -the NOTICE file are for informational purposes only and do not modify the -License. You may add Your own attribution notices within Derivative Works that -You distribute, alongside or as an addendum to the NOTICE text from the Work, -provided that such additional attribution notices cannot be construed as -modifying the License. - -You may add Your own copyright statement to Your modifications and may provide -additional or different license terms and conditions for use, reproduction, or -distribution of Your modifications, or for any such Derivative Works as a whole, -provided Your use, reproduction, and distribution of the Work otherwise complies -with the conditions stated in this License. - -#### 5. Submission of Contributions - -Unless You explicitly state otherwise, any Contribution intentionally submitted -for inclusion in the Work by You to the Licensor shall be under the terms and -conditions of this License, without any additional terms or conditions. -Notwithstanding the above, nothing herein shall supersede or modify the terms of -any separate license agreement you may have executed with Licensor regarding -such Contributions. - -#### 6. Trademarks - -This License does not grant permission to use the trade names, trademarks, -service marks, or product names of the Licensor, except as required for -reasonable and customary use in describing the origin of the Work and -reproducing the content of the NOTICE file. - -#### 7. 
Disclaimer of Warranty - -Unless required by applicable law or agreed to in writing, Licensor provides the -Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, -including, without limitation, any warranties or conditions of TITLE, -NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are -solely responsible for determining the appropriateness of using or -redistributing the Work and assume any risks associated with Your exercise of -permissions under this License. - -#### 8. Limitation of Liability - -In no event and under no legal theory, whether in tort (including negligence), -contract, or otherwise, unless required by applicable law (such as deliberate -and grossly negligent acts) or agreed to in writing, shall any Contributor be -liable to You for damages, including any direct, indirect, special, incidental, -or consequential damages of any character arising as a result of this License or -out of the use or inability to use the Work (including but not limited to -damages for loss of goodwill, work stoppage, computer failure or malfunction, or -any and all other commercial damages or losses), even if such Contributor has -been advised of the possibility of such damages. - -#### 9. Accepting Warranty or Additional Liability - -While redistributing the Work or Derivative Works thereof, You may choose to -offer, and charge a fee for, acceptance of support, warranty, indemnity, or -other liability obligations and/or rights consistent with this License. However, -in accepting such obligations, You may act only on Your own behalf and on Your -sole responsibility, not on behalf of any other Contributor, and only if You -agree to indemnify, defend, and hold each Contributor harmless for any liability -incurred by, or claims asserted against, such Contributor by reason of your -accepting any such warranty or additional liability. - -_END OF TERMS AND CONDITIONS_ - -### APPENDIX: How to apply the Apache License to your work - -To apply the Apache License to your work, attach the following boilerplate -notice, with the fields enclosed by brackets `[]` replaced with your own -identifying information. (Don't include the brackets!) The text should be -enclosed in the appropriate comment syntax for the file format. We also -recommend that a file or class name and description of purpose be included on -the same “printed page” as the copyright notice for easier identification within -third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
- diff --git a/spaces/awacke1/1.ChatGPT-HuggingFace-Spaces-NLP-Transformers-Pipeline/app.py b/spaces/awacke1/1.ChatGPT-HuggingFace-Spaces-NLP-Transformers-Pipeline/app.py deleted file mode 100644 index b6e81f385d4256b4a286f1a67a983d3912d949e0..0000000000000000000000000000000000000000 --- a/spaces/awacke1/1.ChatGPT-HuggingFace-Spaces-NLP-Transformers-Pipeline/app.py +++ /dev/null @@ -1,132 +0,0 @@ -import gradio as gr -import os -import json -import requests - -#Streaming endpoint -API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream" - -#Testing with my Open AI Key -OPENAI_API_KEY = os.getenv("ChatGPT") # Key 03-23 - -def predict(inputs, top_p, temperature, openai_api_key, chat_counter, chatbot=[], history=[]): #repetition_penalty, top_k - - payload = { - "model": "gpt-3.5-turbo", - "messages": [{"role": "user", "content": f"{inputs}"}], - "temperature" : 1.0, - "top_p":1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}" - } - - print(f"chat_counter - {chat_counter}") - if chat_counter != 0 : - messages=[] - for data in chatbot: - temp1 = {} - temp1["role"] = "user" - temp1["content"] = data[0] - temp2 = {} - temp2["role"] = "assistant" - temp2["content"] = data[1] - messages.append(temp1) - messages.append(temp2) - temp3 = {} - temp3["role"] = "user" - temp3["content"] = inputs - messages.append(temp3) - #messages - payload = { - "model": "gpt-3.5-turbo", - "messages": messages, #[{"role": "user", "content": f"{inputs}"}], - "temperature" : temperature, #1.0, - "top_p": top_p, #1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - - chat_counter+=1 - - history.append(inputs) - print(f"payload is - {payload}") - # make a POST request to the API endpoint using the requests.post method, passing in stream=True - response = requests.post(API_URL, headers=headers, json=payload, stream=True) - #response = requests.post(API_URL, headers=headers, json=payload, stream=True) - token_counter = 0 - partial_words = "" - - counter=0 - for chunk in response.iter_lines(): - #Skipping first chunk - if counter == 0: - counter+=1 - continue - #counter+=1 - # check whether each line is non-empty - if chunk.decode() : - chunk = chunk.decode() - # decode each line as response data is in bytes - if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']: - #if len(json.loads(chunk.decode()[6:])['choices'][0]["delta"]) == 0: - # break - partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = partial_words - chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list - token_counter+=1 - yield chat, history, chat_counter # resembles {chatbot: chat, state: history} - - -def reset_textbox(): - return gr.update(value='') - -title = """

🔥ChatGPT API 🚀Streaming🚀

""" -description = """Language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form: -``` -User: -Assistant: -User: -Assistant: -... -``` -In this app, you can explore the outputs of a gpt-3.5-turbo LLM. -""" - -with gr.Blocks(css = """#col_container {width: 1000px; margin-left: auto; margin-right: auto;} - #chatbot {height: 520px; overflow: auto;}""") as demo: - gr.HTML(title) - gr.HTML('''
Duplicate SpaceDuplicate the Space and run securely with your OpenAI API Key
''') - with gr.Column(elem_id = "col_container"): - openai_api_key = gr.Textbox(type='password', label="Enter your OpenAI API key here") - chatbot = gr.Chatbot(elem_id='chatbot') #c - inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter") #t - state = gr.State([]) #s - b1 = gr.Button() - - #inputs, top_p, temperature, top_k, repetition_penalty - with gr.Accordion("Parameters", open=False): - top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",) - #top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",) - #repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", ) - chat_counter = gr.Number(value=0, visible=False, precision=0) - - inputs.submit( predict, [inputs, top_p, temperature, openai_api_key, chat_counter, chatbot, state], [chatbot, state, chat_counter],) - b1.click( predict, [inputs, top_p, temperature, openai_api_key, chat_counter, chatbot, state], [chatbot, state, chat_counter],) - b1.click(reset_textbox, [], [inputs]) - inputs.submit(reset_textbox, [], [inputs]) - - #gr.Markdown(description) - demo.queue().launch(debug=True) diff --git a/spaces/awacke1/ChatGPTStreamlit8/app.py b/spaces/awacke1/ChatGPTStreamlit8/app.py deleted file mode 100644 index 9fcd544b1048b3bbe3efd012716150f93ac3564d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ChatGPTStreamlit8/app.py +++ /dev/null @@ -1,258 +0,0 @@ -import streamlit as st -import openai -import os -import base64 -import glob -import json -import mistune -import pytz -import math -import requests - -from datetime import datetime -from openai import ChatCompletion -from xml.etree import ElementTree as ET -from bs4 import BeautifulSoup -from collections import deque -from audio_recorder_streamlit import audio_recorder - -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%I%M") - safe_prompt = "".join(x for x in prompt if x.isalnum())[:45] - return f"{safe_date_time}_{safe_prompt}.{file_type}" - -def chat_with_model(prompt, document_section): - model = model_choice - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(document_section)>0: - conversation.append({'role': 'assistant', 'content': document_section}) - response = openai.ChatCompletion.create(model=model, messages=conversation) - #return response - return response['choices'][0]['message']['content'] - -def transcribe_audio(openai_key, file_path, model): - OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions" - headers = { - "Authorization": f"Bearer {openai_key}", - } - with open(file_path, 'rb') as f: - data = {'file': f} - response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model}) - if response.status_code == 200: - st.write(response.json()) - - response2 = chat_with_model(response.json().get('text'), '') # ************************************* - st.write('Responses:') - #st.write(response) - st.write(response2) - return response.json().get('text') - else: - st.write(response.json()) - st.error("Error in API call.") - return None - -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder() - if audio_bytes: - 
filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - return None - -def create_file(filename, prompt, response): - if filename.endswith(".txt"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n{response}") - elif filename.endswith(".htm"): - with open(filename, 'w') as file: - file.write(f"{prompt} {response}") - elif filename.endswith(".md"): - with open(filename, 'w') as file: - file.write(f"{prompt}\n\n{response}") - -def truncate_document(document, length): - return document[:length] -def divide_document(document, max_length): - return [document[i:i+max_length] for i in range(0, len(document), max_length)] - -def get_table_download_link(file_path): - with open(file_path, 'r') as file: - data = file.read() - b64 = base64.b64encode(data.encode()).decode() - file_name = os.path.basename(file_path) - ext = os.path.splitext(file_name)[1] # get the file extension - if ext == '.txt': - mime_type = 'text/plain' - elif ext == '.py': - mime_type = 'text/plain' - elif ext == '.xlsx': - mime_type = 'text/plain' - elif ext == '.csv': - mime_type = 'text/plain' - elif ext == '.htm': - mime_type = 'text/html' - elif ext == '.md': - mime_type = 'text/markdown' - else: - mime_type = 'application/octet-stream' # general binary data type - href = f'{file_name}' - return href - -def CompressXML(xml_text): - root = ET.fromstring(xml_text) - for elem in list(root.iter()): - if isinstance(elem.tag, str) and 'Comment' in elem.tag: - elem.parent.remove(elem) - return ET.tostring(root, encoding='unicode', method="xml") - -def read_file_content(file,max_length): - if file.type == "application/json": - content = json.load(file) - return str(content) - elif file.type == "text/html" or file.type == "text/htm": - content = BeautifulSoup(file, "html.parser") - return content.text - elif file.type == "application/xml" or file.type == "text/xml": - tree = ET.parse(file) - root = tree.getroot() - xml = CompressXML(ET.tostring(root, encoding='unicode')) - return xml - elif file.type == "text/markdown" or file.type == "text/md": - md = mistune.create_markdown() - content = md(file.read().decode()) - return content - elif file.type == "text/plain": - return file.getvalue().decode() - else: - return "" - - - -def chat_with_file_contents(prompt, file_content): - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(file_content)>0: - conversation.append({'role': 'assistant', 'content': file_content}) - response = openai.ChatCompletion.create(model=model_choice, messages=conversation) - return response['choices'][0]['message']['content'] - - -# Sidebar and global -openai.api_key = os.getenv('OPENAI_KEY') -st.set_page_config(page_title="GPT Streamlit Document Reasoner",layout="wide") -menu = ["htm", "txt", "xlsx", "csv", "md", "py"] #619 -choice = st.sidebar.selectbox("Output File Type:", menu) -model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301')) - -# Audio, transcribe, GPT: -filename = save_and_play_audio(audio_recorder) -if filename is not None: - transcription = transcribe_audio(openai.api_key, filename, "whisper-1") - st.write(transcription) - gptOutput = chat_with_model(transcription, '') # ************************************* - filename = generate_filename(transcription, choice) - create_file(filename, transcription, gptOutput) - 
st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - -def main(): - user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100) - - collength, colupload = st.columns([2,3]) # adjust the ratio as needed - with collength: - #max_length = 12000 - optimal for gpt35 turbo. 2x=24000 for gpt4. 8x=96000 for gpt4-32k. - max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000) - with colupload: - uploaded_file = st.file_uploader("Add a file for context:", type=["xml", "json", "xlsx","csv","html", "htm", "md", "txt"]) - - document_sections = deque() - document_responses = {} - - if uploaded_file is not None: - file_content = read_file_content(uploaded_file, max_length) - document_sections.extend(divide_document(file_content, max_length)) - - if len(document_sections) > 0: - - if st.button("👁️ View Upload"): - st.markdown("**Sections of the uploaded file:**") - for i, section in enumerate(list(document_sections)): - st.markdown(f"**Section {i+1}**\n{section}") - - st.markdown("**Chat with the model:**") - for i, section in enumerate(list(document_sections)): - if i in document_responses: - st.markdown(f"**Section {i+1}**\n{document_responses[i]}") - else: - if st.button(f"Chat about Section {i+1}"): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, section) # ************************************* - st.write('Response:') - st.write(response) - document_responses[i] = response - filename = generate_filename(f"{user_prompt}_section_{i+1}", choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - if st.button('💬 Chat'): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, ''.join(list(document_sections))) # ************************************* - st.write('Response:') - st.write(response) - - filename = generate_filename(user_prompt, choice) - create_file(filename, user_prompt, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - all_files = glob.glob("*.*") - all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names - all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name in descending order - - # sidebar of files - file_contents='' - next_action='' - for file in all_files: - col1, col2, col3, col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed - with col1: - if st.button("🌐", key="md_"+file): # md emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='md' - with col2: - st.markdown(get_table_download_link(file), unsafe_allow_html=True) - with col3: - if st.button("📂", key="open_"+file): # open emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='open' - with col4: - if st.button("🔍", key="read_"+file): # search emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='search' - with col5: - if st.button("🗑", key="delete_"+file): - os.remove(file) - st.experimental_rerun() - - if len(file_contents) > 0: - if next_action=='open': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - if next_action=='md': - st.markdown(file_contents) - if next_action=='search': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - 
st.write('Reasoning with your inputs...') - response = chat_with_file_contents(user_prompt, file_contents) - st.write('Response:') - st.write(response) - filename = generate_filename(file_content_area, choice) - create_file(filename, file_content_area, response) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/awacke1/Health-Care-AI-and-Datasets/app.py b/spaces/awacke1/Health-Care-AI-and-Datasets/app.py deleted file mode 100644 index d97e83e5db380ac3145393d7121cdb1776b35e90..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Health-Care-AI-and-Datasets/app.py +++ /dev/null @@ -1,339 +0,0 @@ -import streamlit as st -from graphviz import Digraph - -st.markdown(""" -# Top Five ML Libraries: -1. PyTorch: PyTorch is another popular open-source machine learning library that is widely used for building deep learning models. It is known for its ease of use and flexibility. -2. TensorFlow: Developed by Google, TensorFlow is an open-source library that is widely used for building and training deep learning models. -3. Scikit-learn: Scikit-learn is a popular Python library for machine learning. It provides a wide range of algorithms for classification, regression, clustering, and dimensionality reduction, along with tools for model selection and evaluation. -4. Keras: Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It is designed to be easy to use and supports both convolutional and recurrent neural networks. -5. MXNet: MXNet is a deep learning framework that is known for its speed and scalability. It supports multiple programming languages and is used by several large companies for machine learning tasks. - -# Pytorch demo with knowledge trees -""") - -import matplotlib.pyplot as plt -from mpl_toolkits.mplot3d import Axes3D -import torch -def plot_knowledge_trees(knowledge_trees): - fig = plt.figure() - ax = fig.add_subplot(111, projection='3d') - - xs = torch.arange(len(knowledge_trees)) - ys = torch.arange(len(knowledge_trees[0])) - xs, ys = torch.meshgrid(xs, ys) - - zs = [] - for i in range(len(knowledge_trees)): - zs.append([kt[i] for kt in knowledge_trees]) - - zs = torch.tensor(zs) - ax.plot_surface(xs.numpy(), ys.numpy(), zs.numpy(), cmap='coolwarm') - - ax.set_xlabel('States') - ax.set_ylabel('Knowledge Trees') - ax.set_zlabel('Number of Nodes') - - return fig - -def plot_population(states, populations): - fig, ax = plt.subplots() - ax.bar(states, populations) - - ax.set_xlabel('States') - ax.set_ylabel('Population') - - return fig - - -def main(): - st.title('State Populations and Knowledge Trees') - - # Define the state populations - states = ['California', 'Texas', 'Florida', 'New York'] - populations = [39538223, 29145505, 21538187, 19849399] - - # Define the knowledge trees for each state - knowledge_trees = [ - [100, 200, 300, 400], - [150, 250, 350, 450], - [120, 220, 320, 420], - [130, 230, 330, 430] - ] - - # Generate the plots - st.write('## State Populations') - fig_population = plot_population(states, populations) - st.pyplot(fig_population) - - st.write('## Descending Order Lists of Knowledge Trees') - fig_knowledge_trees = plot_knowledge_trees(knowledge_trees) - st.pyplot(fig_knowledge_trees) - -if __name__ == '__main__': - main() - -st.markdown(""" -# Health Related Examples: -1. Health Conditions By State: https://huggingface.co/spaces/awacke1/HealthConditionsTest -2. 
Mental Health ICD10 and DSM Codes: https://huggingface.co/spaces/awacke1/Mental-Health-ICD10-to-DSM -3. Health Care and AI Datasets: https://huggingface.co/spaces/awacke1/Health-Care-AI-and-Datasets -4. Zero Shot Classifier Facebook: https://huggingface.co/spaces/awacke1/Zero-shot-classification-facebook-bart-large-mnli -5. Zero Shot Classifier Valhalla: https://huggingface.co/spaces/awacke1/Zero-Shot-Classification-valhalla-distilbart-mnli-12-1 - -""") - -st.markdown(""" -# 👋 Two easy ways to turbo boost your AI learning journey! 💻 -# 🌐 AI Pair Programming -## Open 2 Browsers to: -1. __🌐 ChatGPT__ [URL](https://chat.openai.com/chat) or [URL2](https://platform.openai.com/playground) and -2. __🌐 Huggingface__ [URL](https://huggingface.co/awacke1) in separate browser windows. -1. 🤖 Use prompts to generate a streamlit program on Huggingface or locally to test it. -2. 🔧 For advanced work, add Python 3.10 and VSCode locally, and debug as gradio or streamlit apps. -3. 🚀 Use these two superpower processes to reduce the time it takes you to make a new AI program! ⏱️ -# 🎥 YouTube University Method: -1. 🏋️‍♀️ Plan two hours each weekday to exercise your body and brain. -2. 🎬 Make a playlist of videos you want to learn from on YouTube. Save the links to edit later. -3. 🚀 Try watching the videos at a faster speed while exercising, and sample the first five minutes of each video. -4. 📜 Reorder the playlist so the most useful videos are at the front, and take breaks to exercise. -5. 📝 Practice note-taking in markdown to instantly save what you want to remember. Share your notes with others! -6. 👥 AI Pair Programming Using Long Answer Language Models with Human Feedback: -## 🎥 2023 AI/ML Advanced Learning Playlists: -1. [2023 QA Models and Long Form Question Answering NLP](https://www.youtube.com/playlist?list=PLHgX2IExbFovrkkx8HMTLNgYdjCMNYmX_) -2. [FHIR Bioinformatics Development Using AI/ML and Python, Streamlit, and Gradio - 2022](https://www.youtube.com/playlist?list=PLHgX2IExbFovoMUC3hYXeFegpk_Y0Lz0Q) -3. [2023 ChatGPT for Coding Assistant Streamlit, Gradio and Python Apps](https://www.youtube.com/playlist?list=PLHgX2IExbFouOEnppexiKZVdz_k5b0pvI) -4. [2023 BigScience Bloom - Large Language Model for AI Systems and NLP](https://www.youtube.com/playlist?list=PLHgX2IExbFouqnsIqziThlPCX_miiDq14) -5. [2023 Streamlit Pro Tips for AI UI UX for Data Science, Engineering, and Mathematics](https://www.youtube.com/playlist?list=PLHgX2IExbFou3cP19hHO9Xb-cN8uwr5RM) -6. [2023 Fun, New and Interesting AI, Videos, and AI/ML Techniques](https://www.youtube.com/playlist?list=PLHgX2IExbFotoMt32SrT3Xynt5BXTGnEP) -7. [2023 Best Minds in AGI AI Gamification and Large Language Models](https://www.youtube.com/playlist?list=PLHgX2IExbFotmFeBTpyje1uI22n0GAkXT) -8. [2023 State of the Art for Vision Image Classification, Text Classification and Regression, Extractive Question Answering and Tabular Classification](https://www.youtube.com/playlist?list=PLHgX2IExbFotPcPu6pauNHOoZTTbnAQ2F) -9. [2023 AutoML DataRobot and AI Platforms for Building Models, Features, Test, and Transparency](https://www.youtube.com/playlist?list=PLHgX2IExbFovsY2oGbDwdEhPrakkC8i3g) -""") - - -st.markdown(""" -# 📚 Clinical Terminology and Ontologies [Example 🩺⚕️NLP Clinical Ontology Biomedical NER](https://huggingface.co/spaces/awacke1/Biomed-NLP-AI-Clinical-Terminology) -## Health Vocabularies, Systems of Coding, and Databases with Bibliographies -##__Keywords__: -1. 
__Clinical Terminology__: 💬 Words that doctors use to talk to each other about patients. -2. __Ontologies for Medications and Conditions__: 📚 A fancy way of organizing knowledge about medicine and health problems. -3. __Health Vocabularies__: 📝 A special list of words used in healthcare to talk about health issues. -4. __Systems of Coding__: 💻 A way of giving things like sicknesses and treatments special codes, so that doctors can remember them easily. -5. __Databases__: 🗄️ A computer system that stores information about patients, health research, and other healthcare things. -6. __Bibliographies__: 📖 A list of books or articles that doctors use to learn about new health information. -1. ## 1️⃣ National Library of Medicine's **RxNorm**: - - Standardized nomenclature for clinical drugs developed by NLM - - Provides links between drug names and related information such as ingredients, strengths, and dosages - - **Data type: controlled vocabulary** - - Access through **NLM's RxNorm website**: https://www.nlm.nih.gov/research/umls/rxnorm/index.html -2. ## 2️⃣ Centers for Medicare and Medicaid Services' Healthcare Common Procedure Coding System (HCPCS): - - Coding system used to identify healthcare **services, procedures, and supplies** - - Includes **codes for drugs, biologicals, and other items** used in medical care - - **Data type: coding system** - - Access through **CMS website**: https://www.cms.gov/Medicare/Coding/MedHCPCSGenInfo -3. ## 3️⃣ Unified Medical Language System (UMLS): - - Set of files and software tools developed by NLM for integrating and mapping biomedical vocabularies - - Includes RxNorm and other drug vocabularies, as well as other terminologies used in medicine - - **Data type: controlled vocabulary** - - Access through UMLS Metathesaurus: https://www.nlm.nih.gov/research/umls/index.html -4. ## 4️⃣ PubMed: - - Database of **biomedical literature** maintained by the National Center for Biotechnology Information (NCBI) - - Includes information about **drugs, including drug names, chemical structures, and pharmacological actions** - - **Data type: bibliographic database** - - Access through **PubMed website**: https://pubmed.ncbi.nlm.nih.gov/ -5. ## 5️⃣ PubChem: - - Database of chemical substances maintained by NCBI - - Includes information about drugs, including **chemical structures, properties, and activities** - - **Data type: chemical database** - - Access through **PubChem website**: https://pubchem.ncbi.nlm.nih.gov/ -6. ## 6️⃣ Behavioral Health Code Terminology Sets: - - Code terminology sets specific to behavioral health - - Includes **DSM** published by American Psychiatric Association, **ICD** published by World Health Organization, and **CPT** published by American Medical Association - - **Data type: coding system** - - Access through respective **organizations' websites**: - 1. [DSM](https://www.psychiatry.org/psychiatrists/practice/dsm) - 2. [ICD](https://www.who.int/standards/classifications/classification-of-diseases) - 3. [CPT](https://www.ama-assn.org/practice-management/cpt/current-procedural-terminology-cpt) -""") - - -st.markdown(""" -1. # 📚Natural Language Processing🔤 - 🗣️🤖💭💬🌍🔍 - 1. 🤔 **🩺⚕️ Sentiment analysis** - Determine underlying sentiment of text. [Example](https://huggingface.co/spaces/awacke1/Sentiment-analysis-streamlit) - 2. 📝 **Named Entity Recognition (NER)** - Identify and classify named entities in text. [Example](https://huggingface.co/spaces/awacke1/Named-entity-resolution) - 3. 
🔊 **🩺⚕️Automatic Speech Recognition (ASR)** - Transcribe spoken language into text. - # Advanced NLP ASR Examples: - 1. 🩺⚕️ https://huggingface.co/spaces/awacke1/ASR-High-Accuracy-Test - 2. https://huggingface.co/spaces/awacke1/ASRGenerateStory - 3. 🩺⚕️ https://huggingface.co/spaces/awacke1/TTS-STT-Blocks - 4. 🩺⚕️ https://huggingface.co/spaces/awacke1/CloneAnyVoice - 5. https://huggingface.co/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla - 4. 🌐 **Machine translation** - Translate text between languages automatically. [Example](https://huggingface.co/spaces/awacke1/Machine-translation) - 5. 📄 **Text summarization** - Automatically summarize large volumes of text. [Example](https://huggingface.co/spaces/awacke1/Text-summarization) - 6. ❓ **🩺⚕️ Question answering** - Answer questions posed in natural language. [Example](https://huggingface.co/spaces/awacke1/Question-answering) - 7. 🤖 **Sentiment-aware chatbots** - Use sentiment analysis to detect user emotions and respond appropriately. - 8. 📊 **🩺⚕️ Text classification** - Classify text into different categories. [Example](https://huggingface.co/spaces/awacke1/sileod-deberta-v3-base-tasksource-nli) - 9. 💬 **🩺⚕️ Text generation** - Generate natural language text. [Example](https://huggingface.co/spaces/awacke1/Sentence2Paragraph) - 10. 🔎 **Topic modeling** - Automatically identify topics in a large corpus of text. [Example](https://huggingface.co/spaces/awacke1/Topic-modeling) - - Examples - 1. [NLP Video Summary](https://huggingface.co/spaces/awacke1/Video-Summary) - 2. [TTS-STT ASR with Multiple Voices](https://huggingface.co/spaces/awacke1/TTS-STT-Blocks) - 3. [NLP Transcript with Video Player](https://huggingface.co/spaces/awacke1/Streamlit-ASR-Video) - 4. [NLP Clinical Ontology Biomedical NER](https://huggingface.co/spaces/awacke1/Biomed-NLP-AI-Clinical-Terminology) - 5. [Document Understanding and NLP](https://huggingface.co/spaces/awacke1/AIDocumentUnderstandingOCR) - 6. [NLP ASR Wav2Vec2 Multilingual](https://huggingface.co/spaces/awacke1/ASR-High-Accuracy-Test) - 7. [Live ASR](https://huggingface.co/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla) - 8. [NLP and Visualization](https://huggingface.co/spaces/awacke1/Visualization-Plotly-Sunbursts-Treemaps-and-WebGL) -""") - - - - - -st.markdown(""" -2. # 🔮Generative AI💭 (🎨Images and 📝Text) - 🎵🧩🔄📊🌌 - 1. 🆕 **🩺⚕️ Generation of new data**: Create new data that resembles existing data. [Example](https://huggingface.co/spaces/awacke1/GenAI-Generate-New-Data-Resembling-Example) - 2. 🎨 **Creative potential**: Generate music, art, or literature. [Example](https://huggingface.co/spaces/awacke1/Creative-Potential-Music-Art-Lit) - 3. 📊 **Data synthesis**: Synthesize data from multiple sources to create new datasets. [Example](https://huggingface.co/spaces/awacke1/Data-Synthesizer-Synthesize-From-Multiple-Sources) - 4. 📈 **🩺⚕️ Data augmentation**: Augment existing datasets to make them larger and more diverse. [Example](https://huggingface.co/spaces/awacke1/Data-Augmentation) - 5. 🔀 **Domain transfer**: Transfer knowledge learned from one domain to another. - 6. 🔍 **Unsupervised learning**: Learn patterns without labeled training data. - 7. 🔄 **Adaptive learning**: Adapt to changes in data over time. - 8. 🔊 **Noise injection**: Introduce noise to explore a wider range of possibilities. - 9. 🕶️ **Latent space manipulation**: Control output by manipulating a model's latent space. - 10. 🖼️ **Realistic output**: Produce output that is difficult to distinguish from human-created data. - - Examples - 1. 
Quantum AI Circuits: https://huggingface.co/spaces/awacke1/AI-Quantum?option=Circuit - 2. Generate Story and Video: https://huggingface.co/spaces/awacke1/ASRGenerateStoryandVideo - 3. ASR Generate Story: https://huggingface.co/spaces/awacke1/ASRGenerateStory - 4. Music Generation: https://huggingface.co/spaces/awacke1/MusicMaker -""") - - - - -st.markdown(""" -3. # 📷Image Recognition🏞️ - 1. 📷 **Object detection**: Detect and identify multiple objects in an image for detailed analysis and classification. - 2. 🏞️ **Scene recognition**: Recognize and classify entire scenes based on objects, colors, and shapes. - 3. 😃 **Facial recognition**: Analyze facial features for accurate identification. - 4. 😊 **Emotion recognition**: Identify emotions on a subject's face, including happiness, sadness, and anger. - 5. 🔤 **Text recognition**: Identify and translate text in images for analysis. - 6. 🎨 **Color recognition**: Detect colors and provide information on hue, saturation, and brightness. - 7. 🔍 **Image segmentation**: Divide an image into multiple regions for individual analysis and classification. - 8. 🌅 **Image restoration**: Remove noise and blur, restoring images to original clarity and quality. - 9. 🔖 **Image classification**: Classify images into categories like animals, buildings, or landscapes. - 10. 🎨 **Style transfer**: Apply the style of one image to another for unique and innovative results. - - Examples - 1. 🩺⚕️ Text-to-Image : [Image Classification](https://huggingface.co/spaces/awacke1/Prompt-Refinery-Text-to-Image-Generation) - 2. Image Captions from 5 SOTA Generators: [URL](https://huggingface.co/spaces/awacke1/ImageCaptionPromptGenerator) - 3. 🩺⚕️ Image to Multilingual OCR: [URL](https://huggingface.co/spaces/awacke1/Image-to-Multilingual-OCR) - 4. WRN - Wide Residual Networks: [URL](https://huggingface.co/spaces/awacke1/ResnetPytorchImageRecognition) - 5. AI Document Understanding: [URL](https://huggingface.co/spaces/awacke1/AIDocumentUnderstandingOCR) - 6. Elixir Docker Bumblebee: [URL](https://huggingface.co/spaces/awacke1/DockerImageRecognitionToText) - 7. Speech to Text to Story to Images to Video: [URL](https://huggingface.co/spaces/awacke1/Speeech2Text2Story2Images2Video) - 8. Image to Line Drawings: [URL](https://huggingface.co/spaces/awacke1/Image-to-Line-Drawings) - 9. Semantic Image Search: [URL](https://huggingface.co/spaces/awacke1/Image-Semantic-Search) - 10. Zoom Clip Toon: [URL](https://huggingface.co/spaces/awacke1/Zoom-Clip-Toon-Image-to-Image) - 11. Image to Reading Labels: [URL](https://huggingface.co/spaces/awacke1/ImageOCRMultilingual) - 12. A Game For That - Gamification Using Snapshot Images: [URL](https://huggingface.co/spaces/awacke1/AGameForThat) - 13. AI Visually Plays QBert, Pong, Seaquest and more: [URL](https://huggingface.co/spaces/awacke1/AI-Atari-Live-Streamlit) - 14. AI Creates Generator Style Mix Art from Encyclopedia: [URL](https://huggingface.co/spaces/awacke1/Art-Generator-and-Style-Mixer) - 15. BigGAN Image Gen and Search: [URL](https://huggingface.co/spaces/awacke1/AI-BigGAN-Image-Gen) - 16. Art Style Line Drawings: [URL](https://huggingface.co/spaces/awacke1/ArtStyleFoodsandNutrition) - 17. 🩺⚕️ Yolo Real Time Image Recognition from Webcam: https://huggingface.co/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco -""") - -st.markdown(""" -4. # 🗣️Speech Recognition💬 - 1. 🔊 **Continuous Speech Recognition**: Transcribe spoken words in real-time without pausing. - 2. 
🗣️ **Speaker Identification**: Identify individual speakers through unique features in their speech. - 3. 🧠 **Contextual Awareness**: Understand conversation context and interpret word meaning. - 4. 🌎 **Multilingual Support**: Recognize and transcribe multiple languages for translation. - 5. 🔇 **Noise Reduction**: Filter out background noise to improve transcription quality. - 6. 🔒 **Voice Biometrics**: Verify speaker identity and provide secure access to personal data. - 7. 🎛️ **Command and Control**: Interpret voice commands to automate tasks and interact with software. - 8. 💬 **Natural Language Processing**: Understand complex human speech patterns. - 9. 🧠 **Adaptive Learning**: Learn and adapt to improve accuracy over time. - 10. ☁️ **Cloud-Based Deployment**: Real-time processing of large amounts of data, even on mobile devices. -""") - -st.markdown(""" -5. # Reinforcement Learning - 1. 🏆 **Reward-driven**: RL uses rewards or punishments to drive its learning process. - 2. 🧪 **Trial-and-error learning**: RL is a trial-and-error learning method, where an agent tries different actions to find the best action that will maximize the cumulative reward. - 3. 🤔 **Exploration-exploitation trade-off**: RL agents need to balance exploration and exploitation to find new possibilities while also exploiting successful actions. - 4. 📈 **Markov Decision Processes**: RL uses MDPs to model decision-making processes. - 5. 📊 **Policy optimization**: RL uses policy optimization techniques to find the best policy for a given task or learn the optimal policy from scratch. - 6. 💰 **Value-based methods**: RL uses value-based methods to estimate the value of each state or action. - 7. 🧠 **Model-based methods**: RL can use model-based methods to predict the outcomes of different actions. - 8. 🤖 **Deep Reinforcement Learning**: DRL combines RL with deep learning techniques to learn complex decision-making tasks. - 9. 🔄 **Transfer learning**: RL can use transfer learning techniques to transfer knowledge learned in one task to another task. - 10. 🤝 **Multi-agent RL**: RL can handle multiple agents that interact with each other. -""") - -st.markdown(""" -6. 🎲Game Theory🎲 – Traditional AI processes - 1. 🤝 **Interdependence**: Game Theory considers decision-making among multiple agents, unlike traditional AI processes which focus on a single agent. - 2. 🎯 **Strategic Behavior**: Game Theory assumes that agents aim to maximize their payoffs based on the actions of other agents. Traditional AI may not consider this strategic element. - 3. 💰 **Payoffs**: Game Theory calculates payoffs for each agent based on their actions and the actions of other agents, unlike traditional AI which may focus on a single objective. - 4. ⚖️ **Equilibrium**: Game Theory seeks to identify stable states in the game where no agent has an incentive to deviate from their current strategy. Traditional AI may not seek to find an equilibrium. - 5. 🎲 **Game Formulation**: Game Theory formulates a game, including rules, players, and possible actions, unlike traditional AI which may not require such formulation. - 6. 💡 **Solution Concepts**: Game Theory has various solution concepts, such as Nash Equilibrium and Pareto Efficiency, to identify the most desirable outcomes. Traditional AI may not have such concepts. - 7. 📊 **Information**: Game Theory considers the information available to each agent in the game. Traditional AI may not consider information explicitly. - 8. 
⚔️ **Adversarial**: Game Theory models adversarial scenarios where agents have conflicting goals. Traditional AI may assume cooperation among agents. - 9. ❓ **Uncertainty**: Game Theory deals with uncertainty and incomplete information in the game. Traditional AI may not consider uncertainty. - 10. 🌐 **Complexity**: Game Theory deals with complex multi-agent interactions. Traditional AI may focus on single-agent optimization. - - Examples - 1. 🩺⚕️ Health Care Game: https://huggingface.co/spaces/awacke1/AI-RPG-Self-Play-RLML-Health-Battler-Game - 2. 🩺⚕️ Sankey Snacks Math Chart Animator: https://huggingface.co/spaces/awacke1/Sankey-Snacks - 3. Blackjack 21 : https://huggingface.co/spaces/awacke1/BlackjackSimulatorCardGameAI - 4. Player Card Monster Battler: https://huggingface.co/spaces/awacke1/Player-Card-Monster-Battler-For-Math-and-AI - 5. Emojitrition: https://huggingface.co/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition -""") - - - - -st.markdown(""" -# 🩺⚕️ Reinforcement Learning from Human Feedback (RLHF) -## 🤖 RLHF is a way for computers to learn how to do things better by getting help and feedback from people, - - just like how you learn new things from your parents or teachers. -🎮 Let's say the computer wants to learn how to play a video game. - - It might start by trying different things and seeing what happens. -👍 If it does something good, like getting a high score, it gets a reward. -👎 If it does something bad, like losing a life, it gets a punishment. -👩‍💻 Now, imagine that a person is watching the computer play the game and giving it feedback. - -The person might say things like "Good job!" when the computer gets a high score - - or "Oops, try again!" when it loses a life. -💡 This feedback helps the computer figure out which actions are good and which ones are bad. - -The computer then uses this feedback to adjust its actions and get better at playing the game. -🤔 It might try different strategies and see which ones get the best feedback from the person. - -Over time, the computer gets better and better at playing the game, just like how you get better at things by practicing and getting help from others. -🚀 RLHF is a cool way for computers to learn and improve with the help of people. - -Who knows, maybe one day you can teach a computer to do something amazing! 
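To make the feedback loop above concrete, here is a tiny toy sketch (a hypothetical illustration, not a real RLHF pipeline and not part of the examples linked below): an agent tries moves in a pretend game, a scripted "human" gives a thumbs-up or thumbs-down, and the agent drifts toward the moves that earn praise.
```python
import random

actions = ["jump", "duck", "run"]
scores = {a: 0.0 for a in actions}        # how much the agent currently likes each move

def human_feedback(action):
    # Stand-in for a person watching the game: 👍 (+1) for "jump", 👎 (-1) otherwise.
    return 1.0 if action == "jump" else -1.0

for step in range(200):
    if random.random() < 0.2:             # sometimes explore a random move
        action = random.choice(actions)
    else:                                  # otherwise exploit the best-looking move
        action = max(scores, key=scores.get)
    scores[action] += 0.1 * human_feedback(action)  # nudge the preference toward the feedback

print(scores)                              # "jump" should end with the highest score
```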
-# Examples -## 🩺⚕️ Hospital Visualizations -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMinnesota -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsNewJersey -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMentalHealth -🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-GraphViz-Folium-MapTopLargeHospitalsinWI -# Card Game Activity -https://huggingface.co/spaces/awacke1/CardGameActivity-GraphViz -https://huggingface.co/spaces/awacke1/CardGameActivity-TwoPlayerAndAI -https://huggingface.co/spaces/awacke1/CardGameActivity -https://huggingface.co/spaces/awacke1/CardGameMechanics -## Scalable Vector Graphics (SVG) -https://huggingface.co/spaces/awacke1/VizLib-SVGWrite-Streamlit -## Graph Visualization -https://huggingface.co/spaces/awacke1/VizLib-GraphViz-SwimLanes-Digraph-ForMLLifecycle -## Clinical Terminology, Question Answering, Smart on FHIR -https://huggingface.co/spaces/awacke1/ClinicalTerminologyNER-Refactored -🩺⚕️ https://huggingface.co/spaces/awacke1/Assessment-By-Organs -🩺⚕️ https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Test2 -🩺⚕️ https://huggingface.co/spaces/awacke1/FHIRLib-FHIRKit -""") - - - diff --git a/spaces/awacke1/SpeechRecognitionwithWav2Vec2/README.md b/spaces/awacke1/SpeechRecognitionwithWav2Vec2/README.md deleted file mode 100644 index 86f0f71c3977c8b05816b36d1d25bdbb5ba999d4..0000000000000000000000000000000000000000 --- a/spaces/awacke1/SpeechRecognitionwithWav2Vec2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SpeechRecognitionwithWav2Vec2 -emoji: 😻 -colorFrom: indigo -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Team.Click.Battle.Multiplayer/app.py b/spaces/awacke1/Team.Click.Battle.Multiplayer/app.py deleted file mode 100644 index 56e922ae56fdc8bd6d35ddee1bbac2150ee6a180..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Team.Click.Battle.Multiplayer/app.py +++ /dev/null @@ -1,87 +0,0 @@ -import streamlit as st -import random -import time -import plotly.graph_objects as go - -teams = [ - ('Team 1', '😎', 'Cool Squad', 'New York City'), - ('Team 2', '🚀', 'Rocketeers', 'Los Angeles'), - ('Team 3', '🤖', 'Robo Gang', 'San Francisco'), - ('Team 4', '🌟', 'Super Stars', 'Chicago'), - ('Team 5', '🐉', 'Dragons', 'Houston') -] - -def create_sankey(votes, turn): - labels = [f"{team[1]} {team[2]}" for team in teams] + ['Upvotes', 'Downvotes'] - source = [] - target = [] - value = [] - - for i, team in enumerate(teams): - source += [i, i] - target += [len(teams), len(teams) + 1] - value += [votes[team[0]][0], votes[team[0]][1]] - - fig = go.Figure(data=[go.Sankey( - node=dict(pad=15, thickness=20, line=dict(color='black', width=0.5), label=labels), - link=dict(source=source, target=target, value=value))]) - - fig.update_layout(title_text=f'Location Simulator by Nickname (Turn {turn})', title_font=dict(size=24, color='blue')) - - return fig - -def run_scenario(duration=100, click_card_limit=None): - start_time = time.time() - votes = {team[0]: [0, 0] for team in teams} # Initialize upvotes and downvotes - click_cards = 0 - chat = [] - turn = 0 - - st.header("Sankey Graph") - fig = create_sankey(votes, turn) - st.plotly_chart(fig, use_container_width=True) - - while time.time() - start_time < duration: - if click_card_limit is None or click_cards < click_card_limit: - click_cards += 1 - - team = 
random.choice(teams) - vote_type = random.choice(['upvote', 'downvote']) - clicks = 1 + 3 * (click_cards > 0) - click_cards -= clicks > 1 - - if vote_type == 'upvote': - votes[team[0]][0] += clicks - else: - votes[team[0]][1] += clicks - chat.append((team, vote_type, clicks)) - - turn += 1 - fig = create_sankey(votes, turn) - st.plotly_chart(fig, use_container_width=True) - - time.sleep(random.uniform(0, 1)) # Random sleep between 0 and 1 seconds - - return votes, chat - -st.title("Team Upvotes and Downvotes Emoji Game") - -duration = st.slider("Duration (seconds)", min_value=0, max_value=100, value=10, step=1) -click_card_limit = st.slider("Click Card Limit", min_value=0, max_value=100, value=10, step=1) - -st.write(f"Running scenario for {duration} seconds with {click_card_limit} click cards...") -votes, chat = run_scenario(duration, click_card_limit) - -st.header("Results") -for team, vote_counts in votes.items(): - st.write(f"{team}: {vote_counts[0]} upvotes, {vote_counts[1]} downvotes") - -st.header("Chat") -for message in chat: - team, vote_type, clicks = message - st.write(f"{team[1]} {team[2]}: {clicks} {vote_type}s") - -st.header("Final Results") -for team, vote_counts in votes.items(): - st.write(f"{team}: {vote_counts[0]} upvotes, {vote_counts[1]} downvotes") - diff --git a/spaces/awacke1/VideoFromImage/README.md b/spaces/awacke1/VideoFromImage/README.md deleted file mode 100644 index 001762555fed2c68546236c9f30e6ac3b1f4dc10..0000000000000000000000000000000000000000 --- a/spaces/awacke1/VideoFromImage/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MS Image2Video -emoji: ⚡ -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -duplicated_from: fffiloni/MS-Image2Video ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/GCodeLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/GCodeLoader.js deleted file mode 100644 index af21142dbdbad31b3c344cb51ab2c7ae58576e6c..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/GCodeLoader.js +++ /dev/null @@ -1,225 +0,0 @@ -'use strict'; - -/** - * THREE.GCodeLoader is used to load gcode files usually used for 3D printing or CNC applications. - * - * Gcode files are composed by commands used by machines to create objects. - * - * @class THREE.GCodeLoader - * @param {Manager} manager Loading manager. - * @author tentone - * @author joewalnes - */ -THREE.GCodeLoader = function ( manager ) { - - this.manager = ( manager !== undefined ) ? 
manager : THREE.DefaultLoadingManager; - - this.splitLayer = false; - -}; - -THREE.GCodeLoader.prototype.load = function ( url, onLoad, onProgress, onError ) { - - var self = this; - - var loader = new THREE.FileLoader( self.manager ); - loader.setPath( self.path ); - loader.load( url, function ( text ) { - - onLoad( self.parse( text ) ); - - }, onProgress, onError ); - -}; - -THREE.GCodeLoader.prototype.setPath = function ( value ) { - - this.path = value; - return this; - -}; - -THREE.GCodeLoader.prototype.parse = function ( data ) { - - var state = { x: 0, y: 0, z: 0, e: 0, f: 0, extruding: false, relative: false }; - var layers = []; - - var currentLayer = undefined; - - var pathMaterial = new THREE.LineBasicMaterial( { color: 0xFF0000 } ); - pathMaterial.name = 'path'; - - var extrudingMaterial = new THREE.LineBasicMaterial( { color: 0x00FF00 } ); - extrudingMaterial.name = 'extruded'; - - function newLayer( line ) { - - currentLayer = { vertex: [], pathVertex: [], z: line.z }; - layers.push( currentLayer ); - - } - - //Create lie segment between p1 and p2 - function addSegment( p1, p2 ) { - - if ( currentLayer === undefined ) { - - newLayer( p1 ); - - } - - if ( line.extruding ) { - - currentLayer.vertex.push( p1.x, p1.y, p1.z ); - currentLayer.vertex.push( p2.x, p2.y, p2.z ); - - } else { - - currentLayer.pathVertex.push( p1.x, p1.y, p1.z ); - currentLayer.pathVertex.push( p2.x, p2.y, p2.z ); - - } - - } - - function delta( v1, v2 ) { - - return state.relative ? v2 : v2 - v1; - - } - - function absolute( v1, v2 ) { - - return state.relative ? v1 + v2 : v2; - - } - - var lines = data.replace( /;.+/g, '' ).split( '\n' ); - - for ( var i = 0; i < lines.length; i ++ ) { - - var tokens = lines[ i ].split( ' ' ); - var cmd = tokens[ 0 ].toUpperCase(); - - //Argumments - var args = {}; - tokens.splice( 1 ).forEach( function ( token ) { - - if ( token[ 0 ] !== undefined ) { - - var key = token[ 0 ].toLowerCase(); - var value = parseFloat( token.substring( 1 ) ); - args[ key ] = value; - - } - - } ); - - //Process commands - //G0/G1 – Linear Movement - if ( cmd === 'G0' || cmd === 'G1' ) { - - var line = { - x: args.x !== undefined ? absolute( state.x, args.x ) : state.x, - y: args.y !== undefined ? absolute( state.y, args.y ) : state.y, - z: args.z !== undefined ? absolute( state.z, args.z ) : state.z, - e: args.e !== undefined ? absolute( state.e, args.e ) : state.e, - f: args.f !== undefined ? absolute( state.f, args.f ) : state.f, - }; - - //Layer change detection is or made by watching Z, it's made by watching when we extrude at a new Z position - if ( delta( state.e, line.e ) > 0 ) { - - line.extruding = delta( state.e, line.e ) > 0; - - if ( currentLayer == undefined || line.z != currentLayer.z ) { - - newLayer( line ); - - } - - } - - addSegment( state, line ); - state = line; - - } else if ( cmd === 'G2' || cmd === 'G3' ) { - - //G2/G3 - Arc Movement ( G2 clock wise and G3 counter clock wise ) - //console.warn( 'THREE.GCodeLoader: Arc command not supported' ); - - } else if ( cmd === 'G90' ) { - - //G90: Set to Absolute Positioning - state.relative = false; - - } else if ( cmd === 'G91' ) { - - //G91: Set to state.relative Positioning - state.relative = true; - - } else if ( cmd === 'G92' ) { - - //G92: Set Position - var line = state; - line.x = args.x !== undefined ? args.x : line.x; - line.y = args.y !== undefined ? args.y : line.y; - line.z = args.z !== undefined ? args.z : line.z; - line.e = args.e !== undefined ? 
args.e : line.e; - state = line; - - } else { - - //console.warn( 'THREE.GCodeLoader: Command not supported:' + cmd ); - - } - - } - - function addObject( vertex, extruding ) { - - var geometry = new THREE.BufferGeometry(); - geometry.addAttribute( 'position', new THREE.Float32BufferAttribute( vertex, 3 ) ); - - var segments = new THREE.LineSegments( geometry, extruding ? extrudingMaterial : pathMaterial ); - segments.name = 'layer' + i; - object.add( segments ); - - } - - var object = new THREE.Group(); - object.name = 'gcode'; - - if ( this.splitLayer ) { - - for ( var i = 0; i < layers.length; i ++ ) { - - var layer = layers[ i ]; - addObject( layer.vertex, true ); - addObject( layer.pathVertex, false ); - - } - - } else { - - var vertex = [], pathVertex = []; - - for ( var i = 0; i < layers.length; i ++ ) { - - var layer = layers[ i ]; - - vertex = vertex.concat( layer.vertex ); - pathVertex = pathVertex.concat( layer.pathVertex ); - - } - - addObject( vertex, true ); - addObject( pathVertex, false ); - - } - - object.quaternion.setFromEuler( new THREE.Euler( - Math.PI / 2, 0, 0 ) ); - - return object; - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/RGBELoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/RGBELoader.js deleted file mode 100644 index 85cdcc1ff819d9fe16ad99f8d15e3a6ad8862b60..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/RGBELoader.js +++ /dev/null @@ -1,396 +0,0 @@ -/** - * @author Nikos M. / https://github.com/foo123/ - */ - -// https://github.com/mrdoob/three.js/issues/5552 -// http://en.wikipedia.org/wiki/RGBE_image_format - -THREE.HDRLoader = THREE.RGBELoader = function ( manager ) { - - this.manager = ( manager !== undefined ) ? manager : THREE.DefaultLoadingManager; - this.type = THREE.UnsignedByteType; - -}; - -// extend THREE.DataTextureLoader -THREE.RGBELoader.prototype = Object.create( THREE.DataTextureLoader.prototype ); - -// adapted from http://www.graphics.cornell.edu/~bjw/rgbe.html -THREE.RGBELoader.prototype._parser = function ( buffer ) { - - var - /* return codes for rgbe routines */ - RGBE_RETURN_SUCCESS = 0, - RGBE_RETURN_FAILURE = - 1, - - /* default error routine. change this to change error handling */ - rgbe_read_error = 1, - rgbe_write_error = 2, - rgbe_format_error = 3, - rgbe_memory_error = 4, - rgbe_error = function ( rgbe_error_code, msg ) { - - switch ( rgbe_error_code ) { - - case rgbe_read_error: console.error( "THREE.RGBELoader Read Error: " + ( msg || '' ) ); - break; - case rgbe_write_error: console.error( "THREE.RGBELoader Write Error: " + ( msg || '' ) ); - break; - case rgbe_format_error: console.error( "THREE.RGBELoader Bad File Format: " + ( msg || '' ) ); - break; - default: - case rgbe_memory_error: console.error( "THREE.RGBELoader: Error: " + ( msg || '' ) ); - - } - return RGBE_RETURN_FAILURE; - - }, - - /* offsets to red, green, and blue components in a data (float) pixel */ - RGBE_DATA_RED = 0, - RGBE_DATA_GREEN = 1, - RGBE_DATA_BLUE = 2, - - /* number of floats per pixel, use 4 since stored in rgba image format */ - RGBE_DATA_SIZE = 4, - - /* flags indicating which fields in an rgbe_header_info are valid */ - RGBE_VALID_PROGRAMTYPE = 1, - RGBE_VALID_FORMAT = 2, - RGBE_VALID_DIMENSIONS = 4, - - NEWLINE = "\n", - - fgets = function ( buffer, lineLimit, consume ) { - - lineLimit = ! lineLimit ? 
1024 : lineLimit; - var p = buffer.pos, - i = - 1, len = 0, s = '', chunkSize = 128, - chunk = String.fromCharCode.apply( null, new Uint16Array( buffer.subarray( p, p + chunkSize ) ) ) - ; - while ( ( 0 > ( i = chunk.indexOf( NEWLINE ) ) ) && ( len < lineLimit ) && ( p < buffer.byteLength ) ) { - - s += chunk; len += chunk.length; - p += chunkSize; - chunk += String.fromCharCode.apply( null, new Uint16Array( buffer.subarray( p, p + chunkSize ) ) ); - - } - - if ( - 1 < i ) { - - /*for (i=l-1; i>=0; i--) { - byteCode = m.charCodeAt(i); - if (byteCode > 0x7f && byteCode <= 0x7ff) byteLen++; - else if (byteCode > 0x7ff && byteCode <= 0xffff) byteLen += 2; - if (byteCode >= 0xDC00 && byteCode <= 0xDFFF) i--; //trail surrogate - }*/ - if ( false !== consume ) buffer.pos += len + i + 1; - return s + chunk.slice( 0, i ); - - } - return false; - - }, - - /* minimal header reading. modify if you want to parse more information */ - RGBE_ReadHeader = function ( buffer ) { - - var line, match, - - // regexes to parse header info fields - magic_token_re = /^#\?(\S+)$/, - gamma_re = /^\s*GAMMA\s*=\s*(\d+(\.\d+)?)\s*$/, - exposure_re = /^\s*EXPOSURE\s*=\s*(\d+(\.\d+)?)\s*$/, - format_re = /^\s*FORMAT=(\S+)\s*$/, - dimensions_re = /^\s*\-Y\s+(\d+)\s+\+X\s+(\d+)\s*$/, - - // RGBE format header struct - header = { - - valid: 0, /* indicate which fields are valid */ - - string: '', /* the actual header string */ - - comments: '', /* comments found in header */ - - programtype: 'RGBE', /* listed at beginning of file to identify it after "#?". defaults to "RGBE" */ - - format: '', /* RGBE format, default 32-bit_rle_rgbe */ - - gamma: 1.0, /* image has already been gamma corrected with given gamma. defaults to 1.0 (no correction) */ - - exposure: 1.0, /* a value of 1.0 in an image corresponds to watts/steradian/m^2. defaults to 1.0 */ - - width: 0, height: 0 /* image dimensions, width/height */ - - }; - - if ( buffer.pos >= buffer.byteLength || ! ( line = fgets( buffer ) ) ) { - - return rgbe_error( rgbe_read_error, "no header found" ); - - } - /* if you want to require the magic token then uncomment the next line */ - if ( ! ( match = line.match( magic_token_re ) ) ) { - - return rgbe_error( rgbe_format_error, "bad initial token" ); - - } - header.valid |= RGBE_VALID_PROGRAMTYPE; - header.programtype = match[ 1 ]; - header.string += line + "\n"; - - while ( true ) { - - line = fgets( buffer ); - if ( false === line ) break; - header.string += line + "\n"; - - if ( '#' === line.charAt( 0 ) ) { - - header.comments += line + "\n"; - continue; // comment line - - } - - if ( match = line.match( gamma_re ) ) { - - header.gamma = parseFloat( match[ 1 ], 10 ); - - } - if ( match = line.match( exposure_re ) ) { - - header.exposure = parseFloat( match[ 1 ], 10 ); - - } - if ( match = line.match( format_re ) ) { - - header.valid |= RGBE_VALID_FORMAT; - header.format = match[ 1 ];//'32-bit_rle_rgbe'; - - } - if ( match = line.match( dimensions_re ) ) { - - header.valid |= RGBE_VALID_DIMENSIONS; - header.height = parseInt( match[ 1 ], 10 ); - header.width = parseInt( match[ 2 ], 10 ); - - } - - if ( ( header.valid & RGBE_VALID_FORMAT ) && ( header.valid & RGBE_VALID_DIMENSIONS ) ) break; - - } - - if ( ! ( header.valid & RGBE_VALID_FORMAT ) ) { - - return rgbe_error( rgbe_format_error, "missing format specifier" ); - - } - if ( ! 
( header.valid & RGBE_VALID_DIMENSIONS ) ) { - - return rgbe_error( rgbe_format_error, "missing image size specifier" ); - - } - - return header; - - }, - - RGBE_ReadPixels_RLE = function ( buffer, w, h ) { - - var data_rgba, offset, pos, count, byteValue, - scanline_buffer, ptr, ptr_end, i, l, off, isEncodedRun, - scanline_width = w, num_scanlines = h, rgbeStart - ; - - if ( - // run length encoding is not allowed so read flat - ( ( scanline_width < 8 ) || ( scanline_width > 0x7fff ) ) || - // this file is not run length encoded - ( ( 2 !== buffer[ 0 ] ) || ( 2 !== buffer[ 1 ] ) || ( buffer[ 2 ] & 0x80 ) ) - ) { - - // return the flat buffer - return new Uint8Array( buffer ); - - } - - if ( scanline_width !== ( ( buffer[ 2 ] << 8 ) | buffer[ 3 ] ) ) { - - return rgbe_error( rgbe_format_error, "wrong scanline width" ); - - } - - data_rgba = new Uint8Array( 4 * w * h ); - - if ( ! data_rgba || ! data_rgba.length ) { - - return rgbe_error( rgbe_memory_error, "unable to allocate buffer space" ); - - } - - offset = 0; pos = 0; ptr_end = 4 * scanline_width; - rgbeStart = new Uint8Array( 4 ); - scanline_buffer = new Uint8Array( ptr_end ); - - // read in each successive scanline - while ( ( num_scanlines > 0 ) && ( pos < buffer.byteLength ) ) { - - if ( pos + 4 > buffer.byteLength ) { - - return rgbe_error( rgbe_read_error ); - - } - - rgbeStart[ 0 ] = buffer[ pos ++ ]; - rgbeStart[ 1 ] = buffer[ pos ++ ]; - rgbeStart[ 2 ] = buffer[ pos ++ ]; - rgbeStart[ 3 ] = buffer[ pos ++ ]; - - if ( ( 2 != rgbeStart[ 0 ] ) || ( 2 != rgbeStart[ 1 ] ) || ( ( ( rgbeStart[ 2 ] << 8 ) | rgbeStart[ 3 ] ) != scanline_width ) ) { - - return rgbe_error( rgbe_format_error, "bad rgbe scanline format" ); - - } - - // read each of the four channels for the scanline into the buffer - // first red, then green, then blue, then exponent - ptr = 0; - while ( ( ptr < ptr_end ) && ( pos < buffer.byteLength ) ) { - - count = buffer[ pos ++ ]; - isEncodedRun = count > 128; - if ( isEncodedRun ) count -= 128; - - if ( ( 0 === count ) || ( ptr + count > ptr_end ) ) { - - return rgbe_error( rgbe_format_error, "bad scanline data" ); - - } - - if ( isEncodedRun ) { - - // a (encoded) run of the same value - byteValue = buffer[ pos ++ ]; - for ( i = 0; i < count; i ++ ) { - - scanline_buffer[ ptr ++ ] = byteValue; - - } - //ptr += count; - - } else { - - // a literal-run - scanline_buffer.set( buffer.subarray( pos, pos + count ), ptr ); - ptr += count; pos += count; - - } - - } - - - // now convert data from buffer into rgba - // first red, then green, then blue, then exponent (alpha) - l = scanline_width; //scanline_buffer.byteLength; - for ( i = 0; i < l; i ++ ) { - - off = 0; - data_rgba[ offset ] = scanline_buffer[ i + off ]; - off += scanline_width; //1; - data_rgba[ offset + 1 ] = scanline_buffer[ i + off ]; - off += scanline_width; //1; - data_rgba[ offset + 2 ] = scanline_buffer[ i + off ]; - off += scanline_width; //1; - data_rgba[ offset + 3 ] = scanline_buffer[ i + off ]; - offset += 4; - - } - - num_scanlines --; - - } - - return data_rgba; - - } - ; - - var byteArray = new Uint8Array( buffer ), - byteLength = byteArray.byteLength; - byteArray.pos = 0; - var rgbe_header_info = RGBE_ReadHeader( byteArray ); - - if ( RGBE_RETURN_FAILURE !== rgbe_header_info ) { - - var w = rgbe_header_info.width, - h = rgbe_header_info.height, - image_rgba_data = RGBE_ReadPixels_RLE( byteArray.subarray( byteArray.pos ), w, h ) - ; - if ( RGBE_RETURN_FAILURE !== image_rgba_data ) { - - if ( this.type === THREE.UnsignedByteType ) { - - var data 
= image_rgba_data; - var format = THREE.RGBEFormat; // handled as THREE.RGBAFormat in shaders - var type = THREE.UnsignedByteType; - - } else if ( this.type === THREE.FloatType ) { - - var RGBEByteToRGBFloat = function ( sourceArray, sourceOffset, destArray, destOffset ) { - - var e = sourceArray[ sourceOffset + 3 ]; - var scale = Math.pow( 2.0, e - 128.0 ) / 255.0; - - destArray[ destOffset + 0 ] = sourceArray[ sourceOffset + 0 ] * scale; - destArray[ destOffset + 1 ] = sourceArray[ sourceOffset + 1 ] * scale; - destArray[ destOffset + 2 ] = sourceArray[ sourceOffset + 2 ] * scale; - - }; - - var numElements = ( image_rgba_data.length / 4 ) * 3; - var floatArray = new Float32Array( numElements ); - - for ( var j = 0; j < numElements; j ++ ) { - - RGBEByteToRGBFloat( image_rgba_data, j * 4, floatArray, j * 3 ); - - } - - var data = floatArray; - var format = THREE.RGBFormat; - var type = THREE.FloatType; - - - } else { - - console.error( 'THREE.RGBELoader: unsupported type: ', this.type ); - - } - - return { - width: w, height: h, - data: data, - header: rgbe_header_info.string, - gamma: rgbe_header_info.gamma, - exposure: rgbe_header_info.exposure, - format: format, - type: type - }; - - } - - } - - return null; - -}; - -THREE.RGBELoader.prototype.setType = function ( value ) { - - this.type = value; - return this; - -}; - - diff --git a/spaces/batuhantosun/Guided-Backpropagation/utils.py b/spaces/batuhantosun/Guided-Backpropagation/utils.py deleted file mode 100644 index ac94e6b780e040c72d79284afccf4b98a2fdf495..0000000000000000000000000000000000000000 --- a/spaces/batuhantosun/Guided-Backpropagation/utils.py +++ /dev/null @@ -1,33 +0,0 @@ -import numpy as np -import cv2 - - -def range_norm(img): - min = img.min() - max = img.max() - eps = 1e-6 - return (img-min)/(max-min+eps) - - -def grad2heatmapped(input_image, grad_image, ratio): - input_image = np.array(input_image) - grad_image = np.array(grad_image) - - # Invert negative pixels - grad_image[grad_image<100] += 128 - - # Apply thresholding and blur to obtain heatmap - th = cv2.threshold(grad_image, 140, 255, cv2.THRESH_BINARY)[1] - blur = cv2.GaussianBlur(th, (11,11), 11) - heatmap = cv2.applyColorMap(blur, cv2.COLORMAP_JET) - - # Apply edge padding to heatmap to have 256x256 size - heatmap = np.pad(heatmap, ((16,16),(16,16),(0,0)), 'edge') - - # Upsample heatmap to input_image size - heatmap = cv2.resize(heatmap, (input_image.shape[1], input_image.shape[0])) - - # Superimpose heatmap on input_image - heatmapped = cv2.addWeighted(input_image, 1-ratio, heatmap, ratio, 0) - - return heatmapped diff --git "a/spaces/betterme/mestreamlit/0__\357\243\277Info.py" "b/spaces/betterme/mestreamlit/0__\357\243\277Info.py" deleted file mode 100644 index 4fa8801bdddfacab96327b6fdcfed850c0466129..0000000000000000000000000000000000000000 --- "a/spaces/betterme/mestreamlit/0__\357\243\277Info.py" +++ /dev/null @@ -1,50 +0,0 @@ -import streamlit as st -import inspect -import textwrap - - -class Page(object): - - def __init__(self, app_title="# App Title", - app_info="> App Info", - sidebar_title="## Sidebar Title", - page_title="Page Title", - page_icon='🔥', - menu_items=None, - show_code=False - ): - st.set_page_config( - page_title=page_title, - page_icon=page_icon, - initial_sidebar_state='auto', - menu_items=menu_items - ) - - if app_title: st.markdown(app_title) - if app_info: st.markdown(app_info) - if sidebar_title: st.sidebar.markdown(sidebar_title) - - if sidebar_title and show_code: self.show_code(self.main) - - def main(self): - raise 
NotImplementedError('Method not implemented!') - - def show_code(self, demo): - """Showing the code of the demo.""" - _ = st.sidebar.checkbox("Show code", False) - if _: - # Showing the code of the demo. - st.markdown("---") - st.markdown("## Main Code") - sourcelines, _ = inspect.getsourcelines(demo) - st.code(textwrap.dedent("".join(sourcelines[1:]))) - st.markdown("---") - - -class SPage(Page): - - def main(self): - st.markdown(f"{st}=={st.__version__}") - - -SPage().main() diff --git a/spaces/bhandsab/meta-llama-Llama-2-70b-hf/app.py b/spaces/bhandsab/meta-llama-Llama-2-70b-hf/app.py deleted file mode 100644 index 4ec735e3ad87d311c5c40e845d963d4474a98f4c..0000000000000000000000000000000000000000 --- a/spaces/bhandsab/meta-llama-Llama-2-70b-hf/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/meta-llama/Llama-2-70b-hf").launch() \ No newline at end of file diff --git a/spaces/bigslime/stablediffusion-infinity/js/toolbar.js b/spaces/bigslime/stablediffusion-infinity/js/toolbar.js deleted file mode 100644 index 6c721bc84d3a41a0761ead58e6034ba4dfd4a6ef..0000000000000000000000000000000000000000 --- a/spaces/bigslime/stablediffusion-infinity/js/toolbar.js +++ /dev/null @@ -1,581 +0,0 @@ -// import { w2ui,w2toolbar,w2field,query,w2alert, w2utils,w2confirm} from "https://rawgit.com/vitmalina/w2ui/master/dist/w2ui.es6.min.js" -// import { w2ui,w2toolbar,w2field,query,w2alert, w2utils,w2confirm} from "https://cdn.jsdelivr.net/gh/vitmalina/w2ui@master/dist/w2ui.es6.min.js" - -// https://stackoverflow.com/questions/36280818/how-to-convert-file-to-base64-in-javascript -function getBase64(file) { - var reader = new FileReader(); - reader.readAsDataURL(file); - reader.onload = function () { - add_image(reader.result); - // console.log(reader.result); - }; - reader.onerror = function (error) { - console.log("Error: ", error); - }; -} - -function getText(file) { - var reader = new FileReader(); - reader.readAsText(file); - reader.onload = function () { - window.postMessage(["load",reader.result],"*") - // console.log(reader.result); - }; - reader.onerror = function (error) { - console.log("Error: ", error); - }; -} - -document.querySelector("#upload_file").addEventListener("change", (event)=>{ - console.log(event); - let file = document.querySelector("#upload_file").files[0]; - getBase64(file); -}) - -document.querySelector("#upload_state").addEventListener("change", (event)=>{ - console.log(event); - let file = document.querySelector("#upload_state").files[0]; - getText(file); -}) - -open_setting = function() { - if (!w2ui.foo) { - new w2form({ - name: "foo", - style: "border: 0px; background-color: transparent;", - fields: [{ - field: "canvas_width", - type: "int", - required: true, - html: { - label: "Canvas Width" - } - }, - { - field: "canvas_height", - type: "int", - required: true, - html: { - label: "Canvas Height" - } - }, - ], - record: { - canvas_width: 1200, - canvas_height: 600, - }, - actions: { - Save() { - this.validate(); - let record = this.getCleanRecord(); - window.postMessage(["resize",record.canvas_width,record.canvas_height],"*"); - w2popup.close(); - }, - custom: { - text: "Cancel", - style: "text-transform: uppercase", - onClick(event) { - w2popup.close(); - } - } - } - }); - } - w2popup.open({ - title: "Form in a Popup", - body: "
", - style: "padding: 15px 0px 0px 0px", - width: 500, - height: 280, - showMax: true, - async onToggle(event) { - await event.complete - w2ui.foo.resize(); - } - }) - .then((event) => { - w2ui.foo.render("#form") - }); -} - -var button_lst=["clear", "load", "save", "export", "upload", "selection", "canvas", "eraser", "outpaint", "accept", "cancel", "retry", "prev", "current", "next", "eraser_size_btn", "eraser_size", "resize_selection", "scale", "zoom_in", "zoom_out", "help"]; -var upload_button_lst=['clear', 'load', 'save', "upload", 'export', 'outpaint', 'resize_selection', 'help', "setting"]; -var resize_button_lst=['clear', 'load', 'save', "upload", 'export', "selection", "canvas", "eraser", 'outpaint', 'resize_selection',"zoom_in", "zoom_out", 'help', "setting"]; -var outpaint_button_lst=['clear', 'load', 'save', "canvas", "eraser", "upload", 'export', 'resize_selection', "zoom_in", "zoom_out",'help', "setting"]; -var outpaint_result_lst=["accept", "cancel", "retry", "prev", "current", "next"]; -var outpaint_result_func_lst=["accept", "retry", "prev", "current", "next"]; - -function check_button(id,text="",checked=true,tooltip="") -{ - return { type: "check", id: id, text: text, icon: checked?"fa-solid fa-square-check":"fa-regular fa-square", checked: checked, tooltip: tooltip }; -} - -var toolbar=new w2toolbar({ - box: "#toolbar", - name: "toolbar", - tooltip: "top", - items: [ - { type: "button", id: "clear", text: "Reset", tooltip: "Reset Canvas", icon: "fa-solid fa-rectangle-xmark" }, - { type: "break" }, - { type: "button", id: "load", tooltip: "Load Canvas", icon: "fa-solid fa-file-import" }, - { type: "button", id: "save", tooltip: "Save Canvas", icon: "fa-solid fa-file-export" }, - { type: "button", id: "export", tooltip: "Export Image", icon: "fa-solid fa-floppy-disk" }, - { type: "break" }, - { type: "button", id: "upload", text: "Upload Image", icon: "fa-solid fa-upload" }, - { type: "break" }, - { type: "radio", id: "selection", group: "1", tooltip: "Selection", icon: "fa-solid fa-arrows-up-down-left-right", checked: true }, - { type: "radio", id: "canvas", group: "1", tooltip: "Canvas", icon: "fa-solid fa-image" }, - { type: "radio", id: "eraser", group: "1", tooltip: "Eraser", icon: "fa-solid fa-eraser" }, - { type: "break" }, - { type: "button", id: "outpaint", text: "Outpaint", tooltip: "Run Outpainting", icon: "fa-solid fa-brush" }, - { type: "break" }, - { type: "button", id: "accept", text: "Accept", tooltip: "Accept current result", icon: "fa-solid fa-check", hidden: true, disable:true,}, - { type: "button", id: "cancel", text: "Cancel", tooltip: "Cancel current outpainting/error", icon: "fa-solid fa-ban", hidden: true}, - { type: "button", id: "retry", text: "Retry", tooltip: "Retry", icon: "fa-solid fa-rotate", hidden: true, disable:true,}, - { type: "button", id: "prev", tooltip: "Prev Result", icon: "fa-solid fa-caret-left", hidden: true, disable:true,}, - { type: "html", id: "current", hidden: true, disable:true, - async onRefresh(event) { - await event.complete - let fragment = query.html(` -
-
- ${this.sel_value ?? "1/1"} -
`) - query(this.box).find("#tb_toolbar_item_current").append(fragment) - } - }, - { type: "button", id: "next", tooltip: "Next Result", icon: "fa-solid fa-caret-right", hidden: true,disable:true,}, - { type: "button", id: "add_image", text: "Add Image", icon: "fa-solid fa-file-circle-plus", hidden: true,disable:true,}, - { type: "button", id: "delete_image", text: "Delete Image", icon: "fa-solid fa-trash-can", hidden: true,disable:true,}, - { type: "button", id: "confirm", text: "Confirm", icon: "fa-solid fa-check", hidden: true,disable:true,}, - { type: "button", id: "cancel_overlay", text: "Cancel", icon: "fa-solid fa-ban", hidden: true,disable:true,}, - { type: "break" }, - { type: "spacer" }, - { type: "break" }, - { type: "button", id: "eraser_size_btn", tooltip: "Eraser Size", text:"Size", icon: "fa-solid fa-eraser", hidden: true, count: 32}, - { type: "html", id: "eraser_size", hidden: true, - async onRefresh(event) { - await event.complete - // let fragment = query.html(` - // - // `) - let fragment = query.html(` - - `) - fragment.filter("input").on("change", event => { - this.eraser_size = event.target.value; - window.overlay.freeDrawingBrush.width=this.eraser_size; - this.setCount("eraser_size_btn", event.target.value); - window.postMessage(["eraser_size", event.target.value],"*") - this.refresh(); - }) - query(this.box).find("#tb_toolbar_item_eraser_size").append(fragment) - } - }, - // { type: "button", id: "resize_eraser", tooltip: "Resize Eraser", icon: "fa-solid fa-sliders" }, - { type: "button", id: "resize_selection", text: "Resize Selection", tooltip: "Resize Selection", icon: "fa-solid fa-expand" }, - { type: "break" }, - { type: "html", id: "scale", - async onRefresh(event) { - await event.complete - let fragment = query.html(` -
-
- ${this.scale_value ?? "100%"} -
`) - query(this.box).find("#tb_toolbar_item_scale").append(fragment) - } - }, - { type: "button", id: "zoom_in", tooltip: "Zoom In", icon: "fa-solid fa-magnifying-glass-plus" }, - { type: "button", id: "zoom_out", tooltip: "Zoom Out", icon: "fa-solid fa-magnifying-glass-minus" }, - { type: "break" }, - { type: "button", id: "help", tooltip: "Help", icon: "fa-solid fa-circle-info" }, - { type: "new-line"}, - { type: "button", id: "setting", text: "Canvas Setting", tooltip: "Resize Canvas Here", icon: "fa-solid fa-sliders" }, - { type: "break" }, - check_button("enable_img2img","Enable Img2Img",false), - // check_button("use_correction","Photometric Correction",false), - check_button("resize_check","Resize Small Input",true), - check_button("enable_safety","Enable Safety Checker",true), - check_button("square_selection","Square Selection Only",false), - {type: "break"}, - check_button("use_seed","Use Seed:",false), - { type: "html", id: "seed_val", - async onRefresh(event) { - await event.complete - let fragment = query.html(` - `) - fragment.filter("input").on("change", event => { - this.config_obj.seed_val = event.target.value; - parent.config_obj=this.config_obj; - this.refresh(); - }) - query(this.box).find("#tb_toolbar_item_seed_val").append(fragment) - } - }, - { type: "button", id: "random_seed", tooltip: "Set a random seed", icon: "fa-solid fa-dice" }, - ], - onClick(event) { - switch(event.target){ - case "setting": - open_setting(); - break; - case "upload": - this.upload_mode=true - document.querySelector("#overlay_container").style.pointerEvents="auto"; - this.click("canvas"); - this.click("selection"); - this.show("confirm","cancel_overlay","add_image","delete_image"); - this.enable("confirm","cancel_overlay","add_image","delete_image"); - this.disable(...upload_button_lst); - query("#upload_file").click(); - if(this.upload_tip) - { - this.upload_tip=false; - w2utils.notify("Note that only visible images will be added to canvas",{timeout:10000,where:query("#container")}) - } - break; - case "resize_selection": - this.resize_mode=true; - this.disable(...resize_button_lst); - this.enable("confirm","cancel_overlay"); - this.show("confirm","cancel_overlay"); - window.postMessage(["resize_selection",""],"*"); - document.querySelector("#overlay_container").style.pointerEvents="auto"; - break; - case "confirm": - if(this.upload_mode) - { - export_image(); - } - else - { - let sel_box=this.selection_box; - window.postMessage(["resize_selection",sel_box.x,sel_box.y,sel_box.width,sel_box.height],"*"); - } - case "cancel_overlay": - end_overlay(); - this.hide("confirm","cancel_overlay","add_image","delete_image"); - if(this.upload_mode){ - this.enable(...upload_button_lst); - } - else - { - this.enable(...resize_button_lst); - window.postMessage(["resize_selection","",""],"*"); - if(event.target=="cancel_overlay") - { - this.selection_box=this.selection_box_bak; - } - } - if(this.selection_box) - { - this.setCount("resize_selection",`${Math.floor(this.selection_box.width/8)*8}x${Math.floor(this.selection_box.height/8)*8}`); - } - this.disable("confirm","cancel_overlay","add_image","delete_image"); - this.upload_mode=false; - this.resize_mode=false; - this.click("selection"); - break; - case "add_image": - query("#upload_file").click(); - break; - case "delete_image": - let active_obj = window.overlay.getActiveObject(); - if(active_obj) - { - window.overlay.remove(active_obj); - window.overlay.renderAll(); - } - else - { - w2utils.notify("You need to select an image 
first",{error:true,timeout:2000,where:query("#container")}) - } - break; - case "load": - query("#upload_state").click(); - this.selection_box=null; - this.setCount("resize_selection",""); - break; - case "next": - case "prev": - window.postMessage(["outpaint", "", event.target], "*"); - break; - case "outpaint": - this.click("selection"); - this.disable(...outpaint_button_lst); - this.show(...outpaint_result_lst); - if(this.outpaint_tip) - { - this.outpaint_tip=false; - w2utils.notify("The canvas stays locked until you accept/cancel current outpainting",{timeout:10000,where:query("#container")}) - } - document.querySelector("#container").style.pointerEvents="none"; - case "retry": - this.disable(...outpaint_result_func_lst); - window.postMessage(["transfer",""],"*") - break; - case "accept": - case "cancel": - this.hide(...outpaint_result_lst); - this.disable(...outpaint_result_func_lst); - this.enable(...outpaint_button_lst); - document.querySelector("#container").style.pointerEvents="auto"; - window.postMessage(["click", event.target],"*"); - let app=parent.document.querySelector("gradio-app"); - app=app.shadowRoot??app; - app.querySelector("#cancel").click(); - break; - case "eraser": - case "selection": - case "canvas": - if(event.target=="eraser") - { - this.show("eraser_size","eraser_size_btn"); - window.overlay.freeDrawingBrush.width=this.eraser_size; - window.overlay.isDrawingMode = true; - } - else - { - this.hide("eraser_size","eraser_size_btn"); - window.overlay.isDrawingMode = false; - } - if(this.upload_mode) - { - if(event.target=="canvas") - { - window.postMessage(["mode", event.target],"*") - document.querySelector("#overlay_container").style.pointerEvents="none"; - document.querySelector("#overlay_container").style.opacity = 0.5; - } - else - { - document.querySelector("#overlay_container").style.pointerEvents="auto"; - document.querySelector("#overlay_container").style.opacity = 1.0; - } - } - else - { - window.postMessage(["mode", event.target],"*") - } - break; - case "help": - w2popup.open({ - title: "Document", - body: "Usage: https://github.com/lkwq007/stablediffusion-infinity/blob/master/docs/usage.md" - }) - break; - case "clear": - w2confirm("Reset canvas?").yes(() => { - window.postMessage(["click", event.target],"*"); - }).no(() => {}) - break; - case "random_seed": - this.config_obj.seed_val=Math.floor(Math.random() * 3000000000); - parent.config_obj=this.config_obj; - this.refresh(); - break; - case "enable_img2img": - case "use_correction": - case "resize_check": - case "enable_safety": - case "use_seed": - case "square_selection": - let target=this.get(event.target); - target.icon=target.checked?"fa-regular fa-square":"fa-solid fa-square-check"; - this.config_obj[event.target]=!target.checked; - parent.config_obj=this.config_obj; - this.refresh(); - break; - case "save": - case "export": - ask_filename(event.target); - break; - default: - // clear, save, export, outpaint, retry - // break, save, export, accept, retry, outpaint - window.postMessage(["click", event.target],"*") - } - console.log("Target: "+ event.target, event) - } -}) -window.w2ui=w2ui; -w2ui.toolbar.config_obj={ - resize_check: true, - enable_safety: true, - use_correction: false, - enable_img2img: false, - use_seed: false, - seed_val: 0, - square_selection: false, -}; -w2ui.toolbar.outpaint_tip=true; -w2ui.toolbar.upload_tip=true; -window.update_count=function(cur,total){ - w2ui.toolbar.sel_value=`${cur}/${total}`; - w2ui.toolbar.refresh(); -} -window.update_eraser=function(val,max_val){ - 
w2ui.toolbar.eraser_size=`${val}`; - w2ui.toolbar.eraser_max=`${max_val}`; - w2ui.toolbar.setCount("eraser_size_btn", `${val}`); - w2ui.toolbar.refresh(); -} -window.update_scale=function(val){ - w2ui.toolbar.scale_value=`${val}`; - w2ui.toolbar.refresh(); -} -window.enable_result_lst=function(){ - w2ui.toolbar.enable(...outpaint_result_lst); -} -function onObjectScaled(e) -{ - let object = e.target; - if(object.isType("rect")) - { - let width=object.getScaledWidth(); - let height=object.getScaledHeight(); - object.scale(1); - width=Math.max(Math.min(width,window.overlay.width-object.left),256); - height=Math.max(Math.min(height,window.overlay.height-object.top),256); - let l=Math.max(Math.min(object.left,window.overlay.width-width-object.strokeWidth),0); - let t=Math.max(Math.min(object.top,window.overlay.height-height-object.strokeWidth),0); - if(window.w2ui.toolbar.config_obj.square_selection) - { - let max_val = Math.min(Math.max(width,height),window.overlay.width,window.overlay.height); - width=max_val; - height=max_val; - } - object.set({ width: width, height: height, left:l,top:t}) - window.w2ui.toolbar.selection_box={width: width, height: height, x:object.left, y:object.top}; - window.w2ui.toolbar.setCount("resize_selection",`${Math.floor(width/8)*8}x${Math.floor(height/8)*8}`); - window.w2ui.toolbar.refresh(); - } -} -function onObjectMoved(e) -{ - let object = e.target; - if(object.isType("rect")) - { - let l=Math.max(Math.min(object.left,window.overlay.width-object.width-object.strokeWidth),0); - let t=Math.max(Math.min(object.top,window.overlay.height-object.height-object.strokeWidth),0); - object.set({left:l,top:t}); - window.w2ui.toolbar.selection_box={width: object.width, height: object.height, x:object.left, y:object.top}; - } -} -window.setup_overlay=function(width,height) -{ - if(window.overlay) - { - window.overlay.setDimensions({width:width,height:height}); - let app=parent.document.querySelector("gradio-app"); - app=app.shadowRoot??app; - app.querySelector("#sdinfframe").style.height=80+Number(height)+"px"; - document.querySelector("#container").style.height= height+"px"; - document.querySelector("#container").style.width = width+"px"; - } - else - { - canvas=new fabric.Canvas("overlay_canvas"); - canvas.setDimensions({width:width,height:height}); - let app=parent.document.querySelector("gradio-app"); - app=app.shadowRoot??app; - app.querySelector("#sdinfframe").style.height=80+Number(height)+"px"; - canvas.freeDrawingBrush = new fabric.EraserBrush(canvas); - canvas.on("object:scaling", onObjectScaled); - canvas.on("object:moving", onObjectMoved); - window.overlay=canvas; - } - document.querySelector("#overlay_container").style.pointerEvents="none"; -} -window.update_overlay=function(width,height) -{ - window.overlay.setDimensions({width:width,height:height},{backstoreOnly:true}); - // document.querySelector("#overlay_container").style.pointerEvents="none"; -} -window.adjust_selection=function(x,y,width,height) -{ - var rect = new fabric.Rect({ - left: x, - top: y, - fill: "rgba(0,0,0,0)", - strokeWidth: 3, - stroke: "rgba(0,0,0,0.7)", - cornerColor: "red", - cornerStrokeColor: "red", - borderColor: "rgba(255, 0, 0, 1.0)", - width: width, - height: height, - lockRotation: true, - }); - rect.setControlsVisibility({ mtr: false }); - window.overlay.add(rect); - window.overlay.setActiveObject(window.overlay.item(0)); - window.w2ui.toolbar.selection_box={width: width, height: height, x:x, y:y}; - window.w2ui.toolbar.selection_box_bak={width: width, height: height, x:x, 
y:y}; -} -function add_image(url) -{ - fabric.Image.fromURL(url,function(img){ - window.overlay.add(img); - window.overlay.setActiveObject(img); - },{left:100,top:100}); -} -function export_image() -{ - data=window.overlay.toDataURL(); - document.querySelector("#upload_content").value=data; - window.postMessage(["upload",""],"*"); - end_overlay(); -} -function end_overlay() -{ - window.overlay.clear(); - document.querySelector("#overlay_container").style.opacity = 1.0; - document.querySelector("#overlay_container").style.pointerEvents="none"; -} -function ask_filename(target) -{ - w2prompt({ - label: "Enter filename", - value: `outpaint_${((new Date(Date.now() -(new Date()).getTimezoneOffset() * 60000))).toISOString().replace("T","_").replace(/[^0-9_]/g, "").substring(0,15)}`, - }) - .change((event) => { - console.log("change", event.detail.originalEvent.target.value); - }) - .ok((event) => { - console.log("value=", event.detail.value); - window.postMessage(["click",target,event.detail.value],"*"); - }) - .cancel((event) => { - console.log("cancel"); - }); -} - -document.querySelector("#container").addEventListener("wheel",(e)=>{e.preventDefault()}) -window.setup_shortcut=function(json) -{ - var config=JSON.parse(json); - var key_map={}; - Object.keys(config.shortcut).forEach(k=>{ - key_map[config.shortcut[k]]=k; - }) - document.addEventListener("keydown",(e)=>{ - if(e.target.tagName!="INPUT") - { - let key=e.key; - if(e.ctrlKey) - { - key="Ctrl+"+e.key; - if(key in key_map) - { - e.preventDefault(); - } - } - if(key in key_map) - { - w2ui.toolbar.click(key_map[key]); - } - } - }) -} \ No newline at end of file diff --git a/spaces/bingbing520/ChatGPT/modules/shared.py b/spaces/bingbing520/ChatGPT/modules/shared.py deleted file mode 100644 index a9e72580aa7ae48f907e923a09099513570a9ad8..0000000000000000000000000000000000000000 --- a/spaces/bingbing520/ChatGPT/modules/shared.py +++ /dev/null @@ -1,55 +0,0 @@ -from modules.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST -import os -import queue - -class State: - interrupted = False - multi_api_key = False - completion_url = COMPLETION_URL - balance_api_url = BALANCE_API_URL - usage_api_url = USAGE_API_URL - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_api_host(self, api_host): - self.completion_url = f"https://{api_host}/v1/chat/completions" - self.balance_api_url = f"https://{api_host}/dashboard/billing/credit_grants" - self.usage_api_url = f"https://{api_host}/dashboard/billing/usage" - os.environ["OPENAI_API_BASE"] = f"https://{api_host}/v1" - - def reset_api_host(self): - self.completion_url = COMPLETION_URL - self.balance_api_url = BALANCE_API_URL - self.usage_api_url = USAGE_API_URL - os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}/v1" - return API_HOST - - def reset_all(self): - self.interrupted = False - self.completion_url = COMPLETION_URL - - def set_api_key_queue(self, api_key_list): - self.multi_api_key = True - self.api_key_queue = queue.Queue() - for api_key in api_key_list: - self.api_key_queue.put(api_key) - - def switching_api_key(self, func): - if not hasattr(self, "api_key_queue"): - return func - - def wrapped(*args, **kwargs): - api_key = self.api_key_queue.get() - args[0].api_key = api_key - ret = func(*args, **kwargs) - self.api_key_queue.put(api_key) - return ret - - return wrapped - - -state = State() diff --git a/spaces/bioriAsaeru/text-to-voice/Archmodels Vol 123.md b/spaces/bioriAsaeru/text-to-voice/Archmodels Vol 123.md 
deleted file mode 100644 index 731ea673a0e37e8981d882d66f1e127123c24657..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Archmodels Vol 123.md +++ /dev/null @@ -1,18 +0,0 @@ -

Archmodels Vol 123


DOWNLOADhttps://urloso.com/2uyPR9



- -Oct 23, 2018 · 3d modeling software, modeller and general home 3d modeling software for beginners to advanced users. Home 3D modelers software make 3d models, which can be used in various. Bedroom sets 3d modeler - the newest fully functional 3d modeler for your home 3d design and 3d modeling. 3D modeling and 3d rendering software for beginners and professionals. Find a variety of 3D modeling and rendering tools and apps to help you create your own 3D models or 3D designs. - -The following is a list of a few apps that you can use for 3D modeling, designing and rendering. Mac 3d modeling apps. Rendering software for macOS 3d modeling apps. - -However if your printer isnt used for 3d printing, it would have a whole different application in the bedroom. To get started with 3D printing, youll want to learn 3D printing software, and to get your head. 3d modeling software, modeller and general home 3d modeling software for beginners to advanced users. Home 3D modelers software make 3d models, which can be used in various. - -3D modeling, rendering and imaging software. Free download. Bedroom set 3d modeler - the newest fully functional 3d modeler for your home 3d design and 3d modeling. 3D modeling and 3d rendering software for beginners and professionals. Find a variety of 3D modeling and rendering tools and apps to help you create your own 3D models or 3D designs. - -Since theyre generally known for their pencils, pencil modelling softwares are probably among the most frequently used software for rendering images. Modeling, rendering, and animation are key parts of animation and character design. When building models, it's essential to choose a tool that can be used effectively and efficiently. - -See our list of best 3d modeling software for more popular choices. Bedroom set 3d modeler - the newest fully functional 3d modeler for your home 3d design and 3d modeling. 3D modeling and 3d rendering software for beginners and professionals. Find a variety of 3D modeling and rendering tools and apps to help you create your own 3D models or 3D designs. - -First round of 3D renders. For additional resources, check out our list of recommended apps and software. Free 3D modelling software is available to everyone with a PC. This is the kind of software that provides a feature set comparable 4fefd39f24
-
-
-

diff --git a/spaces/blossom618/text_generator/App.py b/spaces/blossom618/text_generator/App.py deleted file mode 100644 index 51bf358eea0ad0221716c05225ea3f5307cd0f19..0000000000000000000000000000000000000000 --- a/spaces/blossom618/text_generator/App.py +++ /dev/null @@ -1,10 +0,0 @@ -import gradio as gr -from transformers import pipeline - -generator = pipeline('text-generation', model='gpt2') - - def generate (text): - result=generator (text) - return result [0] ['generated_text') - -gr.Interface (fn=generate, inputs=gr.inputs.Textbox(), outputs=gr.outputs. Textbox()) . launch () \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py deleted file mode 100644 index f7e67bcc89dd0c8e50d770e600b55f179fe19588..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Training of the 4 diffusion models described in -"From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion" -(paper link). -""" - -from ._explorers import DiffusionExplorer - - -@DiffusionExplorer -def explorer(launcher): - launcher.slurm_(gpus=4, partition='learnfair') - - launcher.bind_({'solver': 'diffusion/default', - 'dset': 'internal/music_10k_32khz'}) - - with launcher.job_array(): - launcher({'filter.use': True, 'filter.idx_band': 0, "processor.use": False, 'processor.power_std': 0.4}) - launcher({'filter.use': True, 'filter.idx_band': 1, "processor.use": False, 'processor.power_std': 0.4}) - launcher({'filter.use': True, 'filter.idx_band': 2, "processor.use": True, 'processor.power_std': 0.4}) - launcher({'filter.use': True, 'filter.idx_band': 3, "processor.use": True, 'processor.power_std': 0.75}) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_patch.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_patch.py deleted file mode 100644 index 2da70ae34e31dfe1a2ab4d5625a3e2b096aa5c7f..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_patch.py +++ /dev/null @@ -1,189 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import contextlib -from unittest import mock -import torch - -from detectron2.modeling import poolers -from detectron2.modeling.proposal_generator import rpn -from detectron2.modeling.roi_heads import keypoint_head, mask_head -from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers - -from .c10 import ( - Caffe2Compatible, - Caffe2FastRCNNOutputsInference, - Caffe2KeypointRCNNInference, - Caffe2MaskRCNNInference, - Caffe2ROIPooler, - Caffe2RPN, - caffe2_fast_rcnn_outputs_inference, - caffe2_keypoint_rcnn_inference, - caffe2_mask_rcnn_inference, -) - - -class GenericMixin(object): - pass - - -class Caffe2CompatibleConverter(object): - """ - A GenericUpdater which implements the `create_from` interface, by modifying - module object and assign it with another class replaceCls. 
- """ - - def __init__(self, replaceCls): - self.replaceCls = replaceCls - - def create_from(self, module): - # update module's class to the new class - assert isinstance(module, torch.nn.Module) - if issubclass(self.replaceCls, GenericMixin): - # replaceCls should act as mixin, create a new class on-the-fly - new_class = type( - "{}MixedWith{}".format(self.replaceCls.__name__, module.__class__.__name__), - (self.replaceCls, module.__class__), - {}, # {"new_method": lambda self: ...}, - ) - module.__class__ = new_class - else: - # replaceCls is complete class, this allow arbitrary class swap - module.__class__ = self.replaceCls - - # initialize Caffe2Compatible - if isinstance(module, Caffe2Compatible): - module.tensor_mode = False - - return module - - -def patch(model, target, updater, *args, **kwargs): - """ - recursively (post-order) update all modules with the target type and its - subclasses, make a initialization/composition/inheritance/... via the - updater.create_from. - """ - for name, module in model.named_children(): - model._modules[name] = patch(module, target, updater, *args, **kwargs) - if isinstance(model, target): - return updater.create_from(model, *args, **kwargs) - return model - - -def patch_generalized_rcnn(model): - ccc = Caffe2CompatibleConverter - model = patch(model, rpn.RPN, ccc(Caffe2RPN)) - model = patch(model, poolers.ROIPooler, ccc(Caffe2ROIPooler)) - - return model - - -@contextlib.contextmanager -def mock_fastrcnn_outputs_inference( - tensor_mode, check=True, box_predictor_type=FastRCNNOutputLayers -): - with mock.patch.object( - box_predictor_type, - "inference", - autospec=True, - side_effect=Caffe2FastRCNNOutputsInference(tensor_mode), - ) as mocked_func: - yield - if check: - assert mocked_func.call_count > 0 - - -@contextlib.contextmanager -def mock_mask_rcnn_inference(tensor_mode, patched_module, check=True): - with mock.patch( - "{}.mask_rcnn_inference".format(patched_module), side_effect=Caffe2MaskRCNNInference() - ) as mocked_func: - yield - if check: - assert mocked_func.call_count > 0 - - -@contextlib.contextmanager -def mock_keypoint_rcnn_inference(tensor_mode, patched_module, use_heatmap_max_keypoint, check=True): - with mock.patch( - "{}.keypoint_rcnn_inference".format(patched_module), - side_effect=Caffe2KeypointRCNNInference(use_heatmap_max_keypoint), - ) as mocked_func: - yield - if check: - assert mocked_func.call_count > 0 - - -class ROIHeadsPatcher: - def __init__(self, heads, use_heatmap_max_keypoint): - self.heads = heads - self.use_heatmap_max_keypoint = use_heatmap_max_keypoint - self.previous_patched = {} - - @contextlib.contextmanager - def mock_roi_heads(self, tensor_mode=True): - """ - Patching several inference functions inside ROIHeads and its subclasses - - Args: - tensor_mode (bool): whether the inputs/outputs are caffe2's tensor - format or not. Default to True. - """ - # NOTE: this requries the `keypoint_rcnn_inference` and `mask_rcnn_inference` - # are called inside the same file as BaseXxxHead due to using mock.patch. 
- kpt_heads_mod = keypoint_head.BaseKeypointRCNNHead.__module__ - mask_head_mod = mask_head.BaseMaskRCNNHead.__module__ - - mock_ctx_managers = [ - mock_fastrcnn_outputs_inference( - tensor_mode=tensor_mode, - check=True, - box_predictor_type=type(self.heads.box_predictor), - ) - ] - if getattr(self.heads, "keypoint_on", False): - mock_ctx_managers += [ - mock_keypoint_rcnn_inference( - tensor_mode, kpt_heads_mod, self.use_heatmap_max_keypoint - ) - ] - if getattr(self.heads, "mask_on", False): - mock_ctx_managers += [mock_mask_rcnn_inference(tensor_mode, mask_head_mod)] - - with contextlib.ExitStack() as stack: # python 3.3+ - for mgr in mock_ctx_managers: - stack.enter_context(mgr) - yield - - def patch_roi_heads(self, tensor_mode=True): - self.previous_patched["box_predictor"] = self.heads.box_predictor.inference - self.previous_patched["keypoint_rcnn"] = keypoint_head.keypoint_rcnn_inference - self.previous_patched["mask_rcnn"] = mask_head.mask_rcnn_inference - - def patched_fastrcnn_outputs_inference(predictions, proposal): - return caffe2_fast_rcnn_outputs_inference( - True, self.heads.box_predictor, predictions, proposal - ) - - self.heads.box_predictor.inference = patched_fastrcnn_outputs_inference - - if getattr(self.heads, "keypoint_on", False): - - def patched_keypoint_rcnn_inference(pred_keypoint_logits, pred_instances): - return caffe2_keypoint_rcnn_inference( - self.use_heatmap_max_keypoint, pred_keypoint_logits, pred_instances - ) - - keypoint_head.keypoint_rcnn_inference = patched_keypoint_rcnn_inference - - if getattr(self.heads, "mask_on", False): - - def patched_mask_rcnn_inference(pred_mask_logits, pred_instances): - return caffe2_mask_rcnn_inference(pred_mask_logits, pred_instances) - - mask_head.mask_rcnn_inference = patched_mask_rcnn_inference - - def unpatch_roi_heads(self): - self.heads.box_predictor.inference = self.previous_patched["box_predictor"] - keypoint_head.keypoint_rcnn_inference = self.previous_patched["keypoint_rcnn"] - mask_head.mask_rcnn_inference = self.previous_patched["mask_rcnn"] diff --git a/spaces/camenduru-com/riffusion-api/README.md b/spaces/camenduru-com/riffusion-api/README.md deleted file mode 100644 index bd2838089b8594792d6f3f341230a88d2fdddb30..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/riffusion-api/README.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Riffusion App API -emoji: ⚙ -colorFrom: grey -colorTo: grey -sdk: docker -pinned: false ---- diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/os.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/os.py deleted file mode 100644 index 29bc748fa91a6d3de6ec42842416de6af7134f5c..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/os.py +++ /dev/null @@ -1,51 +0,0 @@ -"""Async executor versions of file functions from the os module.""" -import os - -from . 
import ospath as path -from .ospath import wrap - -__all__ = [ - "path", - "stat", - "statvfs", - "rename", - "renames", - "replace", - "remove", - "unlink", - "mkdir", - "makedirs", - "rmdir", - "removedirs", - "link", - "symlink", - "readlink", - "listdir", - "scandir", - "access", - "sendfile", - "wrap", -] - - -stat = wrap(os.stat) -rename = wrap(os.rename) -renames = wrap(os.renames) -replace = wrap(os.replace) -remove = wrap(os.remove) -unlink = wrap(os.unlink) -mkdir = wrap(os.mkdir) -makedirs = wrap(os.makedirs) -rmdir = wrap(os.rmdir) -removedirs = wrap(os.removedirs) -link = wrap(os.link) -symlink = wrap(os.symlink) -readlink = wrap(os.readlink) -listdir = wrap(os.listdir) -scandir = wrap(os.scandir) -access = wrap(os.access) - -if hasattr(os, "sendfile"): - sendfile = wrap(os.sendfile) -if hasattr(os, "statvfs"): - statvfs = wrap(os.statvfs) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/doc/DENSEPOSE_IUV.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/doc/DENSEPOSE_IUV.md deleted file mode 100644 index de158e0eea0c287507b701376abc9307ce92c0f1..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/doc/DENSEPOSE_IUV.md +++ /dev/null @@ -1,627 +0,0 @@ -# Chart-based Dense Pose Estimation for Humans and Animals - -## Overview - -The goal of chart-based DensePose methods is to establish dense correspondences -between image pixels and 3D object mesh by splitting the latter into charts and estimating -for each pixel the corresponding chart index `I` and local chart coordinates `(U, V)`. - -
- -
- -The charts used for human DensePose estimation are shown in Figure 1. -The human body is split into 24 parts, each part is parametrized by `U` and `V` -coordinates, each taking values in `[0, 1]`. - -
- -
-

Figure 1. Partitioning and parametrization of human body surface.

- -The pipeline uses [Faster R-CNN](https://arxiv.org/abs/1506.01497) -with [Feature Pyramid Network](https://arxiv.org/abs/1612.03144) meta architecture -outlined in Figure 2. For each detected object, the model predicts -its coarse segmentation `S` (2 or 15 channels: foreground / background or -background + 14 predefined body parts), fine segmentation `I` (25 channels: -background + 24 predefined body parts) and local chart coordinates `U` and `V`. - -
- -
-

Figure 2. DensePose chart-based architecture based on Faster R-CNN with Feature Pyramid Network (FPN).
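
The per-pixel readout implied by this parametrization is straightforward: pick the most likely chart from `I` (restricted to the foreground pixels given by `S`) and take the `U`, `V` values predicted for that chart. The snippet below is a minimal sketch of that logic, not the project's actual post-processing code; the tensor shapes and the function name are assumptions for illustration.

```python
import torch

def iuv_from_chart_predictions(s_logits, i_logits, u_coords, v_coords):
    """Combine chart-based head outputs into a per-pixel (I, U, V) image.

    Assumed shapes for a single detected person, at the head output resolution:
      s_logits: (2, H, W)   coarse segmentation (background / foreground)
      i_logits: (25, H, W)  fine segmentation (background + 24 charts)
      u_coords: (25, H, W)  per-chart U coordinates
      v_coords: (25, H, W)  per-chart V coordinates
    """
    foreground = s_logits.argmax(dim=0) > 0        # pixels assigned to the person
    chart = i_logits.argmax(dim=0)                 # chart index I in {0, ..., 24}
    chart = torch.where(foreground, chart, torch.zeros_like(chart))  # background -> chart 0
    index = chart.unsqueeze(0)                     # (1, H, W), selects U/V of the winning chart
    u = torch.gather(u_coords, 0, index).squeeze(0).clamp(0.0, 1.0)
    v = torch.gather(v_coords, 0, index).squeeze(0).clamp(0.0, 1.0)
    return torch.stack([chart.to(u.dtype), u, v], dim=0)  # (3, H, W) IUV image
```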

- -### Bootstrapping Chart-Based Models - -[Sanakoyeu et al., 2020](https://arxiv.org/pdf/2003.00080.pdf) introduced a pipeline -to transfer DensePose models trained on humans to proximal animal classes (chimpanzees), -which is summarized in Figure 3. The training proceeds in two stages: - -First, a *master* model is trained on data from source domain (humans with full -DensePose annotation `S`, `I`, `U` and `V`) -and supporting domain (animals with segmentation annotation only). -Only selected animal classes are chosen from the supporting -domain through *category filters* to guarantee the quality of target domain results. -The training is done in *class-agnostic manner*: all selected categories are mapped -to a single category (human). - -Second, a *student* model is trained on data from source and supporting domains, -as well as data from target domain obtained by applying the master model, selecting -high-confidence detections and sampling the results. - -
- -
-

Figure 3. Domain adaptation: master model is trained on data from source and -supporting domains to produce predictions in target domain; student model combines data from source and -supporting domains, as well as sampled predictions from the master model on target domain to improve -target domain predictions quality.

- -Examples of pretrained master and student models are available in the [Model Zoo](#ModelZooBootstrap). -For more details on the bootstrapping pipeline, please see [Bootstrapping Pipeline](BOOTSTRAPPING_PIPELINE.md). - -### Datasets - -For more details on datasets used for chart-based model training and validation, -please refer to the [DensePose Datasets](DENSEPOSE_DATASETS.md) page. - -## Model Zoo and Baselines - -### Legacy Models - -Baselines trained using schedules from [Güler et al, 2018](https://arxiv.org/pdf/1802.00434.pdf) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. AP GPS | dp. AP GPSm | model id | download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R_50_FPN_s1x_legacy | s1x | 0.307 | 0.051 | 3.2 | 58.1 | 58.2 | 52.1 | 54.9 | 164832157 | model \| metrics |
| R_101_FPN_s1x_legacy | s1x | 0.390 | 0.063 | 4.3 | 59.5 | 59.3 | 53.2 | 56.0 | 164832182 | model \| metrics |
- -### Improved Baselines, Original Fully Convolutional Head - -These models use an improved training schedule and Panoptic FPN head from [Kirillov et al, 2019](https://arxiv.org/abs/1901.02446). - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. AP GPS | dp. AP GPSm | model id | download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R_50_FPN_s1x | s1x | 0.359 | 0.066 | 4.5 | 61.2 | 67.2 | 63.7 | 65.3 | 165712039 | model \| metrics |
| R_101_FPN_s1x | s1x | 0.428 | 0.079 | 5.8 | 62.3 | 67.8 | 64.5 | 66.2 | 165712084 | model \| metrics |
- -### Improved Baselines, DeepLabV3 Head - -These models use an improved training schedule, Panoptic FPN head from [Kirillov et al, 2019](https://arxiv.org/abs/1901.02446) and DeepLabV3 head from [Chen et al, 2017](https://arxiv.org/abs/1706.05587). - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. AP GPS | dp. AP GPSm | model id | download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R_50_FPN_DL_s1x | s1x | 0.392 | 0.070 | 6.7 | 61.1 | 68.3 | 65.6 | 66.7 | 165712097 | model \| metrics |
| R_101_FPN_DL_s1x | s1x | 0.478 | 0.083 | 7.0 | 62.3 | 68.7 | 66.3 | 67.6 | 165712116 | model \| metrics |
- -###
 Baselines with Confidence Estimation
-
-These models perform additional estimation of confidence in regressed UV coordinates, along the lines of [Neverova et al., 2019](https://papers.nips.cc/paper/8378-correlated-uncertainty-for-learning-dense-correspondences-from-noisy-labels).
-
| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. AP GPS | dp. AP GPSm | model id | download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R_50_FPN_WC1_s1x | s1x | 0.353 | 0.064 | 4.6 | 60.5 | 67.0 | 64.2 | 65.4 | 173862049 | model \| metrics |
| R_50_FPN_WC2_s1x | s1x | 0.364 | 0.066 | 4.8 | 60.7 | 66.9 | 64.2 | 65.7 | 173861455 | model \| metrics |
| R_50_FPN_DL_WC1_s1x | s1x | 0.397 | 0.068 | 6.7 | 61.1 | 68.1 | 65.8 | 67.0 | 173067973 | model \| metrics |
| R_50_FPN_DL_WC2_s1x | s1x | 0.410 | 0.070 | 6.8 | 60.8 | 67.9 | 65.6 | 66.7 | 173859335 | model \| metrics |
| R_101_FPN_WC1_s1x | s1x | 0.435 | 0.076 | 5.7 | 62.5 | 67.6 | 64.9 | 66.3 | 171402969 | model \| metrics |
| R_101_FPN_WC2_s1x | s1x | 0.450 | 0.078 | 5.7 | 62.3 | 67.6 | 64.8 | 66.4 | 173860702 | model \| metrics |
| R_101_FPN_DL_WC1_s1x | s1x | 0.479 | 0.081 | 7.9 | 62.0 | 68.4 | 66.2 | 67.2 | 173858525 | model \| metrics |
| R_101_FPN_DL_WC2_s1x | s1x | 0.491 | 0.082 | 7.6 | 61.7 | 68.3 | 65.9 | 67.2 | 173294801 | model \| metrics |
- -Acronyms: - -`WC1`: with confidence estimation model type 1 for `U` and `V` - -`WC2`: with confidence estimation model type 2 for `U` and `V` - -###
 Baselines with Mask Confidence Estimation
-
-Models that perform estimation of confidence in regressed UV coordinates
-as well as confidences associated with coarse and fine segmentation,
-see [Sanakoyeu et al., 2020](https://arxiv.org/pdf/2003.00080.pdf) for details.
-
| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. AP GPS | dp. AP GPSm | model id | download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R_50_FPN_WC1M_s1x | s1x | 0.381 | 0.066 | 4.8 | 60.6 | 66.7 | 64.0 | 65.4 | 217144516 | model \| metrics |
| R_50_FPN_WC2M_s1x | s1x | 0.342 | 0.068 | 5.0 | 60.7 | 66.9 | 64.2 | 65.5 | 216245640 | model \| metrics |
| R_50_FPN_DL_WC1M_s1x | s1x | 0.371 | 0.068 | 6.0 | 60.7 | 68.0 | 65.2 | 66.7 | 216245703 | model \| metrics |
| R_50_FPN_DL_WC2M_s1x | s1x | 0.385 | 0.071 | 6.1 | 60.8 | 68.1 | 65.0 | 66.4 | 216245758 | model \| metrics |
| R_101_FPN_WC1M_s1x | s1x | 0.423 | 0.079 | 5.9 | 62.0 | 67.3 | 64.8 | 66.0 | 216453687 | model \| metrics |
| R_101_FPN_WC2M_s1x | s1x | 0.436 | 0.080 | 5.9 | 62.5 | 67.4 | 64.5 | 66.0 | 216245682 | model \| metrics |
| R_101_FPN_DL_WC1M_s1x | s1x | 0.453 | 0.079 | 6.8 | 62.0 | 68.1 | 66.4 | 67.1 | 216245771 | model \| metrics |
| R_101_FPN_DL_WC2M_s1x | s1x | 0.464 | 0.080 | 6.9 | 61.9 | 68.2 | 66.1 | 67.1 | 216245790 | model \| metrics |
- -Acronyms: - -`WC1M`: with confidence estimation model type 1 for `U` and `V` and mask confidence estimation - -`WC2M`: with confidence estimation model type 2 for `U` and `V` and mask confidence estimation - -###
Bootstrapping Baselines - -Master and student models trained using the bootstrapping pipeline with chimpanzee as the target category, -see [Sanakoyeu et al., 2020](https://arxiv.org/pdf/2003.00080.pdf) -and [Bootstrapping Pipeline](BOOTSTRAPPING_PIPELINE.md) for details. -Evaluation is performed on [DensePose Chimps](DENSEPOSE_DATASETS.md#densepose-chimps) dataset. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. APex GPS | dp. AP GPS | dp. AP GPSm | model id | download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R_50_FPN_DL_WC1M_3x_Atop10P_CA | 3x | 0.522 | 0.073 | 9.7 | 61.3 | 59.1 | 36.2 | 20.0 | 30.2 | 217578784 | model \| metrics |
| R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_uniform | 3x | 1.939 | 0.072 | 10.1 | 60.9 | 58.5 | 37.2 | 21.5 | 31.0 | 256453729 | model \| metrics |
| R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_uv | 3x | 1.985 | 0.072 | 9.6 | 61.4 | 58.9 | 38.3 | 22.2 | 32.1 | 256452095 | model \| metrics |
| R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_finesegm | 3x | 2.047 | 0.072 | 10.3 | 60.9 | 58.5 | 36.7 | 20.7 | 30.7 | 256452819 | model \| metrics |
| R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_coarsesegm | 3x | 1.830 | 0.070 | 9.6 | 61.3 | 59.2 | 37.9 | 21.5 | 31.6 | 256455697 | model \| metrics |
- -Acronyms: - -`WC1M`: with confidence estimation model type 1 for `U` and `V` and mask confidence estimation - -`Atop10P`: humans and animals from the 10 best suitable categories are used for training - -`CA`: class agnostic training, where all annotated instances are mapped into a single category - -`B_<...>`: schedule with bootstrapping with the specified results sampling strategy - -Note: - -The relaxed `dp. APex GPS` metric was used in -[Sanakoyeu et al., 2020](https://arxiv.org/pdf/2003.00080.pdf) to evaluate DensePose -results. This metric considers matches at thresholds 0.2, 0.3 and 0.4 additionally -to the standard ones used in the evaluation protocol. The minimum threshold is -controlled by `DENSEPOSE_EVALUATION.MIN_IOU_THRESHOLD` config option. - -### License - -All models available for download are licensed under the -[Creative Commons Attribution-ShareAlike 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/) - -## References - -If you use chart-based DensePose methods, please take the references from the following -BibTeX entries: - -DensePose bootstrapping pipeline: -``` -@InProceedings{Sanakoyeu2020TransferringDensePose, - title = {Transferring Dense Pose to Proximal Animal Classes}, - author = {Artsiom Sanakoyeu and Vasil Khalidov and Maureen S. McCarthy and Andrea Vedaldi and Natalia Neverova}, - journal = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, - year = {2020}, -} -``` - -DensePose with confidence estimation: -``` -@InProceedings{Neverova2019DensePoseConfidences, - title = {Correlated Uncertainty for Learning Dense Correspondences from Noisy Labels}, - author = {Neverova, Natalia and Novotny, David and Vedaldi, Andrea}, - journal = {Advances in Neural Information Processing Systems}, - year = {2019}, -} -``` - -Original DensePose: -``` -@InProceedings{Guler2018DensePose, - title={DensePose: Dense Human Pose Estimation In The Wild}, - author={R\{i}za Alp G\"uler, Natalia Neverova, Iasonas Kokkinos}, - journal={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, - year={2018} -} -``` diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/point_sup/point_utils.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/point_sup/point_utils.py deleted file mode 100644 index eed876ea9e0127c584c008bd5aab3e16e2c8c66a..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/point_sup/point_utils.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import torch - -from detectron2.layers import cat - - -def get_point_coords_from_point_annotation(instances): - """ - Load point coords and their corresponding labels from point annotation. - - Args: - instances (list[Instances]): A list of N Instances, where N is the number of images - in the batch. These instances are in 1:1 - correspondence with the pred_mask_logits. The ground-truth labels (class, box, mask, - ...) associated with each instance are stored in fields. - Returns: - point_coords (Tensor): A tensor of shape (N, P, 2) that contains the coordinates of P - sampled points. - point_labels (Tensor): A tensor of shape (N, P) that contains the labels of P - sampled points. 
`point_labels` takes 3 possible values: - - 0: the point belongs to background - - 1: the point belongs to the object - - -1: the point is ignored during training - """ - point_coords_list = [] - point_labels_list = [] - for instances_per_image in instances: - if len(instances_per_image) == 0: - continue - point_coords = instances_per_image.gt_point_coords.to(torch.float32) - point_labels = instances_per_image.gt_point_labels.to(torch.float32).clone() - proposal_boxes_per_image = instances_per_image.proposal_boxes.tensor - - # Convert point coordinate system, ground truth points are in image coord. - point_coords_wrt_box = get_point_coords_wrt_box(proposal_boxes_per_image, point_coords) - - # Ignore points that are outside predicted boxes. - point_ignores = ( - (point_coords_wrt_box[:, :, 0] < 0) - | (point_coords_wrt_box[:, :, 0] > 1) - | (point_coords_wrt_box[:, :, 1] < 0) - | (point_coords_wrt_box[:, :, 1] > 1) - ) - point_labels[point_ignores] = -1 - - point_coords_list.append(point_coords_wrt_box) - point_labels_list.append(point_labels) - - return ( - cat(point_coords_list, dim=0), - cat(point_labels_list, dim=0), - ) - - -def get_point_coords_wrt_box(boxes_coords, point_coords): - """ - Convert image-level absolute coordinates to box-normalized [0, 1] x [0, 1] point cooordinates. - Args: - boxes_coords (Tensor): A tensor of shape (R, 4) that contains bounding boxes. - coordinates. - point_coords (Tensor): A tensor of shape (R, P, 2) that contains - image-normalized coordinates of P sampled points. - Returns: - point_coords_wrt_box (Tensor): A tensor of shape (R, P, 2) that contains - [0, 1] x [0, 1] box-normalized coordinates of the P sampled points. - """ - with torch.no_grad(): - point_coords_wrt_box = point_coords.clone() - point_coords_wrt_box[:, :, 0] -= boxes_coords[:, None, 0] - point_coords_wrt_box[:, :, 1] -= boxes_coords[:, None, 1] - point_coords_wrt_box[:, :, 0] = point_coords_wrt_box[:, :, 0] / ( - boxes_coords[:, None, 2] - boxes_coords[:, None, 0] - ) - point_coords_wrt_box[:, :, 1] = point_coords_wrt_box[:, :, 1] / ( - boxes_coords[:, None, 3] - boxes_coords[:, None, 1] - ) - return point_coords_wrt_box diff --git a/spaces/catundchat/tts_cn/modules.py b/spaces/catundchat/tts_cn/modules.py deleted file mode 100644 index 289f4e3bdc7e1c783766b4c20bdf4475e65c932b..0000000000000000000000000000000000000000 --- a/spaces/catundchat/tts_cn/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = 
kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - 
self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - 
remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
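        # `h` now holds (num_bins * 3 - 1) values per position and half-channel:
        # num_bins unnormalized bin widths, num_bins unnormalized bin heights and
        # (num_bins - 1) unnormalized knot derivatives for the piecewise rational
        # quadratic spline applied below; the widths and heights are scaled by
        # 1 / sqrt(filter_channels) before entering the transform.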
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/catundchat/tts_cn/utils.py b/spaces/catundchat/tts_cn/utils.py deleted file mode 100644 index f193a3e225b368fe7324852994676ad7236c970e..0000000000000000000000000000000000000000 --- a/spaces/catundchat/tts_cn/utils.py +++ /dev/null @@ -1,319 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location="cpu") - iteration = checkpoint_dict["iteration"] - learning_rate = checkpoint_dict["learning_rate"] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict["optimizer"]) - saved_state_dict = checkpoint_dict["model"] - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, "module"): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info( - "Loaded checkpoint '{}' (iteration {})".format(checkpoint_path, iteration) - ) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info( - "Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path - ) - ) - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save( - { - "model": state_dict, - "iteration": iteration, - "optimizer": optimizer.state_dict(), - "learning_rate": learning_rate, - }, - checkpoint_path, - ) - - -def load_model(checkpoint_path, model): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location="cpu") - saved_state_dict = checkpoint_dict["model"] - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, "module"): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - return model - - -def save_model(model, checkpoint_path): - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict}, checkpoint_path) - - -def summarize( - writer, - global_step, - scalars={}, - histograms={}, - 
images={}, - audios={}, - audio_sampling_rate=22050, -): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats="HWC") - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none") - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow( - alignment.transpose(), aspect="auto", origin="lower", interpolation="none" - ) - fig.colorbar(im, ax=ax) - xlabel = "Decoder timestep" - if info is not None: - xlabel += "\n\n" + info - plt.xlabel(xlabel) - plt.ylabel("Encoder timestep") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding="utf-8") as f: - filepaths_and_text = [] - for line in f: - path_text = line.strip().split(split) - filepaths_and_text.append(path_text) - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument( - "-c", - "--config", - type=str, - default="./configs/bert_vits.json", - help="JSON file for configuration", - ) - parser.add_argument("-m", "--model", type=str, required=True, help="Model name") - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = 
json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn( - "{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - ) - ) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn( - "git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8] - ) - ) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams: - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/cccc-c/bingo/README.md b/spaces/cccc-c/bingo/README.md deleted file mode 100644 index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000 --- a/spaces/cccc-c/bingo/README.md +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
- -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -
- -## 演示站点 - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## 功能和特点 - -- 完全基于 Next.js 重写,高度还原 New Bing Web 版 UI,使用体验和 Bing AI 基本一致。 -- 支持 Docker 构建,方便快捷地部署和访问。 -- Cookie 可全局配置,全局共享。 -- 支持持续语音对话 - -## RoadMap - - - [x] 支持 wss 转发 - - [x] 支持一键部署 - - [x] 优化移动端展示 - - [x] 支持画图 - - [x] 支持语音输入(支持语音指令,目前仅支持 PC 版 Edge 及 Chrome 浏览器) - - [x] 支持语音输出(需要手动开启) - - [x] 支持图片输入 - - [x] 支持自定义域名 - - [ ] 支持历史记录 - - [ ] 适配深色模式 - - [ ] 支持内置提示词 - - [ ] 支持离线访问 - - [ ] 国际化翻译 - -## 一键部署 -你也可以一键部署自己的 New Bing AI 到 🤗 HuggingFace 。 - -### 部署到 Huggingface -1. 点击此图标 -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic),配置可以不改。 - -2. 部署署完成后,点击“设置” 》“站点域名”,点一下,复制一下 HF 域名信息,然后分享给别人即可。 - -> Huggingface 不支持绑定自己的域名,不过我们可以使用曲线救国的方式来达到这个目的 -> 1. 方式二,借助 Cloudflare Workers [部署Cloudflare Workers](#使用Cloudflare-Workers自定义域名) -> 2. 方式一,借助 Github Pages 及 iframe [如何绑定域名](https://github.com/weaigc/bingo/issues/4) - -### 使用Cloudflare Workers自定义域名 - -> 核心代码 [worker.js](./cloudflare/worker.js) - -- [注册 Cloudflare 账号](https://dash.cloudflare.com/sign-up) - -- 添加一个新的网站,需要你有自己的域名并且将域名`Name Server`托管给 Cloudflare 才行(更多信息可自行 Google) - -- 通过左侧菜单进入「Workers」,并点击「Create a Worker」。 - -- 创建 Worker 服务,复制 [worker.js](./cloudflare/worker.js) 全部代码,粘贴至创建的服务中,根据注释进行改动,保存并部署。 - -- 触发器 中自定义访问域名。 - -### 部署其它平台 -
- -由于其他平台目前遭到 New Bing 封杀,会遇到很多问题,不再做推荐,有需要的可以自行查看 - - -#### 部署到 Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### 部署到 Vercel -如果你是 Vercel 付费用户,可以点以下链接一键部署到 Vercel。免费版本有[接口超时限制](https://vercel.com/docs/concepts/limits/overview),不推荐使用 - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### 部署到 Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -
- -## 环境和依赖 - -- Node.js >= 18 -- Bing AI 的[身份信息](#如何获取-BING_HEADER)) - -## 安装和使用 - -* 使用 Node 启动 - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # 推荐使用 pnpm i -npm run build -npm run start -``` - -* 使用 Docker 启动 -```bash -docker pull weaigc/bingo -docker run --rm -it -p 7860:7860 weaigc/bingo -# 或者 -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## 如何获取 BING_HEADER -> 配置了 BING_HEADER 意味着你将自己的账号共享给所有使用此服务的人,如果不需要免登录画图的功能,不建议设置此变量 - -打开 https://www.bing.com 并登录,然后访问 https://www.bing.com/turing/captcha/challenge,通过人机校验,然后 - -![BING HEADER](./docs/images/curl.png) - -> 复制出来的内容应该如下所示。确认格式无误后,打开 https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 ,粘贴进去,点击“转成 BING_HEADER 并复制”,然后从剪切板粘贴即可得到。(你也可以先在网页上进行验证) - -以下是格式参考,需要注意的是,网页端保存的格式是以`curl`开头, 而服务端配置的 `BING_HEADER` 是 `base64` 格式,两者不能互通。 -
-正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
- -
-转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdrZENYVy
8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
- - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/certkor/CertKOR.ai/README.md b/spaces/certkor/CertKOR.ai/README.md deleted file mode 100644 index 8afa3dbefd265156467bd97cc431bcc6d6b5c99b..0000000000000000000000000000000000000000 --- a/spaces/certkor/CertKOR.ai/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: CertKOR.ai -emoji: 👀 -colorFrom: gray -colorTo: pink -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false -license: cc-by-nc-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chansung/LLM-As-Chatbot/miscs/__init__.py b/spaces/chansung/LLM-As-Chatbot/miscs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driverc/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driverc/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/table.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/table.py deleted file mode 100644 index b3bc090fb5ac99ffb137b9bb233e8b892bb17718..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/table.py +++ /dev/null @@ -1,469 +0,0 @@ -# encoding: utf-8 - -""" -The |Table| object and related proxy classes. -""" - -from __future__ import absolute_import, print_function, unicode_literals - -from .blkcntnr import BlockItemContainer -from .enum.style import WD_STYLE_TYPE -from .oxml.simpletypes import ST_Merge -from .shared import Inches, lazyproperty, Parented - - -class Table(Parented): - """ - Proxy class for a WordprocessingML ```` element. - """ - def __init__(self, tbl, parent): - super(Table, self).__init__(parent) - self._element = self._tbl = tbl - - def add_column(self, width): - """ - Return a |_Column| object of *width*, newly added rightmost to the - table. - """ - tblGrid = self._tbl.tblGrid - gridCol = tblGrid.add_gridCol() - gridCol.w = width - for tr in self._tbl.tr_lst: - tc = tr.add_tc() - tc.width = width - return _Column(gridCol, self) - - def add_row(self): - """ - Return a |_Row| instance, newly added bottom-most to the table. - """ - tbl = self._tbl - tr = tbl.add_tr() - for gridCol in tbl.tblGrid.gridCol_lst: - tc = tr.add_tc() - tc.width = gridCol.w - return _Row(tr, self) - - @property - def alignment(self): - """ - Read/write. A member of :ref:`WdRowAlignment` or None, specifying the - positioning of this table between the page margins. |None| if no - setting is specified, causing the effective value to be inherited - from the style hierarchy. - """ - return self._tblPr.alignment - - @alignment.setter - def alignment(self, value): - self._tblPr.alignment = value - - @property - def autofit(self): - """ - |True| if column widths can be automatically adjusted to improve the - fit of cell contents. 
|False| if table layout is fixed. Column widths - are adjusted in either case if total column width exceeds page width. - Read/write boolean. - """ - return self._tblPr.autofit - - @autofit.setter - def autofit(self, value): - self._tblPr.autofit = value - - def cell(self, row_idx, col_idx): - """ - Return |_Cell| instance correponding to table cell at *row_idx*, - *col_idx* intersection, where (0, 0) is the top, left-most cell. - """ - cell_idx = col_idx + (row_idx * self._column_count) - return self._cells[cell_idx] - - def column_cells(self, column_idx): - """ - Sequence of cells in the column at *column_idx* in this table. - """ - cells = self._cells - idxs = range(column_idx, len(cells), self._column_count) - return [cells[idx] for idx in idxs] - - @lazyproperty - def columns(self): - """ - |_Columns| instance representing the sequence of columns in this - table. - """ - return _Columns(self._tbl, self) - - def row_cells(self, row_idx): - """ - Sequence of cells in the row at *row_idx* in this table. - """ - column_count = self._column_count - start = row_idx * column_count - end = start + column_count - return self._cells[start:end] - - @lazyproperty - def rows(self): - """ - |_Rows| instance containing the sequence of rows in this table. - """ - return _Rows(self._tbl, self) - - @property - def style(self): - """ - Read/write. A |_TableStyle| object representing the style applied to - this table. The default table style for the document (often `Normal - Table`) is returned if the table has no directly-applied style. - Assigning |None| to this property removes any directly-applied table - style causing it to inherit the default table style of the document. - Note that the style name of a table style differs slightly from that - displayed in the user interface; a hyphen, if it appears, must be - removed. For example, `Light Shading - Accent 1` becomes `Light - Shading Accent 1`. - """ - style_id = self._tbl.tblStyle_val - return self.part.get_style(style_id, WD_STYLE_TYPE.TABLE) - - @style.setter - def style(self, style_or_name): - style_id = self.part.get_style_id( - style_or_name, WD_STYLE_TYPE.TABLE - ) - self._tbl.tblStyle_val = style_id - - @property - def table(self): - """ - Provide child objects with reference to the |Table| object they - belong to, without them having to know their direct parent is - a |Table| object. This is the terminus of a series of `parent._table` - calls from an arbitrary child through its ancestors. - """ - return self - - @property - def table_direction(self): - """ - A member of :ref:`WdTableDirection` indicating the direction in which - the table cells are ordered, e.g. `WD_TABLE_DIRECTION.LTR`. |None| - indicates the value is inherited from the style hierarchy. - """ - return self._element.bidiVisual_val - - @table_direction.setter - def table_direction(self, value): - self._element.bidiVisual_val = value - - @property - def _cells(self): - """ - A sequence of |_Cell| objects, one for each cell of the layout grid. - If the table contains a span, one or more |_Cell| object references - are repeated. - """ - col_count = self._column_count - cells = [] - for tc in self._tbl.iter_tcs(): - for grid_span_idx in range(tc.grid_span): - if tc.vMerge == ST_Merge.CONTINUE: - cells.append(cells[-col_count]) - elif grid_span_idx > 0: - cells.append(cells[-1]) - else: - cells.append(_Cell(tc, self)) - return cells - - @property - def _column_count(self): - """ - The number of grid columns in this table. 
- """ - return self._tbl.col_count - - @property - def _tblPr(self): - return self._tbl.tblPr - - -class _Cell(BlockItemContainer): - """Table cell""" - - def __init__(self, tc, parent): - super(_Cell, self).__init__(tc, parent) - self._tc = self._element = tc - - def add_paragraph(self, text='', style=None): - """ - Return a paragraph newly added to the end of the content in this - cell. If present, *text* is added to the paragraph in a single run. - If specified, the paragraph style *style* is applied. If *style* is - not specified or is |None|, the result is as though the 'Normal' - style was applied. Note that the formatting of text in a cell can be - influenced by the table style. *text* can contain tab (``\\t``) - characters, which are converted to the appropriate XML form for - a tab. *text* can also include newline (``\\n``) or carriage return - (``\\r``) characters, each of which is converted to a line break. - """ - return super(_Cell, self).add_paragraph(text, style) - - def add_table(self, rows, cols): - """ - Return a table newly added to this cell after any existing cell - content, having *rows* rows and *cols* columns. An empty paragraph is - added after the table because Word requires a paragraph element as - the last element in every cell. - """ - width = self.width if self.width is not None else Inches(1) - table = super(_Cell, self).add_table(rows, cols, width) - self.add_paragraph() - return table - - def merge(self, other_cell): - """ - Return a merged cell created by spanning the rectangular region - having this cell and *other_cell* as diagonal corners. Raises - |InvalidSpanError| if the cells do not define a rectangular region. - """ - tc, tc_2 = self._tc, other_cell._tc - merged_tc = tc.merge(tc_2) - return _Cell(merged_tc, self._parent) - - @property - def paragraphs(self): - """ - List of paragraphs in the cell. A table cell is required to contain - at least one block-level element and end with a paragraph. By - default, a new cell contains a single paragraph. Read-only - """ - return super(_Cell, self).paragraphs - - @property - def tables(self): - """ - List of tables in the cell, in the order they appear. Read-only. - """ - return super(_Cell, self).tables - - @property - def text(self): - """ - The entire contents of this cell as a string of text. Assigning - a string to this property replaces all existing content with a single - paragraph containing the assigned text in a single run. - """ - return '\n'.join(p.text for p in self.paragraphs) - - @text.setter - def text(self, text): - """ - Write-only. Set entire contents of cell to the string *text*. Any - existing content or revisions are replaced. - """ - tc = self._tc - tc.clear_content() - p = tc.add_p() - r = p.add_r() - r.text = text - - @property - def vertical_alignment(self): - """Member of :ref:`WdCellVerticalAlignment` or None. - - A value of |None| indicates vertical alignment for this cell is - inherited. Assigning |None| causes any explicitly defined vertical - alignment to be removed, restoring inheritance. - """ - tcPr = self._element.tcPr - if tcPr is None: - return None - return tcPr.vAlign_val - - @vertical_alignment.setter - def vertical_alignment(self, value): - tcPr = self._element.get_or_add_tcPr() - tcPr.vAlign_val = value - - @property - def width(self): - """ - The width of this cell in EMU, or |None| if no explicit width is set. 
- """ - return self._tc.width - - @width.setter - def width(self, value): - self._tc.width = value - - -class _Column(Parented): - """ - Table column - """ - def __init__(self, gridCol, parent): - super(_Column, self).__init__(parent) - self._gridCol = gridCol - - @property - def cells(self): - """ - Sequence of |_Cell| instances corresponding to cells in this column. - """ - return tuple(self.table.column_cells(self._index)) - - @property - def table(self): - """ - Reference to the |Table| object this column belongs to. - """ - return self._parent.table - - @property - def width(self): - """ - The width of this column in EMU, or |None| if no explicit width is - set. - """ - return self._gridCol.w - - @width.setter - def width(self, value): - self._gridCol.w = value - - @property - def _index(self): - """ - Index of this column in its table, starting from zero. - """ - return self._gridCol.gridCol_idx - - -class _Columns(Parented): - """ - Sequence of |_Column| instances corresponding to the columns in a table. - Supports ``len()``, iteration and indexed access. - """ - def __init__(self, tbl, parent): - super(_Columns, self).__init__(parent) - self._tbl = tbl - - def __getitem__(self, idx): - """ - Provide indexed access, e.g. 'columns[0]' - """ - try: - gridCol = self._gridCol_lst[idx] - except IndexError: - msg = "column index [%d] is out of range" % idx - raise IndexError(msg) - return _Column(gridCol, self) - - def __iter__(self): - for gridCol in self._gridCol_lst: - yield _Column(gridCol, self) - - def __len__(self): - return len(self._gridCol_lst) - - @property - def table(self): - """ - Reference to the |Table| object this column collection belongs to. - """ - return self._parent.table - - @property - def _gridCol_lst(self): - """ - Sequence containing ```` elements for this table, each - representing a table column. - """ - tblGrid = self._tbl.tblGrid - return tblGrid.gridCol_lst - - -class _Row(Parented): - """ - Table row - """ - def __init__(self, tr, parent): - super(_Row, self).__init__(parent) - self._tr = self._element = tr - - @property - def cells(self): - """ - Sequence of |_Cell| instances corresponding to cells in this row. - """ - return tuple(self.table.row_cells(self._index)) - - @property - def height(self): - """ - Return a |Length| object representing the height of this cell, or - |None| if no explicit height is set. - """ - return self._tr.trHeight_val - - @height.setter - def height(self, value): - self._tr.trHeight_val = value - - @property - def height_rule(self): - """ - Return the height rule of this cell as a member of the - :ref:`WdRowHeightRule` enumeration, or |None| if no explicit - height_rule is set. - """ - return self._tr.trHeight_hRule - - @height_rule.setter - def height_rule(self, value): - self._tr.trHeight_hRule = value - - @property - def table(self): - """ - Reference to the |Table| object this row belongs to. - """ - return self._parent.table - - @property - def _index(self): - """ - Index of this row in its table, starting from zero. - """ - return self._tr.tr_idx - - -class _Rows(Parented): - """ - Sequence of |_Row| objects corresponding to the rows in a table. - Supports ``len()``, iteration, indexed access, and slicing. - """ - def __init__(self, tbl, parent): - super(_Rows, self).__init__(parent) - self._tbl = tbl - - def __getitem__(self, idx): - """ - Provide indexed access, (e.g. 
'rows[0]') - """ - return list(self)[idx] - - def __iter__(self): - return (_Row(tr, self) for tr in self._tbl.tr_lst) - - def __len__(self): - return len(self._tbl.tr_lst) - - @property - def table(self): - """ - Reference to the |Table| object this row collection belongs to. - """ - return self._parent.table diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-c4debac9.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-c4debac9.js deleted file mode 100644 index b7926228189829a98f3130ed24a319a20bc1f987..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-c4debac9.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as M,e as N,s as O,N as z,k,O as Q,K as S,p as j,o as w,M as E,ao as T,Q as A,z as C,v as B,A as q,x as P,a1 as L,B as V,al as W,P as X,R as Y,F as g,ap as r,E as Z,ae as y,h as D,j as F,q as p,r as x,t as K}from"./index-f877dfd5.js";/* empty css */import{B as $}from"./Button-11a87b79.js";import{B as ee}from"./BlockTitle-8596cf63.js";import"./Info-f92267f9.js";function te(t){let e;return{c(){e=X(t[1])},m(l,s){j(l,e,s)},p(l,s){s&2&&Y(e,l[1])},d(l){l&&q(e)}}}function le(t){let e,l,s,a,_,c,h;return l=new ee({props:{show_label:t[4],info:t[2],$$slots:{default:[te]},$$scope:{ctx:t}}}),{c(){e=z("label"),k(l.$$.fragment),s=Q(),a=z("input"),S(a,"type","color"),a.disabled=t[3],S(a,"class","svelte-56zyyb"),S(e,"class","block")},m(o,f){j(o,e,f),w(l,e,null),E(e,s),E(e,a),T(a,t[0]),_=!0,c||(h=[A(a,"blur",t[6]),A(a,"input",t[7])],c=!0)},p(o,[f]){const m={};f&16&&(m.show_label=o[4]),f&4&&(m.info=o[2]),f&1026&&(m.$$scope={dirty:f,ctx:o}),l.$set(m),(!_||f&8)&&(a.disabled=o[3]),f&1&&T(a,o[0])},i(o){_||(C(l.$$.fragment,o),_=!0)},o(o){B(l.$$.fragment,o),_=!1},d(o){o&&q(e),P(l),c=!1,L(h)}}}function se(t,e,l){let{value:s="#000000"}=e,{value_is_output:a=!1}=e,{label:_}=e,{info:c=void 0}=e,{disabled:h=!1}=e,{show_label:o=!0}=e;const f=V();function m(){f("change",s),a||f("input")}W(()=>{l(5,a=!1)});function d(u){g.call(this,t,u)}function n(){s=this.value,l(0,s)}return t.$$set=u=>{"value"in u&&l(0,s=u.value),"value_is_output"in u&&l(5,a=u.value_is_output),"label"in u&&l(1,_=u.label),"info"in u&&l(2,c=u.info),"disabled"in u&&l(3,h=u.disabled),"show_label"in u&&l(4,o=u.show_label)},t.$$.update=()=>{t.$$.dirty&1&&m()},[s,_,c,h,o,a,d,n]}class ie extends M{constructor(e){super(),N(this,e,se,le,O,{value:0,value_is_output:5,label:1,info:2,disabled:3,show_label:4})}}function ne(t){let e,l,s,a,_,c;const h=[t[11]];let o={};for(let n=0;nF(s,"value",f)),D.push(()=>F(s,"value_is_output",m)),s.$on("change",t[15]),s.$on("input",t[16]),s.$on("submit",t[17]),s.$on("blur",t[18]),{c(){k(e.$$.fragment),l=Q(),k(s.$$.fragment)},m(n,u){w(e,n,u),j(n,l,u),w(s,n,u),c=!0},p(n,u){const v=u&2048?p(h,[x(n[11])]):{};e.$set(v);const b={};u&4&&(b.label=n[2]),u&8&&(b.info=n[3]),u&128&&(b.show_label=n[7]),u&4096&&(b.disabled=n[12]==="static"),!a&&u&1&&(a=!0,b.value=n[0],K(()=>a=!1)),!_&&u&2&&(_=!0,b.value_is_output=n[1],K(()=>_=!1)),s.$set(b)},i(n){c||(C(e.$$.fragment,n),C(s.$$.fragment,n),c=!0)},o(n){B(e.$$.fragment,n),B(s.$$.fragment,n),c=!1},d(n){n&&q(l),P(e,n),P(s,n)}}}function ae(t){let e,l;return e=new $({props:{visible:t[6],elem_id:t[4],elem_classes:t[5],container:t[8],scale:t[9],min_width:t[10],$$slots:{default:[ne]},$$scope:{ctx:t}}}),{c(){k(e.$$.fragment)},m(s,a){w(e,s,a),l=!0},p(s,[a]){const 
_={};a&64&&(_.visible=s[6]),a&16&&(_.elem_id=s[4]),a&32&&(_.elem_classes=s[5]),a&256&&(_.container=s[8]),a&512&&(_.scale=s[9]),a&1024&&(_.min_width=s[10]),a&530575&&(_.$$scope={dirty:a,ctx:s}),e.$set(_)},i(s){l||(C(e.$$.fragment,s),l=!0)},o(s){B(e.$$.fragment,s),l=!1},d(s){P(e,s)}}}function ue(t,e,l){let{label:s="ColorPicker"}=e,{info:a=void 0}=e,{elem_id:_=""}=e,{elem_classes:c=[]}=e,{visible:h=!0}=e,{value:o}=e,{value_is_output:f=!1}=e,{show_label:m}=e,{container:d=!1}=e,{scale:n=null}=e,{min_width:u=void 0}=e,{loading_status:v}=e,{mode:b}=e;function R(i){o=i,l(0,o)}function U(i){f=i,l(1,f)}function G(i){g.call(this,t,i)}function H(i){g.call(this,t,i)}function I(i){g.call(this,t,i)}function J(i){g.call(this,t,i)}return t.$$set=i=>{"label"in i&&l(2,s=i.label),"info"in i&&l(3,a=i.info),"elem_id"in i&&l(4,_=i.elem_id),"elem_classes"in i&&l(5,c=i.elem_classes),"visible"in i&&l(6,h=i.visible),"value"in i&&l(0,o=i.value),"value_is_output"in i&&l(1,f=i.value_is_output),"show_label"in i&&l(7,m=i.show_label),"container"in i&&l(8,d=i.container),"scale"in i&&l(9,n=i.scale),"min_width"in i&&l(10,u=i.min_width),"loading_status"in i&&l(11,v=i.loading_status),"mode"in i&&l(12,b=i.mode)},[o,f,s,a,_,c,h,m,d,n,u,v,b,R,U,G,H,I,J]}class oe extends M{constructor(e){super(),N(this,e,ue,ae,O,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,value_is_output:1,show_label:7,container:8,scale:9,min_width:10,loading_status:11,mode:12})}get label(){return this.$$.ctx[2]}set label(e){this.$$set({label:e}),r()}get info(){return this.$$.ctx[3]}set info(e){this.$$set({info:e}),r()}get elem_id(){return this.$$.ctx[4]}set elem_id(e){this.$$set({elem_id:e}),r()}get elem_classes(){return this.$$.ctx[5]}set elem_classes(e){this.$$set({elem_classes:e}),r()}get visible(){return this.$$.ctx[6]}set visible(e){this.$$set({visible:e}),r()}get value(){return this.$$.ctx[0]}set value(e){this.$$set({value:e}),r()}get value_is_output(){return this.$$.ctx[1]}set value_is_output(e){this.$$set({value_is_output:e}),r()}get show_label(){return this.$$.ctx[7]}set show_label(e){this.$$set({show_label:e}),r()}get container(){return this.$$.ctx[8]}set container(e){this.$$set({container:e}),r()}get scale(){return this.$$.ctx[9]}set scale(e){this.$$set({scale:e}),r()}get min_width(){return this.$$.ctx[10]}set min_width(e){this.$$set({min_width:e}),r()}get loading_status(){return this.$$.ctx[11]}set loading_status(e){this.$$set({loading_status:e}),r()}get mode(){return this.$$.ctx[12]}set mode(e){this.$$set({mode:e}),r()}}const me=oe,be=["static","dynamic"],de=t=>({type:{payload:"string"},description:{payload:"hex color code"},example_data:t.value??"#000000"});export{me as Component,de as document,be as modes}; -//# sourceMappingURL=index-c4debac9.js.map diff --git a/spaces/cihyFjudo/fairness-paper-search/Free Download of Talk Language Allan Pease PDF Learn the Secrets of Effective Communication.md b/spaces/cihyFjudo/fairness-paper-search/Free Download of Talk Language Allan Pease PDF Learn the Secrets of Effective Communication.md deleted file mode 100644 index 0643b2949896a713d6ed8cee9a64fddb2c863b86..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Free Download of Talk Language Allan Pease PDF Learn the Secrets of Effective Communication.md +++ /dev/null @@ -1,6 +0,0 @@ -
-

We went out and researched the web for the best body language experts we could find and put together five of our favourite video lessons. We hear from Vanessa Van Edwards and the Science of People, Allan Pease in his inspirational TED talk on body language and other tips from the Stanford School of Business.

-

Hi there, Kelsey Tonner here from Be a Better Guide. Today we are talking about how we as tour guides can use body language effectively to communicate that we are confident, capable, and that your guests have nothing to worry about with you in charge.

-

talk language allan pease pdf free download


DOWNLOAD >>>>> https://tinurli.com/2uwi88



aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Kvisoft FlipBook Maker Enterprise v4.0 The Ultimate Page Turning Software for PDF Word Excel and More.md b/spaces/cihyFjudo/fairness-paper-search/Kvisoft FlipBook Maker Enterprise v4.0 The Ultimate Page Turning Software for PDF Word Excel and More.md deleted file mode 100644 index 71101c800b567ab75cda4774d6a752038b804fba..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Kvisoft FlipBook Maker Enterprise v4.0 The Ultimate Page Turning Software for PDF Word Excel and More.md +++ /dev/null @@ -1,6 +0,0 @@ -

Kvisoft FlipBook Maker Enterprise v4.0


Download 🌟 https://tinurli.com/2uwiSN



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Tamilnadu Dr Mgr Medical University Digital Library The Future of Digital Scholarship and Innovation in Medicine.md b/spaces/cihyFjudo/fairness-paper-search/Tamilnadu Dr Mgr Medical University Digital Library The Future of Digital Scholarship and Innovation in Medicine.md deleted file mode 100644 index 1ce860ef68ad6649f4ce42bafa6a1971250bdb4a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Tamilnadu Dr Mgr Medical University Digital Library The Future of Digital Scholarship and Innovation in Medicine.md +++ /dev/null @@ -1,6 +0,0 @@ -

Tamilnadu Dr Mgr Medical University Digital Librar radar graffiti alpha


Download Zip ··· https://tinurli.com/2uwiQg



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/converters.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/converters.py deleted file mode 100644 index 4cada106b01c564faf17969d24038f80abd5de6f..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/converters.py +++ /dev/null @@ -1,144 +0,0 @@ -# SPDX-License-Identifier: MIT - -""" -Commonly useful converters. -""" - - -import typing - -from ._compat import _AnnotationExtractor -from ._make import NOTHING, Factory, pipe - - -__all__ = [ - "default_if_none", - "optional", - "pipe", - "to_bool", -] - - -def optional(converter): - """ - A converter that allows an attribute to be optional. An optional attribute - is one which can be set to ``None``. - - Type annotations will be inferred from the wrapped converter's, if it - has any. - - :param callable converter: the converter that is used for non-``None`` - values. - - .. versionadded:: 17.1.0 - """ - - def optional_converter(val): - if val is None: - return None - return converter(val) - - xtr = _AnnotationExtractor(converter) - - t = xtr.get_first_param_type() - if t: - optional_converter.__annotations__["val"] = typing.Optional[t] - - rt = xtr.get_return_type() - if rt: - optional_converter.__annotations__["return"] = typing.Optional[rt] - - return optional_converter - - -def default_if_none(default=NOTHING, factory=None): - """ - A converter that allows to replace ``None`` values by *default* or the - result of *factory*. - - :param default: Value to be used if ``None`` is passed. Passing an instance - of `attrs.Factory` is supported, however the ``takes_self`` option - is *not*. - :param callable factory: A callable that takes no parameters whose result - is used if ``None`` is passed. - - :raises TypeError: If **neither** *default* or *factory* is passed. - :raises TypeError: If **both** *default* and *factory* are passed. - :raises ValueError: If an instance of `attrs.Factory` is passed with - ``takes_self=True``. - - .. versionadded:: 18.2.0 - """ - if default is NOTHING and factory is None: - raise TypeError("Must pass either `default` or `factory`.") - - if default is not NOTHING and factory is not None: - raise TypeError( - "Must pass either `default` or `factory` but not both." - ) - - if factory is not None: - default = Factory(factory) - - if isinstance(default, Factory): - if default.takes_self: - raise ValueError( - "`takes_self` is not supported by default_if_none." - ) - - def default_if_none_converter(val): - if val is not None: - return val - - return default.factory() - - else: - - def default_if_none_converter(val): - if val is not None: - return val - - return default - - return default_if_none_converter - - -def to_bool(val): - """ - Convert "boolean" strings (e.g., from env. vars.) to real booleans. - - Values mapping to :code:`True`: - - - :code:`True` - - :code:`"true"` / :code:`"t"` - - :code:`"yes"` / :code:`"y"` - - :code:`"on"` - - :code:`"1"` - - :code:`1` - - Values mapping to :code:`False`: - - - :code:`False` - - :code:`"false"` / :code:`"f"` - - :code:`"no"` / :code:`"n"` - - :code:`"off"` - - :code:`"0"` - - :code:`0` - - :raises ValueError: for any other value. - - .. 
versionadded:: 21.3.0 - """ - if isinstance(val, str): - val = val.lower() - truthy = {True, "true", "t", "yes", "y", "on", "1", 1} - falsy = {False, "false", "f", "no", "n", "off", "0", 0} - try: - if val in truthy: - return True - if val in falsy: - return False - except TypeError: - # Raised when "val" is not hashable (e.g., lists) - pass - raise ValueError(f"Cannot convert value to bool: {val}") diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_av1.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_av1.c deleted file mode 100644 index 228f72ba18e112fa2fe9b8cd7813366be96b02ea..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_av1.c +++ /dev/null @@ -1,508 +0,0 @@ -/* - * DXVA2 AV1 HW acceleration. - * - * copyright (c) 2020 Hendrik Leppkes - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config_components.h" - -#include "libavutil/avassert.h" -#include "libavutil/pixdesc.h" - -#include "dxva2_internal.h" -#include "av1dec.h" - -#define MAX_TILES 256 - -struct AV1DXVAContext { - FFDXVASharedContext shared; - - unsigned int bitstream_allocated; - uint8_t *bitstream_cache; -}; - -struct av1_dxva2_picture_context { - DXVA_PicParams_AV1 pp; - unsigned tile_count; - DXVA_Tile_AV1 tiles[MAX_TILES]; - uint8_t *bitstream; - unsigned bitstream_size; -}; - -static int get_bit_depth_from_seq(const AV1RawSequenceHeader *seq) -{ - if (seq->seq_profile == 2 && seq->color_config.high_bitdepth) - return seq->color_config.twelve_bit ? 12 : 10; - else if (seq->seq_profile <= 2 && seq->color_config.high_bitdepth) - return 10; - else - return 8; -} - -static int fill_picture_parameters(const AVCodecContext *avctx, AVDXVAContext *ctx, const AV1DecContext *h, - DXVA_PicParams_AV1 *pp) -{ - int i,j, uses_lr; - const AV1RawSequenceHeader *seq = h->raw_seq; - const AV1RawFrameHeader *frame_header = h->raw_frame_header; - const AV1RawFilmGrainParams *film_grain = &h->cur_frame.film_grain; - - unsigned char remap_lr_type[4] = { AV1_RESTORE_NONE, AV1_RESTORE_SWITCHABLE, AV1_RESTORE_WIENER, AV1_RESTORE_SGRPROJ }; - int apply_grain = !(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN) && film_grain->apply_grain; - - memset(pp, 0, sizeof(*pp)); - - pp->width = avctx->width; - pp->height = avctx->height; - - pp->max_width = seq->max_frame_width_minus_1 + 1; - pp->max_height = seq->max_frame_height_minus_1 + 1; - - pp->CurrPicTextureIndex = ff_dxva2_get_surface_index(avctx, ctx, h->cur_frame.f); - pp->superres_denom = frame_header->use_superres ? 
frame_header->coded_denom + AV1_SUPERRES_DENOM_MIN : AV1_SUPERRES_NUM; - pp->bitdepth = get_bit_depth_from_seq(seq); - pp->seq_profile = seq->seq_profile; - - /* Tiling info */ - pp->tiles.cols = frame_header->tile_cols; - pp->tiles.rows = frame_header->tile_rows; - pp->tiles.context_update_id = frame_header->context_update_tile_id; - - for (i = 0; i < pp->tiles.cols; i++) - pp->tiles.widths[i] = frame_header->width_in_sbs_minus_1[i] + 1; - - for (i = 0; i < pp->tiles.rows; i++) - pp->tiles.heights[i] = frame_header->height_in_sbs_minus_1[i] + 1; - - /* Coding tools */ - pp->coding.use_128x128_superblock = seq->use_128x128_superblock; - pp->coding.intra_edge_filter = seq->enable_intra_edge_filter; - pp->coding.interintra_compound = seq->enable_interintra_compound; - pp->coding.masked_compound = seq->enable_masked_compound; - pp->coding.warped_motion = frame_header->allow_warped_motion; - pp->coding.dual_filter = seq->enable_dual_filter; - pp->coding.jnt_comp = seq->enable_jnt_comp; - pp->coding.screen_content_tools = frame_header->allow_screen_content_tools; - pp->coding.integer_mv = frame_header->force_integer_mv || !(frame_header->frame_type & 1); - pp->coding.cdef = seq->enable_cdef; - pp->coding.restoration = seq->enable_restoration; - pp->coding.film_grain = seq->film_grain_params_present && !(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN); - pp->coding.intrabc = frame_header->allow_intrabc; - pp->coding.high_precision_mv = frame_header->allow_high_precision_mv; - pp->coding.switchable_motion_mode = frame_header->is_motion_mode_switchable; - pp->coding.filter_intra = seq->enable_filter_intra; - pp->coding.disable_frame_end_update_cdf = frame_header->disable_frame_end_update_cdf; - pp->coding.disable_cdf_update = frame_header->disable_cdf_update; - pp->coding.reference_mode = frame_header->reference_select; - pp->coding.skip_mode = frame_header->skip_mode_present; - pp->coding.reduced_tx_set = frame_header->reduced_tx_set; - pp->coding.superres = frame_header->use_superres; - pp->coding.tx_mode = frame_header->tx_mode; - pp->coding.use_ref_frame_mvs = frame_header->use_ref_frame_mvs; - pp->coding.enable_ref_frame_mvs = seq->enable_ref_frame_mvs; - pp->coding.reference_frame_update = 1; // 0 for show_existing_frame with key frames, but those are not passed to the hwaccel - - /* Format & Picture Info flags */ - pp->format.frame_type = frame_header->frame_type; - pp->format.show_frame = frame_header->show_frame; - pp->format.showable_frame = frame_header->showable_frame; - pp->format.subsampling_x = seq->color_config.subsampling_x; - pp->format.subsampling_y = seq->color_config.subsampling_y; - pp->format.mono_chrome = seq->color_config.mono_chrome; - - /* References */ - pp->primary_ref_frame = frame_header->primary_ref_frame; - pp->order_hint = frame_header->order_hint; - pp->order_hint_bits = seq->enable_order_hint ? seq->order_hint_bits_minus_1 + 1 : 0; - - memset(pp->RefFrameMapTextureIndex, 0xFF, sizeof(pp->RefFrameMapTextureIndex)); - for (i = 0; i < AV1_REFS_PER_FRAME; i++) { - int8_t ref_idx = frame_header->ref_frame_idx[i]; - AVFrame *ref_frame = h->ref[ref_idx].f; - - pp->frame_refs[i].width = ref_frame->width; - pp->frame_refs[i].height = ref_frame->height; - pp->frame_refs[i].Index = ref_frame->buf[0] ? 
ref_idx : 0xFF; - - /* Global Motion */ - pp->frame_refs[i].wminvalid = h->cur_frame.gm_invalid[AV1_REF_FRAME_LAST + i]; - pp->frame_refs[i].wmtype = h->cur_frame.gm_type[AV1_REF_FRAME_LAST + i]; - for (j = 0; j < 6; ++j) { - pp->frame_refs[i].wmmat[j] = h->cur_frame.gm_params[AV1_REF_FRAME_LAST + i][j]; - } - } - for (i = 0; i < AV1_NUM_REF_FRAMES; i++) { - AVFrame *ref_frame = h->ref[i].f; - if (ref_frame->buf[0]) - pp->RefFrameMapTextureIndex[i] = ff_dxva2_get_surface_index(avctx, ctx, ref_frame); - } - - /* Loop filter parameters */ - pp->loop_filter.filter_level[0] = frame_header->loop_filter_level[0]; - pp->loop_filter.filter_level[1] = frame_header->loop_filter_level[1]; - pp->loop_filter.filter_level_u = frame_header->loop_filter_level[2]; - pp->loop_filter.filter_level_v = frame_header->loop_filter_level[3]; - pp->loop_filter.sharpness_level = frame_header->loop_filter_sharpness; - pp->loop_filter.mode_ref_delta_enabled = frame_header->loop_filter_delta_enabled; - pp->loop_filter.mode_ref_delta_update = frame_header->loop_filter_delta_update; - pp->loop_filter.delta_lf_multi = frame_header->delta_lf_multi; - pp->loop_filter.delta_lf_present = frame_header->delta_lf_present; - pp->loop_filter.delta_lf_res = frame_header->delta_lf_res; - - for (i = 0; i < AV1_TOTAL_REFS_PER_FRAME; i++) { - pp->loop_filter.ref_deltas[i] = frame_header->loop_filter_ref_deltas[i]; - } - - pp->loop_filter.mode_deltas[0] = frame_header->loop_filter_mode_deltas[0]; - pp->loop_filter.mode_deltas[1] = frame_header->loop_filter_mode_deltas[1]; - pp->loop_filter.frame_restoration_type[0] = remap_lr_type[frame_header->lr_type[0]]; - pp->loop_filter.frame_restoration_type[1] = remap_lr_type[frame_header->lr_type[1]]; - pp->loop_filter.frame_restoration_type[2] = remap_lr_type[frame_header->lr_type[2]]; - uses_lr = frame_header->lr_type[0] || frame_header->lr_type[1] || frame_header->lr_type[2]; - pp->loop_filter.log2_restoration_unit_size[0] = uses_lr ? (6 + frame_header->lr_unit_shift) : 8; - pp->loop_filter.log2_restoration_unit_size[1] = uses_lr ? (6 + frame_header->lr_unit_shift - frame_header->lr_uv_shift) : 8; - pp->loop_filter.log2_restoration_unit_size[2] = uses_lr ? (6 + frame_header->lr_unit_shift - frame_header->lr_uv_shift) : 8; - - /* Quantization */ - pp->quantization.delta_q_present = frame_header->delta_q_present; - pp->quantization.delta_q_res = frame_header->delta_q_res; - pp->quantization.base_qindex = frame_header->base_q_idx; - pp->quantization.y_dc_delta_q = frame_header->delta_q_y_dc; - pp->quantization.u_dc_delta_q = frame_header->delta_q_u_dc; - pp->quantization.v_dc_delta_q = frame_header->delta_q_v_dc; - pp->quantization.u_ac_delta_q = frame_header->delta_q_u_ac; - pp->quantization.v_ac_delta_q = frame_header->delta_q_v_ac; - pp->quantization.qm_y = frame_header->using_qmatrix ? frame_header->qm_y : 0xFF; - pp->quantization.qm_u = frame_header->using_qmatrix ? frame_header->qm_u : 0xFF; - pp->quantization.qm_v = frame_header->using_qmatrix ? 
frame_header->qm_v : 0xFF; - - /* Cdef parameters */ - pp->cdef.damping = frame_header->cdef_damping_minus_3; - pp->cdef.bits = frame_header->cdef_bits; - for (i = 0; i < 8; i++) { - pp->cdef.y_strengths[i].primary = frame_header->cdef_y_pri_strength[i]; - pp->cdef.y_strengths[i].secondary = frame_header->cdef_y_sec_strength[i]; - pp->cdef.uv_strengths[i].primary = frame_header->cdef_uv_pri_strength[i]; - pp->cdef.uv_strengths[i].secondary = frame_header->cdef_uv_sec_strength[i]; - } - - /* Misc flags */ - pp->interp_filter = frame_header->interpolation_filter; - - /* Segmentation */ - pp->segmentation.enabled = frame_header->segmentation_enabled; - pp->segmentation.update_map = frame_header->segmentation_update_map; - pp->segmentation.update_data = frame_header->segmentation_update_data; - pp->segmentation.temporal_update = frame_header->segmentation_temporal_update; - for (i = 0; i < AV1_MAX_SEGMENTS; i++) { - for (j = 0; j < AV1_SEG_LVL_MAX; j++) { - pp->segmentation.feature_mask[i].mask |= frame_header->feature_enabled[i][j] << j; - pp->segmentation.feature_data[i][j] = frame_header->feature_value[i][j]; - } - } - - /* Film grain */ - if (apply_grain) { - pp->film_grain.apply_grain = 1; - pp->film_grain.scaling_shift_minus8 = film_grain->grain_scaling_minus_8; - pp->film_grain.chroma_scaling_from_luma = film_grain->chroma_scaling_from_luma; - pp->film_grain.ar_coeff_lag = film_grain->ar_coeff_lag; - pp->film_grain.ar_coeff_shift_minus6 = film_grain->ar_coeff_shift_minus_6; - pp->film_grain.grain_scale_shift = film_grain->grain_scale_shift; - pp->film_grain.overlap_flag = film_grain->overlap_flag; - pp->film_grain.clip_to_restricted_range = film_grain->clip_to_restricted_range; - pp->film_grain.matrix_coeff_is_identity = (seq->color_config.matrix_coefficients == AVCOL_SPC_RGB); - - pp->film_grain.grain_seed = film_grain->grain_seed; - pp->film_grain.num_y_points = film_grain->num_y_points; - for (i = 0; i < film_grain->num_y_points; i++) { - pp->film_grain.scaling_points_y[i][0] = film_grain->point_y_value[i]; - pp->film_grain.scaling_points_y[i][1] = film_grain->point_y_scaling[i]; - } - pp->film_grain.num_cb_points = film_grain->num_cb_points; - for (i = 0; i < film_grain->num_cb_points; i++) { - pp->film_grain.scaling_points_cb[i][0] = film_grain->point_cb_value[i]; - pp->film_grain.scaling_points_cb[i][1] = film_grain->point_cb_scaling[i]; - } - pp->film_grain.num_cr_points = film_grain->num_cr_points; - for (i = 0; i < film_grain->num_cr_points; i++) { - pp->film_grain.scaling_points_cr[i][0] = film_grain->point_cr_value[i]; - pp->film_grain.scaling_points_cr[i][1] = film_grain->point_cr_scaling[i]; - } - for (i = 0; i < 24; i++) { - pp->film_grain.ar_coeffs_y[i] = film_grain->ar_coeffs_y_plus_128[i]; - } - for (i = 0; i < 25; i++) { - pp->film_grain.ar_coeffs_cb[i] = film_grain->ar_coeffs_cb_plus_128[i]; - pp->film_grain.ar_coeffs_cr[i] = film_grain->ar_coeffs_cr_plus_128[i]; - } - pp->film_grain.cb_mult = film_grain->cb_mult; - pp->film_grain.cb_luma_mult = film_grain->cb_luma_mult; - pp->film_grain.cr_mult = film_grain->cr_mult; - pp->film_grain.cr_luma_mult = film_grain->cr_luma_mult; - pp->film_grain.cb_offset = film_grain->cb_offset; - pp->film_grain.cr_offset = film_grain->cr_offset; - pp->film_grain.cr_offset = film_grain->cr_offset; - } - - // XXX: Setting the StatusReportFeedbackNumber breaks decoding on some drivers (tested on NVIDIA 457.09) - // Status Reporting is not used by FFmpeg, hence not providing a number does not cause any issues - 
//pp->StatusReportFeedbackNumber = 1 + DXVA_CONTEXT_REPORT_ID(avctx, ctx)++; - return 0; -} - -static int dxva2_av1_start_frame(AVCodecContext *avctx, - av_unused const uint8_t *buffer, - av_unused uint32_t size) -{ - const AV1DecContext *h = avctx->priv_data; - AVDXVAContext *ctx = DXVA_CONTEXT(avctx); - struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private; - - if (!DXVA_CONTEXT_VALID(avctx, ctx)) - return -1; - av_assert0(ctx_pic); - - /* Fill up DXVA_PicParams_AV1 */ - if (fill_picture_parameters(avctx, ctx, h, &ctx_pic->pp) < 0) - return -1; - - ctx_pic->bitstream_size = 0; - ctx_pic->bitstream = NULL; - return 0; -} - -static int dxva2_av1_decode_slice(AVCodecContext *avctx, - const uint8_t *buffer, - uint32_t size) -{ - const AV1DecContext *h = avctx->priv_data; - const AV1RawFrameHeader *frame_header = h->raw_frame_header; - struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private; - struct AV1DXVAContext *ctx = avctx->internal->hwaccel_priv_data; - void *tmp; - - ctx_pic->tile_count = frame_header->tile_cols * frame_header->tile_rows; - - /* too many tiles, exceeding all defined levels in the AV1 spec */ - if (ctx_pic->tile_count > MAX_TILES) - return AVERROR(ENOSYS); - - /* Shortcut if all tiles are in the same buffer */ - if (ctx_pic->tile_count == h->tg_end - h->tg_start + 1) { - ctx_pic->bitstream = (uint8_t *)buffer; - ctx_pic->bitstream_size = size; - - for (uint32_t tile_num = 0; tile_num < ctx_pic->tile_count; tile_num++) { - ctx_pic->tiles[tile_num].DataOffset = h->tile_group_info[tile_num].tile_offset; - ctx_pic->tiles[tile_num].DataSize = h->tile_group_info[tile_num].tile_size; - ctx_pic->tiles[tile_num].row = h->tile_group_info[tile_num].tile_row; - ctx_pic->tiles[tile_num].column = h->tile_group_info[tile_num].tile_column; - ctx_pic->tiles[tile_num].anchor_frame = 0xFF; - } - - return 0; - } - - /* allocate an internal buffer */ - tmp = av_fast_realloc(ctx->bitstream_cache, &ctx->bitstream_allocated, - ctx_pic->bitstream_size + size); - if (!tmp) { - return AVERROR(ENOMEM); - } - ctx_pic->bitstream = ctx->bitstream_cache = tmp; - - memcpy(ctx_pic->bitstream + ctx_pic->bitstream_size, buffer, size); - - for (uint32_t tile_num = h->tg_start; tile_num <= h->tg_end; tile_num++) { - ctx_pic->tiles[tile_num].DataOffset = ctx_pic->bitstream_size + h->tile_group_info[tile_num].tile_offset; - ctx_pic->tiles[tile_num].DataSize = h->tile_group_info[tile_num].tile_size; - ctx_pic->tiles[tile_num].row = h->tile_group_info[tile_num].tile_row; - ctx_pic->tiles[tile_num].column = h->tile_group_info[tile_num].tile_column; - ctx_pic->tiles[tile_num].anchor_frame = 0xFF; - } - - ctx_pic->bitstream_size += size; - - return 0; -} - -static int commit_bitstream_and_slice_buffer(AVCodecContext *avctx, - DECODER_BUFFER_DESC *bs, - DECODER_BUFFER_DESC *sc) -{ - const AV1DecContext *h = avctx->priv_data; - AVDXVAContext *ctx = DXVA_CONTEXT(avctx); - struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private; - void *dxva_data_ptr; - uint8_t *dxva_data; - unsigned dxva_size; - unsigned padding; - unsigned type; - -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) { - type = D3D11_VIDEO_DECODER_BUFFER_BITSTREAM; - if (FAILED(ID3D11VideoContext_GetDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context, - D3D11VA_CONTEXT(ctx)->decoder, - type, - &dxva_size, &dxva_data_ptr))) - return -1; - } -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - type = DXVA2_BitStreamDateBufferType; - if 
(FAILED(IDirectXVideoDecoder_GetBuffer(DXVA2_CONTEXT(ctx)->decoder, - type, - &dxva_data_ptr, &dxva_size))) - return -1; - } -#endif - - dxva_data = dxva_data_ptr; - - if (ctx_pic->bitstream_size > dxva_size) { - av_log(avctx, AV_LOG_ERROR, "Bitstream size exceeds hardware buffer"); - return -1; - } - - memcpy(dxva_data, ctx_pic->bitstream, ctx_pic->bitstream_size); - - padding = FFMIN(128 - ((ctx_pic->bitstream_size) & 127), dxva_size - ctx_pic->bitstream_size); - if (padding > 0) { - memset(dxva_data + ctx_pic->bitstream_size, 0, padding); - ctx_pic->bitstream_size += padding; - } - -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) - if (FAILED(ID3D11VideoContext_ReleaseDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context, D3D11VA_CONTEXT(ctx)->decoder, type))) - return -1; -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) - if (FAILED(IDirectXVideoDecoder_ReleaseBuffer(DXVA2_CONTEXT(ctx)->decoder, type))) - return -1; -#endif - -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) { - D3D11_VIDEO_DECODER_BUFFER_DESC *dsc11 = bs; - memset(dsc11, 0, sizeof(*dsc11)); - dsc11->BufferType = type; - dsc11->DataSize = ctx_pic->bitstream_size; - dsc11->NumMBsInBuffer = 0; - - type = D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL; - } -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - DXVA2_DecodeBufferDesc *dsc2 = bs; - memset(dsc2, 0, sizeof(*dsc2)); - dsc2->CompressedBufferType = type; - dsc2->DataSize = ctx_pic->bitstream_size; - dsc2->NumMBsInBuffer = 0; - - type = DXVA2_SliceControlBufferType; - } -#endif - - return ff_dxva2_commit_buffer(avctx, ctx, sc, type, - ctx_pic->tiles, sizeof(*ctx_pic->tiles) * ctx_pic->tile_count, 0); -} - -static int dxva2_av1_end_frame(AVCodecContext *avctx) -{ - const AV1DecContext *h = avctx->priv_data; - struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private; - int ret; - - if (ctx_pic->bitstream_size <= 0) - return -1; - - ret = ff_dxva2_common_end_frame(avctx, h->cur_frame.f, - &ctx_pic->pp, sizeof(ctx_pic->pp), - NULL, 0, - commit_bitstream_and_slice_buffer); - - return ret; -} - -static int dxva2_av1_uninit(AVCodecContext *avctx) -{ - struct AV1DXVAContext *ctx = avctx->internal->hwaccel_priv_data; - - av_freep(&ctx->bitstream_cache); - ctx->bitstream_allocated = 0; - - return ff_dxva2_decode_uninit(avctx); -} - -#if CONFIG_AV1_DXVA2_HWACCEL -const AVHWAccel ff_av1_dxva2_hwaccel = { - .name = "av1_dxva2", - .type = AVMEDIA_TYPE_VIDEO, - .id = AV_CODEC_ID_AV1, - .pix_fmt = AV_PIX_FMT_DXVA2_VLD, - .init = ff_dxva2_decode_init, - .uninit = dxva2_av1_uninit, - .start_frame = dxva2_av1_start_frame, - .decode_slice = dxva2_av1_decode_slice, - .end_frame = dxva2_av1_end_frame, - .frame_params = ff_dxva2_common_frame_params, - .frame_priv_data_size = sizeof(struct av1_dxva2_picture_context), - .priv_data_size = sizeof(struct AV1DXVAContext), -}; -#endif - -#if CONFIG_AV1_D3D11VA_HWACCEL -const AVHWAccel ff_av1_d3d11va_hwaccel = { - .name = "av1_d3d11va", - .type = AVMEDIA_TYPE_VIDEO, - .id = AV_CODEC_ID_AV1, - .pix_fmt = AV_PIX_FMT_D3D11VA_VLD, - .init = ff_dxva2_decode_init, - .uninit = dxva2_av1_uninit, - .start_frame = dxva2_av1_start_frame, - .decode_slice = dxva2_av1_decode_slice, - .end_frame = dxva2_av1_end_frame, - .frame_params = ff_dxva2_common_frame_params, - .frame_priv_data_size = sizeof(struct av1_dxva2_picture_context), - .priv_data_size = sizeof(struct AV1DXVAContext), -}; -#endif - -#if CONFIG_AV1_D3D11VA2_HWACCEL -const AVHWAccel ff_av1_d3d11va2_hwaccel = { - .name = 
"av1_d3d11va2", - .type = AVMEDIA_TYPE_VIDEO, - .id = AV_CODEC_ID_AV1, - .pix_fmt = AV_PIX_FMT_D3D11, - .init = ff_dxva2_decode_init, - .uninit = dxva2_av1_uninit, - .start_frame = dxva2_av1_start_frame, - .decode_slice = dxva2_av1_decode_slice, - .end_frame = dxva2_av1_end_frame, - .frame_params = ff_dxva2_common_frame_params, - .frame_priv_data_size = sizeof(struct av1_dxva2_picture_context), - .priv_data_size = sizeof(struct AV1DXVAContext), -}; -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ftr.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ftr.c deleted file mode 100644 index 74a2c10b5c89ae6a9f4b15902ad7c747e6badbf2..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ftr.c +++ /dev/null @@ -1,208 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "adts_header.h" -#include "avcodec.h" -#include "codec_internal.h" -#include "get_bits.h" -#include "decode.h" - -typedef struct FTRContext { - AVCodecContext *aac_avctx[64]; // wrapper context for AAC - int nb_context; - AVPacket *packet; - AVFrame *frame; -} FTRContext; - -static av_cold int ftr_init(AVCodecContext *avctx) -{ - FTRContext *s = avctx->priv_data; - const AVCodec *codec; - int ret; - - if (avctx->ch_layout.nb_channels > 64 || - avctx->ch_layout.nb_channels <= 0) - return AVERROR(EINVAL); - - s->packet = av_packet_alloc(); - if (!s->packet) - return AVERROR(ENOMEM); - - s->frame = av_frame_alloc(); - if (!s->frame) - return AVERROR(ENOMEM); - - s->nb_context = avctx->ch_layout.nb_channels; - - codec = avcodec_find_decoder(AV_CODEC_ID_AAC); - if (!codec) - return AVERROR_BUG; - - for (int i = 0; i < s->nb_context; i++) { - s->aac_avctx[i] = avcodec_alloc_context3(codec); - if (!s->aac_avctx[i]) - return AVERROR(ENOMEM); - ret = avcodec_open2(s->aac_avctx[i], codec, NULL); - if (ret < 0) - return ret; - } - - avctx->sample_fmt = s->aac_avctx[0]->sample_fmt; - if (!av_sample_fmt_is_planar(avctx->sample_fmt)) - return AVERROR(EINVAL); - - return 0; -} - -static int ftr_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame, AVPacket *avpkt) -{ - FTRContext *s = avctx->priv_data; - GetBitContext gb; - int ret, ch_offset = 0; - - ret = init_get_bits8(&gb, avpkt->data, avpkt->size); - if (ret < 0) - return ret; - - frame->nb_samples = 0; - - for (int i = 0; i < s->nb_context; i++) { - AVCodecContext *codec_avctx = s->aac_avctx[i]; - GetBitContext gb2 = gb; - AACADTSHeaderInfo hdr_info; - int size; - - if (get_bits_left(&gb) < 64) - return AVERROR_INVALIDDATA; - - memset(&hdr_info, 0, sizeof(hdr_info)); - - size = ff_adts_header_parse(&gb2, &hdr_info); - if (size <= 0 || size * 8 > get_bits_left(&gb)) - return AVERROR_INVALIDDATA; - - if (size > s->packet->size) { - ret = 
av_grow_packet(s->packet, size - s->packet->size); - if (ret < 0) - return ret; - } - - ret = av_packet_make_writable(s->packet); - if (ret < 0) - return ret; - - memcpy(s->packet->data, avpkt->data + (get_bits_count(&gb) >> 3), size); - s->packet->size = size; - - if (size > 12) { - uint8_t *buf = s->packet->data; - - if (buf[3] & 0x20) { - int tmp = buf[8]; - buf[ 9] = ~buf[9]; - buf[11] = ~buf[11]; - buf[12] = ~buf[12]; - buf[ 8] = ~buf[10]; - buf[10] = ~tmp; - } - } - - ret = avcodec_send_packet(codec_avctx, s->packet); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Error submitting a packet for decoding\n"); - return ret; - } - - ret = avcodec_receive_frame(codec_avctx, s->frame); - if (ret < 0) - return ret; - - if (!avctx->sample_rate) { - avctx->sample_rate = codec_avctx->sample_rate; - } else { - if (avctx->sample_rate != codec_avctx->sample_rate) - return AVERROR_INVALIDDATA; - } - - if (!frame->nb_samples) { - frame->nb_samples = s->frame->nb_samples; - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - } else { - if (frame->nb_samples != s->frame->nb_samples) - return AVERROR_INVALIDDATA; - } - - skip_bits_long(&gb, size * 8); - - if (ch_offset + s->frame->ch_layout.nb_channels > avctx->ch_layout.nb_channels) - return AVERROR_INVALIDDATA; - - if (avctx->sample_fmt != codec_avctx->sample_fmt) - return AVERROR_INVALIDDATA; - - for (int ch = 0; ch < s->frame->ch_layout.nb_channels; ch++) - memcpy(frame->extended_data[ch_offset + ch], - s->frame->extended_data[ch], - av_get_bytes_per_sample(codec_avctx->sample_fmt) * s->frame->nb_samples); - - ch_offset += s->frame->ch_layout.nb_channels; - - if (ch_offset >= avctx->ch_layout.nb_channels) - break; - } - - *got_frame = 1; - - return get_bits_count(&gb) >> 3; -} - -static void ftr_flush(AVCodecContext *avctx) -{ - FTRContext *s = avctx->priv_data; - - for (int i = 0; i < s->nb_context; i++) - avcodec_flush_buffers(s->aac_avctx[i]); -} - -static av_cold int ftr_close(AVCodecContext *avctx) -{ - FTRContext *s = avctx->priv_data; - - for (int i = 0; i < s->nb_context; i++) - avcodec_free_context(&s->aac_avctx[i]); - av_packet_free(&s->packet); - av_frame_free(&s->frame); - - return 0; -} - -const FFCodec ff_ftr_decoder = { - .p.name = "ftr", - .p.long_name = NULL_IF_CONFIG_SMALL("FTR Voice"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_FTR, - .init = ftr_init, - FF_CODEC_DECODE_CB(ftr_decode_frame), - .close = ftr_close, - .flush = ftr_flush, - .priv_data_size = sizeof(FTRContext), - .p.capabilities = AV_CODEC_CAP_SUBFRAMES | AV_CODEC_CAP_DR1, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hpel_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hpel_template.c deleted file mode 100644 index fccfe7610fe581c1b7b5f5d9d6e90705988fecad..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hpel_template.c +++ /dev/null @@ -1,106 +0,0 @@ -/* - * Copyright (c) 2000, 2001 Fabrice Bellard - * Copyright (c) 2002-2004 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. 
- * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/intreadwrite.h" - -#include "pixels.h" - -#include "bit_depth_template.c" - -#define DEF_HPEL(OPNAME, OP) \ -static inline void FUNC(OPNAME ## _pixels8_l2)(uint8_t *dst, \ - const uint8_t *src1, \ - const uint8_t *src2, \ - int dst_stride, \ - int src_stride1, \ - int src_stride2, \ - int h) \ -{ \ - int i; \ - for (i = 0; i < h; i++) { \ - pixel4 a, b; \ - a = AV_RN4P(&src1[i * src_stride1]); \ - b = AV_RN4P(&src2[i * src_stride2]); \ - OP(*((pixel4 *) &dst[i * dst_stride]), rnd_avg_pixel4(a, b)); \ - a = AV_RN4P(&src1[i * src_stride1 + 4 * sizeof(pixel)]); \ - b = AV_RN4P(&src2[i * src_stride2 + 4 * sizeof(pixel)]); \ - OP(*((pixel4 *) &dst[i * dst_stride + 4 * sizeof(pixel)]), \ - rnd_avg_pixel4(a, b)); \ - } \ -} \ - \ -static inline void FUNC(OPNAME ## _pixels4_l2)(uint8_t *dst, \ - const uint8_t *src1, \ - const uint8_t *src2, \ - int dst_stride, \ - int src_stride1, \ - int src_stride2, \ - int h) \ -{ \ - int i; \ - for (i = 0; i < h; i++) { \ - pixel4 a, b; \ - a = AV_RN4P(&src1[i * src_stride1]); \ - b = AV_RN4P(&src2[i * src_stride2]); \ - OP(*((pixel4 *) &dst[i * dst_stride]), rnd_avg_pixel4(a, b)); \ - } \ -} \ - \ -static inline void FUNC(OPNAME ## _pixels2_l2)(uint8_t *dst, \ - const uint8_t *src1, \ - const uint8_t *src2, \ - int dst_stride, \ - int src_stride1, \ - int src_stride2, \ - int h) \ -{ \ - int i; \ - for (i = 0; i < h; i++) { \ - pixel4 a, b; \ - a = AV_RN2P(&src1[i * src_stride1]); \ - b = AV_RN2P(&src2[i * src_stride2]); \ - OP(*((pixel2 *) &dst[i * dst_stride]), rnd_avg_pixel4(a, b)); \ - } \ -} \ - \ -static inline void FUNC(OPNAME ## _pixels16_l2)(uint8_t *dst, \ - const uint8_t *src1, \ - const uint8_t *src2, \ - int dst_stride, \ - int src_stride1, \ - int src_stride2, \ - int h) \ -{ \ - FUNC(OPNAME ## _pixels8_l2)(dst, src1, src2, dst_stride, \ - src_stride1, src_stride2, h); \ - FUNC(OPNAME ## _pixels8_l2)(dst + 8 * sizeof(pixel), \ - src1 + 8 * sizeof(pixel), \ - src2 + 8 * sizeof(pixel), \ - dst_stride, src_stride1, \ - src_stride2, h); \ -} \ - -#define op_avg(a, b) a = rnd_avg_pixel4(a, b) -#define op_put(a, b) a = b -DEF_HPEL(avg, op_avg) -DEF_HPEL(put, op_put) -#undef op_avg -#undef op_put diff --git a/spaces/congsaPfin/Manga-OCR/logs/Car Game APK How to Download and Play the Most Popular Racing Games.md b/spaces/congsaPfin/Manga-OCR/logs/Car Game APK How to Download and Play the Most Popular Racing Games.md deleted file mode 100644 index 327f4043dc33835427604f29e7c8f411f9459d16..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Car Game APK How to Download and Play the Most Popular Racing Games.md +++ /dev/null @@ -1,117 +0,0 @@ - -

How to Download and Install Car Games APK on Android

-

If you are a fan of car games, you might want to try some of the best car games for Android that are not available on the Google Play Store. These games can be downloaded and installed using APK files, which are the packages for Android apps. In this article, we will show you what are APK files, how to find and download them, and how to install them on your Android device.

-

car game apk download


Download » https://urlca.com/2uOdbL



-

What are Car Games APK Files?

-

APK stands for Android Package Kit, and it is the file format used by Android to distribute and install apps. APK files contain all the necessary components for an app to run on your device, such as the code, resources, assets, certificates, and manifest.
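Because an APK is just a ZIP archive under the hood, you can peek inside one to see these components for yourself. The short Python sketch below is only an illustration and is not part of any app or site mentioned here; the file name is a placeholder for an APK you have already downloaded.

```python
import zipfile

APK_PATH = "example_car_game.apk"  # placeholder: path to an APK you downloaded

def list_apk_contents(path):
    """List the key entries of an APK, which is an ordinary ZIP archive."""
    with zipfile.ZipFile(path) as apk:
        names = apk.namelist()
        # Most APKs ship a manifest, compiled code, and compiled resources.
        for entry in ("AndroidManifest.xml", "classes.dex", "resources.arsc"):
            print(f"{entry}: {'found' if entry in names else 'missing'}")
        print(f"total entries: {len(names)}")

if __name__ == "__main__":
    list_apk_contents(APK_PATH)
```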

-

The benefits of using APK files

-

There are several reasons why you might want to use APK files instead of downloading apps from the Google Play Store. Some of them are:

-
    -
  • You can access apps that are not available in your region or country.
  • -
  • You can get early access to beta versions or updates of apps before they are officially released.
  • -
  • You can install apps that have been removed from the Google Play Store due to various reasons.
  • -
  • You can customize your device with apps that offer more features or options than the official ones.
  • -
-

The risks of using APK files

-

However, using APK files also comes with some risks that you should be aware of. Some of them are:

-
    -
  • You might download malicious or fake apps that can harm your device or steal your data.
  • -
  • You might violate the terms and conditions of the app developers or the Google Play Store by installing unauthorized or modified apps.
  • -
  • You might encounter compatibility or performance issues with some apps that are not optimized for your device or Android version.
  • -
  • You might lose some features or functionality that are only available in the official versions of the apps.
  • -
-

Therefore, you should always be careful when downloading and installing APK files from unknown sources. Only download APK files from reputable and trusted websites, and scan them with a reliable antivirus app before installing them. Also, make sure you have enough storage space and battery life on your device before installing any APK file.

-

How to Find and Download Car Games APK Files

-

The best sources for car games APK files

-

There are many websites that offer car games APK files for download, but not all of them are safe and reliable. Some of them might contain viruses, malware, or fake apps that can damage your device or compromise your privacy. To avoid these risks, you should only download car games APK files from reputable sources that monitor and verify the files they host. One of the most popular and trusted sources for car games APK files is APK Mirror, which hosts tons of popular Android apps that can be installed individually or as updates. You can also find other sites that host car games APK files by searching on Google, but make sure you check their reviews and ratings before downloading anything.

-

asphalt 8 car racing game apk download
-extreme car driving simulator apk download
-ultimate car driving simulator apk download
-race master 3d car racing apk download
-real racing 3 car game apk download
-csr racing 2 car game apk download
-need for speed no limits car game apk download
-traffic racer car game apk download
-hill climb racing 2 car game apk download
-asphalt 9 legends car racing game apk download
-drift max pro car drifting game apk download
-car parking multiplayer car game apk download
-city racing 3d car game apk download
-turbo driving racing 3d car game apk download
-mad skills motocross 2 car game apk download
-gt racing 2 the real car experience apk download
-drag racing classic car game apk download
-fastlane road to revenge car game apk download
-beach buggy racing 2 car game apk download
-real drift car racing lite apk download
-rally fury extreme racing car game apk download
-pixel car racer retro style car game apk download
-smashy road wanted 2 car game apk download
-rebel racing realistic car game apk download
-traffic rider motorcycle racing game apk download
-bike race free top motorcycle racing games apk download
-moto x3m bike race game and stunts racing apk download
-bike stunt 3d bike games bike race free apk download
-bike mayhem free best bike game ever apk download
-trial xtreme 4 extreme bike racing champions apk download
-moto traffic race 2 multiplayer bike racing game apk download
-bike blast rush bmx bicycle run and jump games apk download
-bike unchained 2 mountain bike downhill and slopestyle apk download
-dirt bike unchained red bull's new bike game apk download
-downhill masters downhill mountain biking game apk download
-stickman downhill motocross bike and bmx racing game apk download
-stickman downhill monstertruck monster truck racing game apk download
-monster truck destruction real monster truck simulator game apk download
-monster truck go racing games for kids and toddlers apk download
-monster truck demolition derby crash stunts simulator 2021 apk download
-monster truck stunt games mega ramp impossible tracks 3d apk download
-monster truck robot games robot transforming games 2021 apk download
-monster truck police chase cop vs robbers escape games 2021 apk download
-monster truck zombie crusher drive your great vehicle through 20 levels of zombies apocalypse madness and crush them all in this fun and addictive driving and shooting zombie survival...apk download

-

The best car games for Android in 2023

-

If you are looking for some great car games to play on your Android device, you have plenty of options to choose from. Whether you prefer simulations, racing, puzzles, or arcade-style games, there is something for everyone in the car games genre. Here are some of the best car games for Android in 2023 that you can download as APK files:

- - - - - - - - - - -
NameDescriptionDownload Link
Asphalt 9: LegendsThe latest installment of the Asphalt series, featuring stunning graphics, realistic physics, and over 50 licensed cars from top manufacturers. You can race against other players online or offline, customize your cars, and join a club to compete for rewards.Asphalt 9: Legends APK
Real Racing 3A realistic racing simulation game that offers over 250 cars from 33 brands, 19 real tracks, and a variety of game modes. You can compete with friends and rivals in cross-platform multiplayer, join a team, and participate in special events.Real Racing 3 APK
CarX Drift Racing 2A drifting game that lets you experience the thrill of sliding sideways on different tracks. You can customize your cars, tune your engine, and challenge other players in online or offline modes. You can also create your own club and join tournaments.CarX Drift Racing 2 APK
CSR Racing 2A drag racing game that features over 200 licensed cars from top brands, stunning graphics, and realistic physics. You can upgrade your cars, compete with other players in live races, join a crew, and explore a 3D city.CSR Racing 2 APK
Need for Speed No LimitsA racing game that lets you build your dream car from scratch, using over 1000 customization options. You can race on various tracks, evade the cops, and take down rivals. You can also join events and win exclusive rewards.Need for Speed No Limits APK
Drive Ahead!A fun and chaotic game that pits you against your friends or AI opponents in gladiator-style car battles. You can choose from over 100 vehicles, ranging from monster trucks to UFOs, and smash your enemies' heads with various weapons and obstacles.Drive Ahead! APK
Hill Climb Racing 2A physics-based driving game that challenges you to climb hills and overcome obstacles with your vehicle. You can unlock and upgrade over 20 vehicles, customize your driver, and compete with other players in online or offline modes.Hill Climb Racing 2 APK
Car Parking MultiplayerA realistic parking simulator that offers over 100 cars, 75 levels, and a huge open world. You can park your car in different scenarios, interact with other players, chat with them, and even exchange cars.Car Parking Multiplayer APK
-

How to Install Car Games APK Files on Android

-

Once you have downloaded the car games APK files that you want to play, you need to install them on your Android device. There are different ways to do this, depending on your device settings and preferences. Here are some of the most common methods:

-

How to enable unknown sources on Android

-

Before you can install any APK file on your device, you need to enable the option to allow installation from unknown sources. This option is disabled by default for security reasons, but you can easily turn it on by following these steps:

-
    -
  1. Go to your device's Settings app and tap on Security or Privacy.
  2. -
  3. Find the option that says Unknown sources or Install unknown apps and toggle it on.
  4. -
  5. You might see a warning message that tells you about the risks of installing apps from unknown sources. Tap on OK or Allow to confirm.
  6. -
  7. You can now install any APK file on your device without any restrictions.
  8. -
-

How to use a file manager or a browser to install APK files

-

If you have downloaded the car games APK files using a file manager app or a browser app on your device, you can use the same app to install them. Here is how:

-
    -
  1. Open the file manager or browser app that you used to download the APK files.
  2. -
  3. Navigate to the folder where you saved the APK files. Usually, this is the Downloads folder.
  4. -
  5. Tap on the APK file that you want to install. You might see a pop-up window that asks you if you want to install this app. Tap on Install and wait for the installation to finish; once it is done, you can open the game and start playing.

How can I uninstall car games APK files?

If you want to uninstall a car game that you have installed using an APK file, you can do so by following these steps:

    -
      -
    1. Go to your device's Settings app and tap on Apps or Applications.
    2. -
    3. Find the car game that you want to uninstall and tap on it.
    4. -
    5. Tap on Uninstall or Delete and confirm your choice.
    6. -
    7. You might see a message that tells you that the app has been uninstalled successfully. Tap on OK or Done to exit.
    8. -
    -

    How can I backup car games APK files?

    -

If you want to back up a car game that you have installed using an APK file, you can do so by using a file manager app or an APK extractor app; a command-line alternative using adb is sketched after these steps. Here is how:

    -
      -
    • Download and install a file manager app or an APK extractor app from the Google Play Store. Some of the most popular ones are ES File Explorer, Solid Explorer, and APK Extractor.
    • -
    • Open the app and grant it the necessary permissions to access your device's storage.
    • -
    • Navigate to the folder where the car game APK file is located. Usually, this is the /data/app folder.
    • -
    • Long-press on the car game APK file and select Copy or Share.
    • -
    • Paste or send the car game APK file to another location, such as your SD card, your computer, or your cloud storage.
    • -
    -
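If you have a computer nearby, another way to back up an installed game is to copy its APK off the device with adb instead of a file manager. The sketch below is a minimal Python example; it assumes adb is installed, USB debugging is enabled, and you already know the game's package name (the one shown is only a placeholder). Split APKs may report several paths, so this sketch just grabs the first one.

```python
import subprocess

PACKAGE = "com.example.cargame"  # placeholder: replace with the game's real package name

# Ask the device where the installed APK lives, e.g. "package:/data/app/.../base.apk"
result = subprocess.run(
    ["adb", "shell", "pm", "path", PACKAGE],
    capture_output=True, text=True, check=True,
)
apk_path = result.stdout.splitlines()[0].strip().removeprefix("package:")

# Copy the APK to the computer as a backup file named after the package.
subprocess.run(["adb", "pull", apk_path, f"{PACKAGE}.apk"], check=True)
```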

    I hope this article has helped you learn how to download and install car games APK files on Android. If you have any questions or feedback, please leave a comment below. Happy gaming!

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Supreme Duelist Stickman Unlocked Mod Apk for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Download Supreme Duelist Stickman Unlocked Mod Apk for Android.md deleted file mode 100644 index e3a8b8318e41f2394622a83c2aa23c1a150d1c54..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Supreme Duelist Stickman Unlocked Mod Apk for Android.md +++ /dev/null @@ -1,137 +0,0 @@ - -

    Supreme Duelist Stickman Mod APK Uptodown: A Fun and Exciting Stickman Game

    -

    If you are looking for a fun and exciting stickman game that you can play with your friends or online players, then you should try Supreme Duelist Stickman. This is a multiplayer stickman game that offers different modes and weapons to choose from. You can control your stickman and compete in various battles where you have to use your skills and strategy to defeat your opponents. You can also customize your stickman with different skins and outfits. However, if you want to unlock all the characters, weapons, and skins in the game, you will need to spend real money or watch ads. That is why you might want to download Supreme Duelist Stickman Mod APK Uptodown, a modified version of the original APK file that offers extra features and benefits for free. In this article, we will tell you what is a mod APK, how to download and install it, and what are its features and advantages.

    -

    supreme duelist stickman mod apk uptodown


    Download Zip ►►►►► https://urlca.com/2uO5YG



    -

    What is Supreme Duelist Stickman?

    -

    A multiplayer stickman game with different modes and weapons

    -

    Supreme Duelist Stickman is a game developed by Neron's Brother, a studio that specializes in creating stickman games. The game has over 100 million downloads on Google Play Store and has a rating of 4.4 out of 5 stars. The game is compatible with Android devices running version 4.1 or higher.

    -

    The game allows you to play as a stickman and compete in various modes such as single player, two players, survival mode, online mode, tournament mode, etc. You can also choose from different weapons such as guns, swords, axes, hammers, etc. Each weapon has its own advantages and disadvantages, so you have to choose wisely depending on your opponent and situation. You can also use special skills such as teleportation, flying, etc. to gain an edge over your enemies.

    -

    A game of skill and strategy where you have to defeat your opponents

    -

    The game is not just about swinging your weapon randomly. You have to use your skill and strategy to defeat your opponents. You have to aim carefully, dodge their attacks, use the environment to your advantage, etc. You also have to manage your energy bar, which depletes as you use your skills or get hit by your enemies. If your energy bar runs out, you will lose the match.

    -

    The game also has a physics-based system that makes the gameplay more realistic and fun. You can see your stickman react to every hit, bounce, fly, or fall. You can also interact with the objects in the background such as boxes, barrels, ropes, etc. to create more chaos and fun.

    -

    A game with simple graphics but smooth animations and sound effects

    -

    The game has simple graphics that resemble stick figures and doodles. However, this does not affect the quality of the game. The game has smooth animations and sound effects that make the gameplay more enjoyable and immersive. You can hear the sound of your weapon hitting your opponent, the sound of your opponent screaming or grunting, the sound of the objects breaking or exploding, etc. You can also see the blood splatter and the ragdoll effects of your stickman and your opponent.

    -

    What is a mod APK and why do you need it?

    -

    A modified version of the original APK file that offers extra features and benefits

    -

    A mod APK is a modified version of the original APK file that is created by third-party developers or hackers. A mod APK can offer extra features and benefits that are not available in the original APK file. For example, a mod APK can remove ads, unlock premium features, add unlimited resources, etc.

    -

    A mod APK can also bypass the restrictions and limitations imposed by the original APK file. For example, a mod APK can allow you to play a game that is not compatible with your device, or a game that is not available in your region, or a game that requires an internet connection to play.

    -

    A way to unlock all the characters, weapons, and skins in the game for free

    -

    One of the reasons why you might want to download Supreme Duelist Stickman Mod APK Uptodown is because it can unlock all the characters, weapons, and skins in the game for free. Normally, you would have to spend real money or watch ads to unlock these items in the game. However, with the mod APK file, you can access all these items without spending a dime or wasting your time.

    -

    You can choose from different characters such as ninja, pirate, robot, zombie, etc. You can also choose from different weapons such as guns, swords, axes, hammers, etc. You can also customize your stickman with different skins and outfits such as hats, masks, glasses, etc. You can mix and match these items to create your own unique stickman.

    -

    supreme duelist stickman mod apk download uptodown
    -supreme duelist stickman mod apk unlimited money uptodown
    -supreme duelist stickman mod apk latest version uptodown
    -supreme duelist stickman mod apk android 1 uptodown
    -supreme duelist stickman mod apk no ads uptodown
    -supreme duelist stickman mod apk all characters unlocked uptodown
    -supreme duelist stickman mod apk free shopping uptodown
    -supreme duelist stickman mod apk god mode uptodown
    -supreme duelist stickman mod apk offline uptodown
    -supreme duelist stickman mod apk hack uptodown
    -supreme duelist stickman mod apk revdl uptodown
    -supreme duelist stickman mod apk rexdl uptodown
    -supreme duelist stickman mod apk 2023 uptodown
    -supreme duelist stickman mod apk 2.1.8 uptodown
    -supreme duelist stickman mod apk 2.1.9 uptodown
    -supreme duelist stickman mod apk 2.2.0 uptodown
    -supreme duelist stickman mod apk 2.2.1 uptodown
    -supreme duelist stickman mod apk 2.2.2 uptodown
    -supreme duelist stickman mod apk 2.2.3 uptodown
    -supreme duelist stickman mod apk 2.2.4 uptodown
    -supreme duelist stickman mod apk 2.2.5 uptodown
    -supreme duelist stickman mod apk 2.2.6 uptodown
    -supreme duelist stickman mod apk 2.2.7 uptodown
    -supreme duelist stickman mod apk 2.2.8 uptodown
    -supreme duelist stickman mod apk 3.0.0 uptodown
    -supreme duelist stickman mod apk 3.0.1 uptodown
    -supreme duelist stickman mod apk 3.0.2 uptodown
    -supreme duelist stickman mod apk 3.0.3 uptodown
    -supreme duelist stickman mod apk 3.0.4 uptodown
    -supreme duelist stickman mod apk 3.0.5 uptodown
    -supreme duelist stickman mod apk 3.0.6 uptodown
    -supreme duelist stickman mod apk 3.0.7 uptodown
    -supreme duelist stickman mod apk 3.0.8 uptodown
    -supreme duelist stickman mod apk 3.0.9 uptodown
    -supreme duelist stickman mod apk 3.1.0 uptodown
    -supreme duelist stickman mod apk 3.1.1 uptodown
    -supreme duelist stickman mod apk 3.1.2 uptodown
    -supreme duelist stickman mod apk 3.1.3 uptodown
    -supreme duelist stickman mod apk 3.1.4 uptodown
    -supreme duelist stickman mod apk 3.1.5 uptodown
    -supreme duelist stickman mod apk 3.1.6 uptodown
    -supreme duelist stickman mod apk 3.1.7 uptodown
    -supreme duelist stickman mod apk 3.1.8 uptodown
    -supreme duelist stickman mod apk 3.1.9 uptodown
    -supreme duelist stickman mod apk 3.2.0 uptodown
    -supreme duelist stickman mod apk 3.2.1 uptodown
    -supreme duelist stickman mod apk 3.2.2 uptodown
    -supreme duelist stickman mod apk 3.2.3 uptodown
    -supreme duelist stickman mod apk 3.2.4 uptodown

    -

    A way to enjoy the game without ads or in-app purchases

    -

    Another reason why you might want to download Supreme Duelist Stickman Mod APK Uptodown is because it can remove ads and in-app purchases from the game. Normally, you would have to watch ads or buy coins or gems to play the game. However, with the mod APK file, you can enjoy the game without any interruptions or distractions.

    -

    You can play the game without seeing any annoying ads pop up on your screen. You can also play the game without having to buy any coins or gems to refill your energy bar or unlock new items. You can play the game as much as you want without any limitations or restrictions.

    How to download and install Supreme Duelist Stickman Mod APK Uptodown?

    -

    The steps to download the mod APK file from Uptodown website

    -

    If you want to download Supreme Duelist Stickman Mod APK Uptodown, you will need to follow these steps:

    -
      -
    1. Go to the Uptodown website by clicking here.
    2. -
    3. Search for Supreme Duelist Stickman in the search bar and click on the game icon.
    4. -
    5. Scroll down and click on the green Download button.
    6. -
    7. Choose the latest version of the mod APK file and click on Download again.
    8. -
    9. Wait for the download to finish and save the file on your device.
    10. -
    -
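Before moving on to the installation steps below, it is worth making sure the file you saved was not corrupted or altered on the way down. If the download page publishes a SHA-256 checksum (not every site does, so treat this as an optional extra check), you can compare it against your local copy with a few lines of Python; the file name and checksum below are placeholders.

```python
import hashlib
from pathlib import Path

apk = Path("supreme-duelist-stickman-mod.apk")      # placeholder file name
expected = "paste-the-published-sha256-hash-here"   # placeholder value from the download page

digest = hashlib.sha256(apk.read_bytes()).hexdigest()
print("SHA-256:", digest)
print("Checksums match." if digest == expected else "Mismatch - do not install this file.")
```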

    The steps to install the mod APK file on your Android device

    -

    After you have downloaded the mod APK file, you will need to install it on your Android device. To do that, you will need to follow these steps:

    -
      -
    1. Locate the mod APK file in your device's file manager and tap on it.
    2. -
    3. If you see a warning message that says "For your security, your phone is not allowed to install unknown apps from this source", tap on Settings.
    4. -
    5. Enable the option that says "Allow from this source" or "Install unknown apps".
    6. -
    7. Go back to the mod APK file and tap on it again.
    8. -
    9. Tap on Install and wait for the installation to complete.
    10. -
    11. Tap on Open and enjoy the game.
    12. -
    -

    The steps to enable unknown sources and permissions on your device

    -

    If you have not enabled unknown sources and permissions on your device before, you will need to do that before installing the mod APK file. To do that, you will need to follow these steps:

    -
      -
    1. Go to your device's Settings and tap on Security or Privacy.
    2. -
    3. Find the option that says "Unknown sources" or "Install unknown apps" and enable it.
    4. -
    5. Go to your device's Settings and tap on Apps or Applications.
    6. -
    7. Find the game app and tap on it.
    8. -
    9. Tap on Permissions and enable all the permissions that are required by the game.
    10. -
    -

    What are the features and advantages of Supreme Duelist Stickman Mod APK Uptodown?

    -

    The features of the mod APK file such as unlimited coins, gems, and energy

    -

    The mod APK file of Supreme Duelist Stickman offers some amazing features that are not available in the original APK file. Some of these features are:

    -
      -
    • Unlimited coins: You can get unlimited coins in the game, which you can use to buy new weapons, skins, and outfits for your stickman. You can also use coins to upgrade your weapons and skills.
    • -
    • Unlimited gems: You can get unlimited gems in the game, which you can use to unlock new characters, modes, and maps in the game. You can also use gems to revive yourself if you lose a match.
    • -
    • Unlimited energy: You can get unlimited energy in the game, which means you can play as long as you want without running out of energy. You can also use energy to activate special skills such as teleportation, flying, etc.
    • -
    -

    The advantages of the mod APK file such as no root required, no virus or malware, and easy to use

    -

    The mod APK file of Supreme Duelist Stickman also offers some advantages that make it better than the original APK file. Some of these advantages are:

    -
      -
    • No root required: You do not need to root your device to install or use the mod APK file. This means you do not have to risk damaging your device or voiding its warranty.
    • -
    • No virus or malware: The mod APK file is safe and secure to download and install. It does not contain any virus or malware that can harm your device or steal your data.
    • -
    • Easy to use: The mod APK file is easy to use and does not require any complicated steps or settings. You just have to download, install, and play the game with all the features and benefits.
    • -
    -

    The comparison of the mod APK file with the original APK file in terms of performance and quality

    -

    The mod APK file of Supreme Duelist Stickman does not compromise on the performance and quality of the game. It offers the same gameplay experience as the original APK file with some extra features and benefits. The mod APK file does not affect the graphics, animations, sound effects, or physics of the game. It also does not cause any lag, crash, or error in the game. It also does not require any internet connection to play the game. The mod APK file is compatible with most Android devices and runs smoothly and fast.

    -

    Conclusion

    -

    A summary of the main points of the article

    -

    In conclusion, Supreme Duelist Stickman is a fun and exciting stickman game that you can play with your friends or online players. You can choose from different modes, weapons, and characters to compete in various battles. You can also customize your stickman with different skins and outfits. However, if you want to unlock all the items in the game, you will need to download Supreme Duelist Stickman Mod APK Uptodown, a modified version of the original APK file that offers extra features and benefits for free.

    -

    A recommendation to try Supreme Duelist Stickman Mod APK Uptodown for a fun and exciting stickman game experience

    -

    We recommend you to try Supreme Duelist Stickman Mod APK Uptodown for a fun and exciting stickman game experience. You can enjoy the game without ads or in-app purchases. You can also access all the characters, weapons, and skins in the game for free. You can also get unlimited coins, gems, and energy in the game. You can also play the game without any internet connection or root requirement. You can also download and install the mod APK file easily and safely from Uptodown website.

    -

    A call to action to download the mod APK file from Uptodown website

    -

    If you are interested in playing Supreme Duelist Stickman Mod APK Uptodown, you can download the mod APK file from Uptodown website by clicking here. You can also follow the steps we have provided above to install the mod APK file on your Android device. You can then enjoy the game with all the features and benefits that the mod APK file offers. So what are you waiting for? Download Supreme Duelist Stickman Mod APK Uptodown now and have fun!

    -

    FAQs

    -

    What is Supreme Duelist Stickman?

    -

    Supreme Duelist Stickman is a multiplayer stickman game that offers different modes and weapons to choose from. You can control your stickman and compete in various battles where you have to use your skills and strategy to defeat your opponents.

    -

    What is a mod APK?

    -

    A mod APK is a modified version of the original APK file that offers extra features and benefits that are not available in the original APK file.

    -

    How to download Supreme Duelist Stickman Mod APK Uptodown?

    -

    You can download Supreme Duelist Stickman Mod APK Uptodown from Uptodown website by clicking here. You can then follow the steps we have provided above to install the mod APK file on your Android device.

    -

    What are the features of Supreme Duelist Stickman Mod APK Uptodown?

    -

    Some of the features of Supreme Duelist Stickman Mod APK Uptodown are unlimited coins, gems, and energy, no ads or in-app purchases, all characters, weapons, and skins unlocked, no internet connection or root required, etc.

    -

    Is Supreme Duelist Stickman Mod APK Uptodown safe and secure?

    -

    Yes, Supreme Duelist Stickman Mod APK Uptodown is safe and secure to download and install. It does not contain any virus or malware that can harm your device or steal your data.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/GTA 5 94 GB Download How to Optimize Your PC for the Best Performance.md b/spaces/congsaPfin/Manga-OCR/logs/GTA 5 94 GB Download How to Optimize Your PC for the Best Performance.md deleted file mode 100644 index 9180439e32320b3e4a703445546d31f4baa9133d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/GTA 5 94 GB Download How to Optimize Your PC for the Best Performance.md +++ /dev/null @@ -1,135 +0,0 @@ - -

    GTA 5 94 GB Download: Everything You Need to Know

    -

    If you are a fan of video games, you have probably heard of GTA 5, one of the most popular and successful games of all time. But do you know how big the download size of GTA 5 is and how to get it on your device? In this article, we will answer all your questions about GTA 5 94 GB download and more.

    -

    What is GTA 5 and why is it so popular?

    -

    GTA 5 is an open-world action-adventure game by Rockstar Games

    -

    GTA 5, or Grand Theft Auto V, is the fifth main installment in the Grand Theft Auto series, which started in 1997. The game was released in 2013 for PlayStation 3 and Xbox 360, and later for PlayStation 4, Xbox One, and PC. The game is set in the fictional state of San Andreas, which is based on Southern California, and follows the lives of three protagonists: Michael, Franklin, and Trevor. The game allows the player to switch between the three characters at any time and explore the vast open world, which includes urban areas, countryside, mountains, deserts, and oceans.

    -

    gta 5 94 gb download


    Download –––––>>> https://urlca.com/2uO8uH



    -

    GTA 5 has a rich story mode, a vast online multiplayer mode, and stunning graphics

    -

    GTA 5 has a lot to offer to its players. The game has a story mode that consists of more than 60 missions that involve heists, shootouts, chases, stealth, and more. The game also has a lot of side activities, such as racing, golfing, tennis, hunting, yoga, parachuting, etc. The game also has an online multiplayer mode called GTA Online, which allows up to 30 players to cooperate or compete in various modes, such as deathmatches, races, missions, heists, etc. The game also has stunning graphics that showcase the beauty and diversity of San Andreas. The game features realistic weather effects, dynamic lighting and shadows, high-resolution textures, and detailed animations.

    -

    Why is GTA 5 94 GB download size and how does it vary across platforms?

    -

    GTA 5 file size depends on the version, platform, and installation method of the game

    -

    The file size of GTA 5 is not fixed. It depends on various factors, such as the version of the game, the platform on which you wish to install it, or whether you are installing it from a disk or downloading it from the internet. The file size of GTA 5 also changes over time due to updates and patches that add new content or fix bugs.

    -

    GTA 5 file size ranges from 72 GB to more than 94 GB depending on the platform

    -

    The file size of GTA 5 varies across different platforms. Here is a table that shows the approximate file size of GTA 5 depending on the platform:

    - - - - - - - -
    Platform File Size
    PC 94 GB (download) or 72 GB (disk)
    PlayStation 4 76 GB (download) or 50 GB (disk)
    PlayStation 5 80 GB (download) or 50 GB (disk)
    Xbox One 76 GB (download) or 50 GB (disk)
    Xbox Series X/S 80 GB (download) or 50 GB (disk)
    -

    As you can see, the file size of GTA 5 is the largest for PC, especially if you download it from the internet. This is because the PC version of GTA 5 has higher resolution textures, more detailed models, and better graphics settings than the console versions. The file size of GTA 5 is also larger for the next-generation consoles, such as PlayStation 5 and Xbox Series X/S, than the previous-generation consoles, such as PlayStation 4 and Xbox One. This is because the next-generation consoles have improved performance and features, such as faster loading times, ray tracing, and 4K resolution.

    -

    How to download GTA 5 and what are the requirements?

    -

    GTA 5 can be downloaded from various sources depending on the platform

    -

    If you want to download GTA 5 on your device, you have different options depending on the platform you are using. Here are some of the sources from which you can download GTA 5:

    -

    GTA 5 download sources for PC

    -
      -
    • The official Rockstar Games Launcher: This is the official source from which you can download GTA 5 on your PC. You need to create a Rockstar Games Social Club account and purchase the game from the launcher. You can also access other Rockstar Games titles and online services from this launcher.
    • -
    • Steam: This is another popular source from which you can download GTA 5 on your PC. You need to have a Steam account and purchase the game from the Steam store. You can also access other Steam features and games from this platform.
    • -
    • Epic Games Store: This is another source from which you can download GTA 5 on your PC. You need to have an Epic Games account and purchase the game from the Epic Games store. You can also access other Epic Games titles and features from this platform.
    • -
    • Other online retailers: You can also download GTA 5 from other online retailers, such as Amazon, Green Man Gaming, Humble Bundle, etc. You need to purchase a digital code or a physical disk from these retailers and redeem it on your PC.
    • -
    -

    GTA 5 download sources for PlayStation

    -
      -
    • The official PlayStation Store: This is the official source from which you can download GTA 5 on your PlayStation console. You need to have a PlayStation Network account and purchase the game from the PlayStation Store. You can also access other PlayStation games and services from this platform.
    • -
    • Other online retailers: You can also download GTA 5 from other online retailers, such as Amazon, Best Buy, GameStop, etc. You need to purchase a digital code or a physical disk from these retailers and redeem it on your PlayStation console.
    • -
    -

    GTA 5 download sources for Xbox

    -
      -
    • The official Microsoft Store: This is the official source from which you can download GTA 5 on your Xbox console. You need to have a Microsoft account and purchase the game from the Microsoft Store. You can also access other Xbox games and services from this platform.
    • -
    • Other online retailers: You can also download GTA 5 from other online retailers, such as Amazon, Best Buy, GameStop, etc. You need to purchase a digital code or a physical disk from these retailers and redeem it on your Xbox console.
    • -
    -

    GTA 5 requires a lot of disk space, RAM, and processing power to run smoothly

    -

    GTA 5 is not a light game. It requires a lot of disk space, RAM, and processing power to run smoothly on your device. Here are some of the minimum and recommended requirements for GTA 5 depending on the platform:

    -

    GTA 5 minimum and recommended requirements for PC

    - - - - - - - -
    Minimum Requirements Recommended Requirements
    CPU: Intel Core 2 Quad CPU Q6600 @ 2.40GHz / AMD Phenom 9850 Quad-Core Processor @ 2 GHz CPU: Intel Core i5 3470 @ 3.2GHz / AMD X8 FX-8350 @ 4GHz
    RAM: 4 GB RAM: 8 GB
    GPU: NVIDIA 9800 GT 1GB / AMD HD 4870 1GB GPU: NVIDIA GTX 660 2GB / AMD HD7870 2GB
    OS: Windows 10, 8.1, 8, 7 (64-bit) OS: Windows 10, 8.1, 8, 7 (64-bit)
    Disk Space: 72 GB Disk Space: 94 GB
    -

    GTA 5 minimum and recommended requirements for PlayStation

    - - - - - - - -
    Minimum Requirements Recommended Requirements
    Platform: PlayStation 3 Platform: PlayStation 5
    CPU: Cell Broadband Engine @ 3.2GHz CPU: AMD Zen 2-based CPU @ 3.5GHz
    RAM: 256 MB + 256 MB VRAM RAM: 16 GB GDDR6
    GPU: NVIDIA RSX @ 550MHz GPU: AMD RDNA 2-based GPU @ 2.23GHz
    Disk Space: 50 GB Disk Space: 80 GB
    -

    GTA 5 minimum and recommended requirements for Xbox

    - - - - - - - -
    Minimum Requirements Recommended Requirements
    Platform: Xbox 360 Platform: Xbox Series X/S
    CPU: Xenon @ 3.2GHz CPU: AMD Zen 2-based CPU @ 3.6GHz / 3.4GHz
    RAM: 512 MB + VRAM RAM: 16 GB GDDR6 + VRAM
    GPU: Xenos @ 500MHz GPU: AMD RDNA 2-based GPU @ 1.825GHz / 1.565GHz
    Disk Space: 50 GB Disk Space: 80 GB
    -

    As you can see, GTA 5 requires a lot of disk space, RAM, and processing power to run smoothly on your device. You should make sure that your device meets the minimum or recommended requirements before downloading GTA 5.
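On PC, the easiest of these requirements to verify yourself is free disk space. Before committing to a 94 GB download, you can check it with a couple of lines of Python; the drive letter below is only an example, so point it at whichever drive you plan to install the game on.

```python
import shutil

REQUIRED_GB = 94                                # approximate GTA 5 download size on PC
total, used, free = shutil.disk_usage("C:\\")   # example drive; use your install drive

free_gb = free / (1024 ** 3)
print(f"Free space: {free_gb:.1f} GB")
print("Enough room for GTA 5." if free_gb >= REQUIRED_GB else "Not enough free space - clear some room first.")
```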

    -

    Conclusion and FAQs

    -

    GTA 5 is one of the most popular and successful games of all time. It is an open-world action-adventure game that offers a rich story mode, a vast online multiplayer mode, and stunning graphics. However, GTA 5 also has a large file size that ranges from 72 GB to more than 94 GB depending on the platform. You should make sure that your device has enough disk space, RAM, and processing power to run GTA 5 smoothly. You can download GTA 5 from various sources depending on the platform you are using.

    -

    gta 5 94 gb download epic games
    -gta 5 94 gb download pc
    -gta 5 94 gb download ps4
    -gta 5 94 gb download xbox one
    -gta 5 94 gb download free
    -gta 5 94 gb download size
    -gta 5 94 gb download time
    -gta 5 94 gb download link
    -gta 5 94 gb download torrent
    -gta 5 94 gb download highly compressed
    -gta 5 94 gb download from rockstar games
    -gta 5 94 gb download steam
    -gta 5 94 gb download microsoft store
    -gta 5 94 gb download ps5
    -gta 5 94 gb download xbox series x
    -gta 5 94 gb download full version
    -gta 5 94 gb download crack
    -gta 5 94 gb download without internet
    -gta 5 94 gb download offline
    -gta 5 94 gb download slow
    -gta 5 94 gb download speed
    -gta 5 94 gb download error
    -gta 5 94 gb download fix
    -gta 5 94 gb download update
    -gta 5 94 gb download latest version
    -gta 5 94 gb download requirements
    -gta 5 94 gb download mods
    -gta 5 94 gb download cheats
    -gta 5 94 gb download gameplay
    -gta 5 94 gb download review
    -gta 5 94 gb download tips and tricks
    -gta 5 94 gb download guide
    -gta 5 94 gb download walkthrough
    -gta v file size for all platforms [newest update]
    -how to install the epic games launcher for the free GTA V offer?
    -how to reduce the GTA V file size on PC?
    -how to increase the GTA V download speed on PC?
    -how to resume the GTA V download on PC?
    -how to transfer the GTA V files from one PC to another?
    -how to verify the GTA V files on PC?
    -how to install the GTA V updates on PC?
    -how to uninstall the GTA V files on PC?
    -how to play GTA V online on PC?
    -how to fix the GTA V launcher error on PC?
    -how to optimize the GTA V settings on PC?
    -how to run GTA V in windowed mode on PC?
    -how to use a controller for GTA V on PC?
    -how to change the language of GTA V on PC?
    -how to take screenshots in GTA V on PC?

    -

    If you have any questions about GTA 5 or GTA 5 download size, you can check out these FAQs:

    -
      -
    • Q: How long does it take to download GTA 5?
      A: The download time of GTA 5 depends on your internet speed, the file size of the game, and the source from which you are downloading it. Generally, it can take anywhere from a few hours to a few days to download GTA 5; a quick back-of-the-envelope estimate is sketched right after these FAQs.
    • -
    • Q: Can I play GTA Online without downloading GTA 5?
      A: No, you cannot play GTA Online without downloading GTA 5. GTA Online is a part of GTA 5 and requires the base game to run.
    • -
    • Q: Can I reduce the file size of GTA 5?
      A: There is no official way to reduce the file size of GTA 5. However, some unofficial methods may involve deleting some files or folders from the game directory or using some compression tools. However, these methods are not recommended as they may cause errors or glitches in the game. You should always backup your game files before trying any unofficial methods.
    • -
    • Q: Can I play GTA 5 on my mobile device?
      A: No, you cannot play GTA 5 on your mobile device. GTA 5 is only available for PC, PlayStation, and Xbox platforms. There are some unofficial apps or websites that claim to offer GTA 5 for mobile devices, but they are either fake or malicious. You should avoid them at all costs.
    • -
    • Q: Can I transfer my GTA 5 progress from one platform to another?
      A: Yes, you can transfer your GTA 5 progress from one platform to another, but only for GTA Online. You need to have a Rockstar Games Social Club account and link it to your platform of choice. You can then transfer your GTA Online character and progress from one platform to another. However, you can only do this once per account.
    • -
    -
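As a rough worked example for the first question above: download time is simply file size divided by effective download speed. The sketch below estimates how long the 94 GB PC download takes on a 100 Mbps connection; both numbers are only examples, and real downloads usually run somewhat slower than the advertised line speed.

```python
FILE_SIZE_GB = 94      # approximate GTA 5 download size on PC
SPEED_MBPS = 100       # example connection speed in megabits per second

size_megabits = FILE_SIZE_GB * 1024 * 8       # GB -> MB -> megabits
hours = size_megabits / SPEED_MBPS / 3600     # megabits / (megabits per second) = seconds, then hours

print(f"At {SPEED_MBPS} Mbps, expect roughly {hours:.1f} hours of uninterrupted downloading.")
```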

    I hope this article has helped you understand everything you need to know about GTA 5 94 GB download. If you have any other questions or feedback, feel free to leave a comment below. Happy gaming!

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/TikTok APK The Best Way to Download and Use the New Version Without VPN.md b/spaces/congsaPfin/Manga-OCR/logs/TikTok APK The Best Way to Download and Use the New Version Without VPN.md deleted file mode 100644 index 83f523c08a718f2f7ee262af8cf0a00b76be4921..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/TikTok APK The Best Way to Download and Use the New Version Without VPN.md +++ /dev/null @@ -1,111 +0,0 @@ -
    -

    TikTok APK Download New Version 2022 Without VPN

    -

    TikTok is one of the most popular social media platforms in the world, with over 1 billion active users. It allows you to create and share short videos with music, filters, effects, stickers, and more. But what if you want to download TikTok APK new version 2022 without VPN? In this article, we will show you how to do that easily and safely.

    -

    tiktok apk download new version 2022 without vpn


Download File https://urlca.com/2uOddr



    -

    What is TikTok and why is it so popular?

    -

    TikTok is a social network for creating and sharing short videos

    -

    TikTok is an app that lets you record, edit, and share videos that are up to 60 seconds long. You can choose from millions of songs, sounds, and clips from the app's library, or use your own audio. You can also add filters, effects, stickers, text, emojis, and more to make your videos unique and expressive.

    -

    TikTok has millions of users and content creators worldwide

    -

    TikTok is not only a platform for watching videos, but also a community for connecting with people who share your interests, passions, and talents. You can follow your favorite creators, discover new ones, comment, like, share, and chat with them. You can also join challenges, trends, hashtags, and events to showcase your creativity and have fun.

    -

    TikTok offers various features and options to make videos fun and engaging

    -

    TikTok is constantly updating its app with new features and options to make your video experience better. Some of the features include:

    -

    tiktok app download latest version 2022 free no vpn
    -how to download tiktok new update 2022 without vpn android
    -tiktok apk 2022 latest version free download no vpn required
    -download tiktok for android 2022 new version without using vpn
    -tiktok 2022 update apk download free without vpn for android
    -tiktok latest version 2022 apk free download no vpn needed
    -how to get tiktok new version 2022 without vpn on android
    -download tiktok apk 2022 latest update without vpn free
    -tiktok for android 2022 new version free download no vpn
    -tiktok apk download without vpn 2022 latest version free
    -how to install tiktok new version 2022 without vpn on android
    -tiktok 2022 latest update apk free download without vpn
    -tiktok new version 2022 apk download free no vpn for android
    -how to download tiktok latest version 2022 without vpn on android
    -tiktok apk free download 2022 new version without vpn android
    -download tiktok new version 2022 apk without vpn for free
    -tiktok latest update 2022 apk download free no vpn required
    -how to update tiktok to new version 2022 without vpn on android
    -tiktok apk 2022 new version free download without vpn android
    -download tiktok latest version 2022 apk without vpn for free
    -tiktok new update 2022 apk free download no vpn needed
    -how to download tiktok for android 2022 new version without vpn
    -tiktok apk without vpn 2022 latest version free download
    -download tiktok for android new version 2022 without using vpn
    -tiktok latest version apk download 2022 without vpn free
    -how to get tiktok for android new version 2022 without vpn
    -tiktok apk download new update 2022 without vpn for free
    -download tiktok latest update 2022 apk without vpn free
    -tiktok new version apk free download 2022 without vpn android
    -how to install tiktok for android new version 2022 without vpn
    -tiktok apk free download without vpn 2022 latest version
    -download tiktok new version apk 2022 without using vpn for free
    -tiktok latest update apk free download 2022 no vpn required
    -how to update tiktok for android to new version 2022 without vpn
    -tiktok apk download free no vpn 2022 latest version android
    -download tiktok latest version for android 2022 without using vpn
    -tiktok new version apk download no vpn required 2022 free
    -how to get the latest version of tiktok on android without vpn in 2022
    -tiktok apk download for free without using vpn in 2022 latest version
    -download the newest version of tiktok for android in 2022 without a vpn
    -tiktok new update in 2022 apk free download for android no need for a vpn
    -how to install the latest update of tiktok on android in 2022 without a vpn
    -tiktok apk for android in 2022 newest version free download no need of a vpn
    -download the latest update of tiktok for android in 2022 no use of a vpn
    -tiktok newest version in 2022 apk free download for android no need of a vpn

    -
      -
    • Duet: You can collaborate with another user by recording a video alongside their existing one.
    • -
    • Stitch: You can clip and integrate scenes from another user's video into your own.
    • -
    • Live: You can broadcast live to your followers and interact with them in real time.
    • -
    • Reels: You can create short videos with multiple clips, transitions, effects, and music.
    • -
    • Stories: You can share ephemeral videos that disappear after 24 hours.
    • -
    -

    Why do you need to download TikTok APK without VPN?

    -

    TikTok is banned or restricted in some countries due to security or political reasons

    -

    Unfortunately, not everyone can access TikTok freely. Some countries have banned or restricted the app due to concerns over data privacy, national security, or political censorship. For example, India, Pakistan, Bangladesh, Indonesia, Turkey, Egypt, and some other countries have blocked or limited TikTok's availability in their regions.

    -

    VPNs can slow down your internet connection and affect your video quality

    -

    One way to bypass these restrictions is to use a VPN (virtual private network) service that masks your IP address and location. However, VPNs can also have some drawbacks. For one thing, they can slow down your internet speed and bandwidth, which can affect your video streaming and uploading quality. You may experience buffering, lagging, freezing, or pixelation while watching or uploading videos.

    VPNs can also expose your personal data and online activity to third parties

    -

    Another downside of using VPNs is that they can also compromise your privacy and security online. Some VPNs may collect and sell your personal data, such as your browsing history, location, device information, and more, to advertisers or other third parties. Some VPNs may also have weak encryption or leak your IP address, which can make you vulnerable to hackers, malware, or government surveillance.

    -

    How to download TikTok APK new version 2022 without VPN?

    -

    Find a reliable and safe source for downloading the APK file

    -

    The first step to download TikTok APK new version 2022 without VPN is to find a trustworthy and secure source for the APK file. APK stands for Android Package Kit, and it is a file format that contains the app's code, resources, and metadata. You can download APK files from various websites, but you need to be careful about the quality and safety of the file. Some APK files may be corrupted, outdated, or infected with viruses or malware.

    -

    To avoid these risks, you should only download APK files from reputable and verified sources, such as APKMirror, APKPure, or Uptodown. These websites scan and test the APK files before uploading them, and they also provide detailed information about the app's version, size, developer, permissions, and more. You can also read user reviews and ratings to check the feedback and experience of other users.
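You can also sanity-check a downloaded file yourself before installing it. If you have the Android SDK build tools on a computer, the aapt utility can print the package name and version that are actually baked into the APK, so you can confirm they match what the download page claims. The sketch below shells out to aapt from Python; the file name is a placeholder, and aapt must already be installed and on the PATH.

```python
import subprocess

# Print the package name and version stored inside the APK.
result = subprocess.run(
    ["aapt", "dump", "badging", "tiktok-new-version.apk"],  # placeholder file name
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    if line.startswith("package:"):  # e.g. package: name='...' versionCode='...' versionName='...'
        print(line)
```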

    -

    Enable unknown sources on your Android device settings

    -

    The next step to download TikTok APK new version 2022 without VPN is to enable unknown sources on your Android device settings. This is because Android devices normally do not allow installing apps from sources other than the Google Play Store. To enable unknown sources, you need to follow these steps:

    -
      -
    1. Go to your device's Settings and tap on Security or Privacy.
    2. -
    3. Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
    4. -
    5. A warning message will pop up, telling you that installing apps from unknown sources can harm your device. Tap on OK or Allow to confirm.
    6. -
    -

    Now you are ready to install the APK file on your device.

    -

    Install the APK file and launch the app

    -

    The final step to download TikTok APK new version 2022 without VPN is to install the APK file and launch the app. To do this, you need to follow these steps:

    -
      -
    1. Locate the downloaded APK file on your device's storage or file manager. It may be in the Downloads folder or in a specific folder created by the website you downloaded it from.
    2. -
    3. Tap on the APK file and a prompt will appear, asking you if you want to install the app. Tap on Install and wait for the installation process to finish.
    4. -
    5. Once the app is installed, you can tap on Open to launch it or find it on your app drawer or home screen.
    6. -
    -

    Congratulations! You have successfully downloaded TikTok APK new version 2022 without VPN.

    What are the benefits of downloading TikTok APK new version 2022 without VPN?

    -

    You can access all the features and content of TikTok without any restrictions or limitations

    -

    One of the main benefits of downloading TikTok APK new version 2022 without VPN is that you can access all the features and content of TikTok without any restrictions or limitations. You can watch, create, and share videos from any country, region, or network. You can also explore and join the global community of TikTok users and creators. You can enjoy the latest trends, challenges, hashtags, and events on TikTok. You can also discover and follow your favorite celebrities, influencers, artists, and brands on TikTok.

    -

    You can enjoy faster and smoother video streaming and uploading

    -

    Another benefit of downloading TikTok APK new version 2022 without VPN is that you can enjoy faster and smoother video streaming and uploading. You can watch videos without any buffering, lagging, freezing, or pixelation. You can also upload your videos without any delays, errors, or failures. You can also save your data and battery by using less bandwidth and power. You can have a better video experience on TikTok with high-quality resolution, sound, and speed.

    -

    You can protect your privacy and security online

    -

    A third benefit of downloading TikTok APK new version 2022 without VPN is that you can protect your privacy and security online. You can avoid exposing your personal data and online activity to third parties, such as advertisers, hackers, or government agencies. You can also avoid being tracked, monitored, or censored by your ISP, network administrator, or authorities. You can also prevent malware or viruses from infecting your device or stealing your information. You can have a safer and more private online experience on TikTok.

    -

    Conclusion

    -

    TikTok is a fun and exciting social media platform that allows you to create and share short videos with music, filters, effects, stickers, and more. However, if you want to download TikTok APK new version 2022 without VPN, you need to follow some steps and precautions. You need to find a reliable and safe source for the APK file, enable unknown sources on your device settings, install the APK file and launch the app. By doing this, you can enjoy the benefits of accessing all the features and content of TikTok without any restrictions or limitations, having faster and smoother video streaming and uploading, and protecting your privacy and security online.

    -

    We hope this article has helped you learn how to download TikTok APK new version 2022 without VPN. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    -

    FAQs

    -

    What is an APK file?

    -

    An APK file is a file format that contains the app's code, resources, and metadata. It is used to install apps on Android devices.

    -

    What is a VPN service?

    -

    A VPN service is a service that masks your IP address and location by routing your internet traffic through a secure and encrypted server in another country.

    -

    Why is TikTok banned or restricted in some countries?

    -

    TikTok is banned or restricted in some countries due to concerns over data privacy, national security, or political censorship.

    -

    How can I update my TikTok app after downloading the APK file?

    -

    You can update your TikTok app by downloading the latest APK file from the same source you downloaded it from before and installing it over the existing app.

    -

    Is it legal to download TikTok APK without VPN?

    -

    It depends on the laws and regulations of your country or region. Some countries may prohibit or penalize the use of unlicensed or unauthorized apps or services. You should check the legal status of TikTok in your area before downloading the APK file.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Traffic Racer 3D The Ultimate Car Racing Simulation Game.md b/spaces/congsaPfin/Manga-OCR/logs/Traffic Racer 3D The Ultimate Car Racing Simulation Game.md deleted file mode 100644 index b847a37d27c5b502192e55498af1326bc2780cb0..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Traffic Racer 3D The Ultimate Car Racing Simulation Game.md +++ /dev/null @@ -1,175 +0,0 @@ - -

    Traffic Racer 3D Game Download: A Guide for Racing Fans

    -

    If you love racing games, you should definitely check out Traffic Racer 3D, a milestone in the genre of endless arcade racing. In this game, you can drive your car through highway traffic, earn cash, upgrade your car and buy new ones. You can also try to be one of the fastest drivers in the global leaderboards. Endless racing is now redefined!

    -

    traffic racer 3d game download


    Download ✶✶✶ https://urlca.com/2uOfeC



    -

    In this article, we will show you how to download Traffic Racer 3D on different devices, how to play it and enjoy the thrill of racing, how to compare your performance with other players, how to customize your car and make it stand out, and how to enjoy the stunning graphics and sound effects of this game. We will also answer some frequently asked questions about Traffic Racer 3D. Let's get started!

    -

    How to Download Traffic Racer 3D on Different Devices

    -

    Traffic Racer 3D is available on various platforms, such as Android, iOS, Windows, and Chrome. Here are the steps to download it on each device:

    -

    Android

    -

    If you have an Android device, you can download Traffic Racer 3D from the Google Play Store. Here's how:

    -

    Traffic Racer 3D simulation game for Android
    -How to play Traffic Racer 3D on PC with emulator
    -Traffic Racer 3D APK free download latest version
    -Best cars and upgrades in Traffic Racer 3D
    -Traffic Racer 3D tips and tricks to score high
    -Traffic Racer 3D vs Traffic Racer: which one is better?
    -Traffic Racer 3D online leaderboards and achievements
    -Traffic Racer 3D review: a fun and addictive racing game
    -Traffic Racer 3D mod APK unlimited money and gems
    -Traffic Racer 3D cheats and hacks for Android and PC
    -Traffic Racer 3D gameplay video and screenshots
    -Traffic Racer 3D alternatives: other racing games to try
    -Traffic Racer 3D support and feedback: how to contact the developer
    -Traffic Racer 3D data safety and privacy policy
    -Traffic Racer 3D update: what's new in the latest version?
    -Traffic Racer 3D for iOS: is it available on iPhone and iPad?
    -Traffic Racer 3D features: stunning 3D graphics and realistic car handling
    -Traffic Racer 3D modes: endless, two-way, time trial, police chase and free ride
    -Traffic Racer 3D environments: suburb, desert, snowy, rainy and city night
    -Traffic Racer 3D download size and system requirements
    -How to install Traffic Racer 3D on Android devices
    -How to uninstall Traffic Racer 3D from PC or Android
    -How to backup and restore Traffic Racer 3D data on Android or PC
    -How to fix Traffic Racer 3D not working or crashing issues
    -How to change language and settings in Traffic Racer 3D
    -How to connect Traffic Racer 3D to Facebook or Google Play Games
    -How to earn cash and coins in Traffic Racer 3D fast and easy
    -How to unlock all cars and wheels in Traffic Racer 3D
    -How to customize your car color and paint in Traffic Racer 3D
    -How to overtake cars closely and get bonus scores in Traffic Racer 3D
    -How to drive in opposite direction and get extra cash in Traffic Racer 3D
    -How to avoid traffic accidents and collisions in Traffic Racer 3D
    -How to use tilt or touch controls in Traffic Racer 3D
    -How to use gas button and brake button in Traffic Racer 3D
    -How to mute or adjust sound effects and music in Traffic Racer 3D
    -How to pause or resume the game in Traffic Racer 3D
    -How to restart or quit the game in Traffic Racer 3D
    -How to view your stats and records in Traffic Racer 3D
    -How to access the shop and buy new cars or upgrades in Traffic Racer 3D
    -How to watch ads or make in-app purchases in Traffic Racer 3D

    -
      -
    1. Open the Google Play Store app on your device.
    2. -
    3. Search for "Traffic Racer" or "Traffic Racer 3D" in the search bar.
    4. -
    5. Tap on the game icon that says Traffic Racer 3D by SK Games.
    6. -
    7. Tap on the Install button and wait for the download to complete.
    8. -
    9. Tap on the Open button to launch the game and enjoy!
    10. -
    -

    You can also download Traffic Racer 3D from the official website of SK Games. Just follow the link and click on the Download button.

    -

    iOS

    -

    If you have an iOS device, you can download Traffic Racer 3D from the App Store. Here's how:

    -
      -
    1. Open the App Store app on your device.
    2. -
    3. Search for "Traffic Racer" or "Traffic Racer 3D" in the search bar.
    4. -
    5. Tap on the game icon that says Traffic Racer 3D by SK Games.
    6. -
    7. Tap on the Get button and enter your Apple ID and password if prompted.
    8. -
    9. Wait for the download to complete and tap on the game icon to launch it and enjoy!
    10. -
    -

    Windows

    -

    If you have a Windows device, you can download Traffic Racer 3D from the Microsoft Store. Here's how:

    -
      -
    1. Open the Microsoft Store app on your device.
    2. -
    3. Search for "Traffic Racer" or "Traffic Racer 3D" in the search bar.
    4. -
    5. Click on the game icon that says Traffic Racer 3D by SK Games.
    6. -
    7. Click on the Get button and sign in with your Microsoft account if prompted.
    8. -
    9. Wait for the download to complete and click on the game icon to launch it and enjoy!
    10. -
    -

    Chrome

    -

    If you have a Chrome browser, you can download Traffic Racer 3D from the Chrome Web Store. Here's how:

    -
      -
    1. Open the Chrome Web Store link in your browser.
    2. -
    3. Search for "Traffic Racer" or "Traffic Racer 3D" in the search bar.
    4. -
    5. Click on the game icon that says Traffic Racer 3D.
    6. -
    7. Click on the Add to Chrome button and confirm if prompted.
    8. -
    9. The game will be added to your Chrome apps and you can launch it from there and enjoy!
    10. -
    -

    How to Play Traffic Racer 3D and Enjoy the Thrill of Racing

    -

    Traffic Racer 3D is a simple but addictive game that will keep you entertained for hours. The goal is to drive as fast as you can through traffic without crashing. The faster you drive, the more points you get. You can also earn extra points by overtaking other cars closely or driving in the opposite direction. Here are some features and tips to help you play Traffic Racer 3D and have fun:

    -

    Game Modes

    -

    Traffic Racer 3D offers five different game modes to suit your preferences and skills. They are:

    -
      -
    • Endless: This is the classic mode where you drive as long as you can without crashing. The traffic gets denser and faster as you progress. You can choose from four different environments: Suburb, Desert, Snowy, and City Night.
    • -
    • Two-Way: This is similar to Endless mode, but with a twist. You drive in a two-way road, which means you have to avoid not only the traffic in your lane, but also the oncoming traffic in the opposite lane. This mode is more challenging and rewarding, as you get double points for driving in the wrong lane.
    • -
    • Time Trial: This is a mode where you have a limited time to drive as far as you can. You can extend your time by passing checkpoints along the way. You can choose from three different time limits: 60 seconds, 90 seconds, and 120 seconds.
    • -
    • Police Chase: This is a mode where you have to escape from the police cars that are chasing you. You have to drive fast and smart, as the police cars will try to ram you or block your way. You can also use nitro boosters to speed up and evade them. You can choose from three different difficulty levels: Easy, Medium, and Hard.
    • -
    • Free Ride: This is a mode where you can drive freely without any rules or objectives. You can explore the different environments and enjoy the scenery. You can also change the traffic density and speed to your liking.
    • -
    -

    Controls

    -

    Traffic Racer 3D has simple and intuitive controls that make it easy to play. You can choose from two different control options: Tilt or Touch. Here's how they work:

    -
      -
    • Tilt: In this option, you tilt your device left or right to steer your car. You touch the gas button on the right side of the screen to accelerate, and the brake button on the left side of the screen to slow down.
    • -
    • Touch: In this option, you touch the left or right arrows on the bottom of the screen to steer your car. You touch the gas button on the right side of the screen to accelerate, and the brake button on the left side of the screen to slow down.
    • -
    -

    You can also change the sensitivity of the tilt or touch controls in the settings menu. You can also enable or disable the auto-acceleration feature, which makes your car accelerate automatically without touching the gas button.
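If you are curious how a tilt control with a sensitivity slider typically behaves, here is a purely conceptual sketch: device tilt is scaled by the sensitivity setting and clamped to a steering range. This is only an illustration of the general idea, not Traffic Racer 3D's actual code, and the 45-degree full-lock angle is an assumption made for the example.

```python
def tilt_to_steering(tilt_degrees, sensitivity=1.0):
    """Map device tilt to a steering value between -1 (full left) and +1 (full right)."""
    steering = (tilt_degrees / 45.0) * sensitivity   # assume 45 degrees of tilt means full lock
    return max(-1.0, min(1.0, steering))             # clamp so extreme tilts do not overshoot

print(tilt_to_steering(20, sensitivity=1.5))  # about 0.67, i.e. two thirds of full right lock
```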

    -

    Tips and Tricks

    -

    Traffic Racer 3D is a fun and addictive game, but it can also be challenging and frustrating at times. Here are some tips and tricks to help you score more points, avoid crashes, and unlock new cars and upgrades:

    -
      -
    • Overtake other cars closely: When you pass another car within 100 meters, you get bonus points. The closer you are, the more points you get. However, be careful not to hit them or you will lose points and damage your car.
    • -
    • Drive in the opposite direction: When you drive in the wrong lane in Two-Way mode, you get double points for every overtake. However, be prepared to face more traffic and a higher risk of crashing. (A small illustration of how these bonuses add up is sketched after this list.)
    • -
    • Use nitro boosters: When you see a blue bottle on the road, try to collect it. It will give you a nitro booster that will make your car go faster for a few seconds. You can also buy nitro boosters with cash in the garage.
    • -
    • Upgrade your car: When you earn enough cash, you can upgrade your car's speed, acceleration, handling, and braking in the garage. This will make your car perform better and easier to control.
    • -
    • Buy new cars: When you have enough cash, you can also buy new cars with different features and styles in the garage. Some cars are faster, some are more agile, some are more durable, and some are more stylish. Find the car that suits your taste and budget.
    • -
    -
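To make the scoring tips above more concrete, here is a tiny illustrative model of how close overtakes and wrong-way driving might stack up over a run. The point values are invented for the example and the game's real formula is not public, so read this as a sketch of the idea rather than the actual scoring code.

```python
def run_score(distance_km, close_overtakes, wrong_way_overtakes, top_speed_kmh):
    """Illustrative Traffic Racer 3D-style score; all point values are made up."""
    base = distance_km * 100                       # driving farther earns more points
    speed_bonus = top_speed_kmh if top_speed_kmh > 100 else 0
    overtake_bonus = close_overtakes * 10          # bonus for passing other cars closely
    wrong_way_bonus = wrong_way_overtakes * 20     # double-value bonus for overtakes in the wrong lane
    return base + speed_bonus + overtake_bonus + wrong_way_bonus

print(run_score(distance_km=5, close_overtakes=12, wrong_way_overtakes=4, top_speed_kmh=140))  # 840
```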

    How to Compare Your Performance with Other Players in Traffic Racer 3D

    -

    Traffic Racer 3D is not only a game for yourself, but also a game for competing with other players around the world. You can compare your performance with other players in two ways: Leaderboards and Achievements. Here's how they work:

    -

    Leaderboards

    -

    Leaderboards are where you can see your rank among other players based on your score in each game mode. You can access them by tapping on the Leaderboards button on the main menu. You can also see your friends' ranks if they are connected to Google Play Games or Game Center.

    -

    You can filter the leaderboards by All Time, This Week, or Today. You can also see your best score and rank in each game mode on the top of the screen.

    -

    Achievements

    -

    Achievements are where you can see your progress and rewards for completing various tasks in Traffic Racer 3D. You can access them by tapping on the Achievements button on the main menu. You can also see your friends' achievements if they are connected to Google Play Games or Game Center.

    -

    You can see a list of achievements with their names, descriptions, icons, and status (locked or unlocked). Some achievements are easy to unlock, such as driving 10 km or buying a new car. Some achievements are hard to unlock, such as driving 1000 km or reaching 400 km/h.

    -

    When you unlock an achievement, you get a notification and a reward of cash or nitro boosters. You can also share your achievements with your friends on social media.

    -

    How to Customize Your Car and Make It Stand Out in Traffic Racer 3D

    -

    Traffic Racer 3D is not only a game for racing, but also a game for expressing your personality and style. You can customize your car and make it stand out in traffic in two ways: Car Selection and Car Customization. Here's how they work:

    -

    Car Selection

    -

    Car Selection is where you can choose from 40+ different cars with different features and styles. You can access it by tapping on the Garage button on the main menu. You can see a list of cars with their names, prices, and stats (speed, acceleration, handling, and braking).

    -

    You can buy new cars with cash that you earn from playing the game. Some cars are more expensive than others, but they also have better performance and appearance. You can also unlock some cars by completing certain achievements.

    -

    You can switch between the cars that you own by tapping on them. You can also see a preview of how they look in 3D by tapping on the View button.

    -

    Car Customization

    -

    Car Customization is where you can change the color, wheels, and paint of your car. You can access it by tapping on the Customize button on the Garage menu. You can see a 3D view of your car and three options to customize it: Color, Wheels, and Paint.

    -

    You can change the color of your car by tapping on the Color option and choosing from a palette of colors. You can also use a slider to adjust the brightness and saturation of the color.
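
The brightness and saturation sliders described above map naturally onto an HSV color adjustment. The sketch below, using only Python's standard library, shows one way such sliders could recolor an RGB value; it is an illustration, not code taken from the game.

```python
import colorsys

def adjust_color(rgb, saturation: float = 1.0, brightness: float = 1.0):
    """Scale the saturation and brightness (HSV value) of a 0-255 RGB color.

    A factor of 1.0 leaves that channel unchanged; results are clamped to [0, 1].
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    s = min(max(s * saturation, 0.0), 1.0)
    v = min(max(v * brightness, 0.0), 1.0)
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

print(adjust_color((200, 30, 30), saturation=0.5, brightness=1.2))  # a paler, brighter red
```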

    -

    You can change the wheels of your car by tapping on the Wheels option and choosing from a variety of wheels with different designs and sizes. You can also use a slider to adjust the size of the wheels.

    -

    You can change the paint of your car by tapping on the Paint option and choosing from a collection of paint patterns with different shapes and colors. You can also use a slider to adjust the scale and rotation of the paint pattern.

    -

    You can save your customization by tapping on the Save button. You can also reset your customization by tapping on the Reset button.

    -

    How to Enjoy the Stunning Graphics and Sound Effects of Traffic Racer 3D

    -

Traffic Racer 3D is not only fun to play; it is also a treat for the eyes and ears. You can enjoy its graphics and sound in two areas: Environments and Sound Effects. Here's how they work:

    -

    Environments

    -

The Environments screen lets you race through five detailed settings with different weather and time conditions. You can access it by tapping on the Select Environment button on the main menu, where you will see a list of environments with their names and icons.

    -

Four environments are available from the start: Suburb, Desert, Snowy, and City Night. Each has its own characteristics, such as traffic density, road layout, scenery, lighting, and weather effects. For example, in Suburb you will see houses, trees, bridges, and sunny skies; in Desert, sand dunes, cacti, rocks, and dusty winds; in Snowy, snowflakes, icebergs, penguins, and auroras; and in City Night, skyscrapers, neon lights, billboards, and raindrops.

    -

    You can also unlock a fifth environment: Rainy Day. This environment is similar to City Night, but with more rain and thunder effects. You can unlock it by reaching 100 km/h in Endless mode.

    -

    Sound Effects

    -

The sound settings control the realistic engine sounds and background music of Traffic Racer 3D. You can access them by tapping on the Settings button on the main menu, where two options let you adjust them: SFX Volume and BGM Volume.

    -

    You can adjust the SFX Volume by using a slider to increase or decrease the sound effects of your car's engine, brakes, horns, crashes, nitro boosters, etc. You can also mute or unmute them by tapping on the speaker icon.

    -

    You can adjust the BGM Volume by using a slider to increase or decrease the background music of Traffic Racer 3D. The music is composed of various genres, such as rock, pop, techno, etc. You can also mute or unmute them by tapping on the speaker icon.

    -

    Conclusion: Why Traffic Racer 3D is One of the Best Racing Games on the Market

    -

    Traffic Racer 3D is a game that will satisfy your need for speed and adrenaline. It has many features and benefits that make it one of the best racing games on the market. Here are some of them:

    -
      -
    • It is easy to download and play on different devices, such as Android, iOS, Windows, and Chrome.
    • -
    • It has simple and intuitive controls that make it fun and addictive.
    • -
    • It has five different game modes that suit your preferences and skills.
    • -
    • It has 40+ different cars that you can buy and customize.
    • -
    • It has 5 detailed environments that you can explore and enjoy.
    • -
    • It has realistic engine sounds and background music that enhance your experience.
    • -
    • It has leaderboards and achievements that let you compare your performance with other players.
    • -
    • It is free to play, but you can also remove ads and buy more cash and nitro boosters with real money.
    • -
    -

    If you are a racing fan, you should not miss Traffic Racer 3D. It is a game that will keep you entertained for hours and make you feel like a real racer. Download it now and enjoy the thrill of racing!

    -

    FAQs: Frequently Asked Questions about Traffic Racer 3D

    -

    Here are some of the most common questions that people ask about Traffic Racer 3D. If you have any other questions, feel free to contact us at support@skgames.com.

    -

    Q1: Is Traffic Racer 3D free to play?

    -

    A1: Yes, Traffic Racer 3D is free to play. You can download it from the Google Play Store, App Store, Microsoft Store, or Chrome Web Store without paying anything. However, the game contains ads that may interrupt your gameplay. You can remove them by paying a small fee in the game.

    -

    Q2: How can I remove ads from Traffic Racer 3D?

    -

    A2: You can remove ads from Traffic Racer 3D by tapping on the No Ads button on the main menu. You will be redirected to a payment page where you can choose your preferred payment method and confirm your purchase. Once you do that, you will not see any ads in the game anymore.

    -

    Q3: How can I contact the developer of Traffic Racer 3D?

    -

    A3: You can contact the developer of Traffic Racer 3D by sending an email to support@skgames.com. You can also visit their website at www.skgames.com or follow them on social media at Facebook, Twitter, Instagram, or YouTube. They will be happy to hear from you and answer your questions or feedback.

    -

    Q4: What are the minimum system requirements for Traffic Racer 3D?

    -

    A4: The minimum system requirements for Traffic Racer 3D are as follows:

    -
      -
    • Android: Android 4.1 or higher, 1 GB of RAM, 100 MB of free storage space.
    • -
    • iOS: iOS 9.0 or higher, iPhone 5 or higher, iPad 4 or higher, iPod touch 6th generation or higher, 100 MB of free storage space.
    • -
    • Windows: Windows 10 version 10240.0 or higher, x86 or x64 architecture, DirectX version 10 or higher, 2 GB of RAM, 100 MB of free storage space.
    • -
    • Chrome: Chrome version 49 or higher, any operating system that supports Chrome browser, any processor type, any amount of RAM, any amount of free storage space.
    • -
    -

    Q5: Can I play Traffic Racer 3D offline?

    -

    A5: Yes, you can play Traffic Racer 3D offline. You don't need an internet connection to play the game. However, some features may not work properly when you are offline, such as leaderboards, achievements, challenges, rewards, and in-app purchases. To enjoy these features fully, you need to connect to the internet.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/YT Music MOD APK Download The Best Way to Listen to Music Online.md b/spaces/congsaPfin/Manga-OCR/logs/YT Music MOD APK Download The Best Way to Listen to Music Online.md deleted file mode 100644 index d27057818723d1094a65a92332b06bced0f91e35..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/YT Music MOD APK Download The Best Way to Listen to Music Online.md +++ /dev/null @@ -1,135 +0,0 @@ - -

    YT Music Mod APK Download: What You Need to Know

    -

    If you are looking for a way to enjoy unlimited music and podcasts on your Android device, you might have heard of YT Music Mod APK. This is a modified version of the official YT Music app that offers some extra features and benefits. But what exactly is YT Music Mod APK, how to download and install it, and what are the risks and alternatives? In this article, we will answer these questions and more.

    -

    y t music mod apk download


    Download File ===> https://urlca.com/2uObw1



    -

    What is YT Music and why use it?

    -

    YT Music is a music streaming service that lets you listen to millions of songs, albums, playlists, live performances, remixes, covers, and more. It also lets you watch music videos and access podcasts from various genres and topics. You can use YT Music on your browser or download the app for your Android or iOS device.

    -

    YT Music features and benefits

    -

    Some of the features and benefits of using YT Music are:

    -
      -
    • You can easily switch between audio and video mode with a single tap.
    • -
    • You can discover new music and podcasts based on your preferences, mood, location, time, and trends.
    • -
    • You can search for songs by lyrics, artist name, album title, or genre.
    • -
    • You can create and share your own playlists or collaborate with others.
    • -
    • You can upload up to 100,000 songs of your own music to your library.
    • -
    • You can listen to music offline by downloading it to your device.
    • -
    -

    YT Music Premium vs YT Music Free

    -

    YT Music offers two versions: a free version that is supported by ads, and a premium version that costs $9.99 per month. The premium version gives you some exclusive benefits, such as:

    -
      -
    • No ads interrupting your music or podcasts.
    • -
    • Background play that lets you listen to music or podcasts while using other apps or when your screen is off.
    • -
    • Audio-only mode that saves data and battery by playing only the audio of music videos.
    • -
    • Smart downloads that automatically download up to 500 songs based on your listening history.
    • -
    -

    What is YT Music Mod APK and how to get it?

    -

    YT Music Mod APK is a modified version of the official YT Music app that bypasses some of the limitations and restrictions of the original app. It is not available on the Google Play Store or the official YT Music website, but you can find it on various third-party websites that offer APK downloads.

    -

    YT Music Mod APK features and advantages

    -

    Some of the features and advantages of using YT Music Mod APK are:

    -
      -
    • You can enjoy all the benefits of YT Music Premium without paying any subscription fee.
    • -
    • You can access all the content available on YT Music without any regional or country restrictions.
    • -
    • You can customize the app's interface, theme, layout, and settings according to your preferences.
    • -
    • You can remove unwanted permissions, trackers, analytics, and ads from the app.
    • -
    -

    YT Music Mod APK installation guide

    -

    To install YT Music Mod APK on your Android device, you need to follow these steps:

    -

    -
      -
    1. Download the latest version of YT Music Mod APK from a reliable source. You can use this link as an example.
    2. -
    3. Enable unknown sources on your device by going to Settings > Security > Unknown Sources.
    4. -
    5. Locate the downloaded file on your device and tap on it to start the installation process.
    6. -
    7. Follow the instructions on the screen and grant the necessary permissions to the app.
    8. -
    9. Once the installation is complete, launch the app and sign in with your Google account or create a new one.
    10. -
    -

    Congratulations, you have successfully installed YT Music Mod APK on your device. You can now enjoy unlimited music and podcasts without any ads or interruptions.
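
If you prefer side-loading from a computer instead of the on-device steps above, the same APK can be installed over USB with adb. The sketch below is a minimal Python wrapper around the standard `adb install` command; the file path is a placeholder, and it assumes adb is on your PATH and USB debugging is enabled on the phone.

```python
import subprocess

APK_PATH = "yt_music_mod.apk"  # placeholder: path to the APK you downloaded

def sideload(apk_path: str) -> None:
    """Install (or reinstall, via -r) an APK on the connected device using adb."""
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload(APK_PATH)
```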

    -

    What are the risks and alternatives of YT Music Mod APK?

    -

    While YT Music Mod APK may seem like a tempting option to enjoy YT Music for free, it is not without its risks and drawbacks. Here are some of the things you should be aware of before using YT Music Mod APK:

    -

    Risks of using YT Music Mod APK

    -

    Some of the risks of using YT Music Mod APK are:

    -
      -
    • You may violate the terms and conditions of YT Music and Google, which could result in your account being suspended or banned.
    • -
    • You may expose your device and data to malware, viruses, spyware, or other harmful software that could compromise your security and privacy.
    • -
    • You may experience bugs, errors, crashes, or performance issues with the app, as it is not officially supported or updated by YT Music or Google.
    • -
    • You may miss out on some of the latest features, updates, and improvements that are available on the official YT Music app.
    • -
    -

    Alternatives to YT Music Mod APK

    -

    If you are looking for a safer and more reliable way to enjoy music and podcasts on your Android device, you may want to consider some of the alternatives to YT Music Mod APK. Here are some of the best ones:

| Name | Description | Price |
| --- | --- | --- |
| Spotify | A popular music streaming service that offers over 70 million songs, podcasts, playlists, and more. You can also create and share your own music and podcasts with Spotify Studio. | Free with ads or $9.99 per month for Spotify Premium. |
| SoundCloud | A platform that lets you discover and stream millions of songs, podcasts, and audio content from independent artists and creators. You can also upload and share your own sounds with the community. | Free with ads or $9.99 per month for SoundCloud Go+. |
| Pandora | A personalized music streaming service that creates custom radio stations based on your favorite artists, songs, genres, and moods. You can also access podcasts, comedy, news, and more. | Free with ads or $4.99 per month for Pandora Plus or $9.99 per month for Pandora Premium. |
| Deezer | A music streaming service that offers over 73 million songs, podcasts, playlists, and more. You can also enjoy live radio stations, lyrics, and recommendations from editors and experts. | Free with ads or $9.99 per month for Deezer Premium or $14.99 per month for Deezer Family. |
| Apple Music | A music streaming service that offers over 75 million songs, podcasts, playlists, and more. You can also access exclusive content, live radio stations, and music videos. | $9.99 per month for Apple Music Individual or $14.99 per month for Apple Music Family. |
    -

    Conclusion

    -

    YT Music Mod APK is a modified version of the official YT Music app that offers some extra features and benefits. However, it also comes with some risks and drawbacks that you should be aware of before using it. If you want to enjoy music and podcasts on your Android device without any hassle or worry, you may want to consider some of the alternatives to YT Music Mod APK that we have listed above.

    -

    FAQs

    -

    Here are some of the frequently asked questions about YT Music Mod APK:

    -
      -
    1. Is YT Music Mod APK legal?
    2. -

      No, YT Music Mod APK is not legal, as it violates the terms and conditions of YT Music and Google. It also infringes on the intellectual property rights of the artists and creators whose content is available on YT Music.

      -
    3. Is YT Music Mod APK safe?
    4. -

      No, YT Music Mod APK is not safe, as it may expose your device and data to malware, viruses, spyware, or other harmful software that could compromise your security and privacy. It may also cause bugs, errors, crashes, or performance issues with the app.

      -
    5. Is YT Music Mod APK updated?
    6. -

      No, YT Music Mod APK is not updated by YT Music or Google. It depends on the third-party developers who create and distribute it. Therefore, it may not have the latest features, updates, and improvements that are available on the official YT Music app.

      -
    7. How to uninstall YT Music Mod APK?
    8. -

      To uninstall YT Music Mod APK from your device, you need to follow these steps:

      -
        -
      1. Go to Settings > Apps > YT Music Mod APK.
      2. -
      3. Tap on Uninstall and confirm your choice.
      4. -
      5. Clear the cache and data of your device by going to Settings > Storage > Cached Data.
      6. -
      7. Restart your device and check if the app is completely removed.
      8. -
      -
    9. How to contact YT Music Mod APK developers?
    10. -

      There is no official way to contact the developers of YT Music Mod APK, as they are not affiliated with YT Music or Google. However, you may try to find their contact information on the website where you downloaded the app or on their social media accounts if they have any.

      -
    11. How to report a problem with YT Music Mod APK?
    12. -

      There is no official way to report a problem with YT Music Mod APK, as it is not supported or updated by YT Music or Google. However, you may try to leave a comment or feedback on the website where you downloaded the app or on their social media accounts if they have any.

      -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/test_time_aug.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/test_time_aug.py deleted file mode 100644 index fb781d928ed71aceb1abcaef44d3889c00d2261e..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,133 +0,0 @@ -import warnings - -import annotator.mmpkg.mmcv as mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug(object): - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. code-block:: - - img_scale=(2048, 1024), - img_ratios=[0.5, 1.0], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1024, 512), (1024, 512), (2048, 1024), (2048, 1024)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (None | tuple | list[tuple]): Images scales for resizing. - img_ratios (float | list[float]): Image ratios for resizing - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal" and "vertical". If flip_direction is list, - multiple flip augmentations will be applied. - It has no effect when flip == False. Default: "horizontal". - """ - - def __init__(self, - transforms, - img_scale, - img_ratios=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - if img_ratios is not None: - img_ratios = img_ratios if isinstance(img_ratios, - list) else [img_ratios] - assert mmcv.is_list_of(img_ratios, float) - if img_scale is None: - # mode 1: given img_scale=None and a range of image ratio - self.img_scale = None - assert mmcv.is_list_of(img_ratios, float) - elif isinstance(img_scale, tuple) and mmcv.is_list_of( - img_ratios, float): - assert len(img_scale) == 2 - # mode 2: given a scale and a range of image ratio - self.img_scale = [(int(img_scale[0] * ratio), - int(img_scale[1] * ratio)) - for ratio in img_ratios] - else: - # mode 3: given multiple scales - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) or self.img_scale is None - self.flip = flip - self.img_ratios = img_ratios - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. 
- - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. - """ - - aug_data = [] - if self.img_scale is None and mmcv.is_list_of(self.img_ratios, float): - h, w = results['img'].shape[:2] - img_scale = [(int(w * ratio), int(h * ratio)) - for ratio in self.img_ratios] - else: - img_scale = self.img_scale - flip_aug = [False, True] if self.flip else [False] - for scale in img_scale: - for flip in flip_aug: - for direction in self.flip_direction: - _results = results.copy() - _results['scale'] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip})' - repr_str += f'flip_direction={self.flip_direction}' - return repr_str diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/necks/fpn.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/necks/fpn.py deleted file mode 100644 index a53b2a69500f8c2edb835abc3ff0ccc2173d1fb1..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/necks/fpn.py +++ /dev/null @@ -1,212 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule, xavier_init - -from ..builder import NECKS - - -@NECKS.register_module() -class FPN(nn.Module): - """Feature Pyramid Network. - - This is an implementation of - Feature Pyramid Networks for Object - Detection (https://arxiv.org/abs/1612.03144) - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool | str): If bool, it decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - If str, it specifies the source feature map of the extra convs. - Only the following options are allowed - - - 'on_input': Last feat map of neck inputs (i.e. backbone feature). - - 'on_lateral': Last feature map after lateral convs. - - 'on_output': The last output feature map after fpn convs. - extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs - on the original feature from the backbone. If True, - it is equivalent to `add_extra_convs='on_input'`. If False, it is - equivalent to set `add_extra_convs='on_output'`. Default to True. - relu_before_extra_convs (bool): Whether to apply relu before the extra - conv. Default: False. - no_norm_on_lateral (bool): Whether to apply norm on lateral. - Default: False. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. 
- act_cfg (str): Config dict for activation layer in ConvModule. - Default: None. - upsample_cfg (dict): Config dict for interpolate layer. - Default: `dict(mode='nearest')` - - Example: - >>> import torch - >>> in_channels = [2, 3, 5, 7] - >>> scales = [340, 170, 84, 43] - >>> inputs = [torch.rand(1, c, s, s) - ... for c, s in zip(in_channels, scales)] - >>> self = FPN(in_channels, 11, len(in_channels)).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 11, 340, 340]) - outputs[1].shape = torch.Size([1, 11, 170, 170]) - outputs[2].shape = torch.Size([1, 11, 84, 84]) - outputs[3].shape = torch.Size([1, 11, 43, 43]) - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - add_extra_convs=False, - extra_convs_on_inputs=False, - relu_before_extra_convs=False, - no_norm_on_lateral=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - upsample_cfg=dict(mode='nearest')): - super(FPN, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.relu_before_extra_convs = relu_before_extra_convs - self.no_norm_on_lateral = no_norm_on_lateral - self.fp16_enabled = False - self.upsample_cfg = upsample_cfg.copy() - - if end_level == -1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level < inputs, no extra level is allowed - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - assert num_outs == end_level - start_level - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - assert isinstance(add_extra_convs, (str, bool)) - if isinstance(add_extra_convs, str): - # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output' - assert add_extra_convs in ('on_input', 'on_lateral', 'on_output') - elif add_extra_convs: # True - if extra_convs_on_inputs: - # For compatibility with previous release - # TODO: deprecate `extra_convs_on_inputs` - self.add_extra_convs = 'on_input' - else: - self.add_extra_convs = 'on_output' - - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg if not self.no_norm_on_lateral else None, - act_cfg=act_cfg, - inplace=False) - fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - # add extra conv layers (e.g., RetinaNet) - extra_levels = num_outs - self.backbone_end_level + self.start_level - if self.add_extra_convs and extra_levels >= 1: - for i in range(extra_levels): - if i == 0 and self.add_extra_convs == 'on_input': - in_channels = self.in_channels[self.backbone_end_level - 1] - else: - in_channels = out_channels - extra_fpn_conv = ConvModule( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - self.fpn_convs.append(extra_fpn_conv) - - # default init_weights for conv(msra) and norm in ConvModule - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, 
distribution='uniform') - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - # In some cases, fixing `scale factor` (e.g. 2) is preferred, but - # it cannot co-exist with `size` in `F.interpolate`. - if 'scale_factor' in self.upsample_cfg: - laterals[i - 1] += F.interpolate(laterals[i], - **self.upsample_cfg) - else: - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] += F.interpolate( - laterals[i], size=prev_shape, **self.upsample_cfg) - - # build outputs - # part 1: from original levels - outs = [ - self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels) - ] - # part 2: add extra levels - if self.num_outs > len(outs): - # use max pool to get more levels on top of outputs - # (e.g., Faster R-CNN, Mask R-CNN) - if not self.add_extra_convs: - for i in range(self.num_outs - used_backbone_levels): - outs.append(F.max_pool2d(outs[-1], 1, stride=2)) - # add conv layers on top of original feature maps (RetinaNet) - else: - if self.add_extra_convs == 'on_input': - extra_source = inputs[self.backbone_end_level - 1] - elif self.add_extra_convs == 'on_lateral': - extra_source = laterals[-1] - elif self.add_extra_convs == 'on_output': - extra_source = outs[-1] - else: - raise NotImplementedError - outs.append(self.fpn_convs[used_backbone_levels](extra_source)) - for i in range(used_backbone_levels + 1, self.num_outs): - if self.relu_before_extra_convs: - outs.append(self.fpn_convs[i](F.relu(outs[-1]))) - else: - outs.append(self.fpn_convs[i](outs[-1])) - return tuple(outs) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/beit.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/beit.py deleted file mode 100644 index 7a24e02cd2b979844bf638b46ac60949ee9ce691..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/beit.py +++ /dev/null @@ -1,196 +0,0 @@ -import timm -import torch -import types - -import numpy as np -import torch.nn.functional as F - -from .utils import forward_adapted_unflatten, make_backbone_default -from timm.models.beit import gen_relative_position_index -from torch.utils.checkpoint import checkpoint -from typing import Optional - - -def forward_beit(pretrained, x): - return forward_adapted_unflatten(pretrained, x, "forward_features") - - -def patch_embed_forward(self, x): - """ - Modification of timm.models.layers.patch_embed.py: PatchEmbed.forward to support arbitrary window sizes. - """ - x = self.proj(x) - if self.flatten: - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - return x - - -def _get_rel_pos_bias(self, window_size): - """ - Modification of timm.models.beit.py: Attention._get_rel_pos_bias to support arbitrary window sizes. 
- """ - old_height = 2 * self.window_size[0] - 1 - old_width = 2 * self.window_size[1] - 1 - - new_height = 2 * window_size[0] - 1 - new_width = 2 * window_size[1] - 1 - - old_relative_position_bias_table = self.relative_position_bias_table - - old_num_relative_distance = self.num_relative_distance - new_num_relative_distance = new_height * new_width + 3 - - old_sub_table = old_relative_position_bias_table[:old_num_relative_distance - 3] - - old_sub_table = old_sub_table.reshape(1, old_width, old_height, -1).permute(0, 3, 1, 2) - new_sub_table = F.interpolate(old_sub_table, size=(new_height, new_width), mode="bilinear") - new_sub_table = new_sub_table.permute(0, 2, 3, 1).reshape(new_num_relative_distance - 3, -1) - - new_relative_position_bias_table = torch.cat( - [new_sub_table, old_relative_position_bias_table[old_num_relative_distance - 3:]]) - - key = str(window_size[1]) + "," + str(window_size[0]) - if key not in self.relative_position_indices.keys(): - self.relative_position_indices[key] = gen_relative_position_index(window_size) - - relative_position_bias = new_relative_position_bias_table[ - self.relative_position_indices[key].view(-1)].view( - window_size[0] * window_size[1] + 1, - window_size[0] * window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - return relative_position_bias.unsqueeze(0) - - -def attention_forward(self, x, resolution, shared_rel_pos_bias: Optional[torch.Tensor] = None): - """ - Modification of timm.models.beit.py: Attention.forward to support arbitrary window sizes. - """ - B, N, C = x.shape - - qkv_bias = torch.cat((self.q_bias, self.k_bias, self.v_bias)) if self.q_bias is not None else None - qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias) - qkv = qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) - q, k, v = qkv.unbind(0) # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - if self.relative_position_bias_table is not None: - window_size = tuple(np.array(resolution) // 16) - attn = attn + self._get_rel_pos_bias(window_size) - if shared_rel_pos_bias is not None: - attn = attn + shared_rel_pos_bias - - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, -1) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -def block_forward(self, x, resolution, shared_rel_pos_bias: Optional[torch.Tensor] = None): - """ - Modification of timm.models.beit.py: Block.forward to support arbitrary window sizes. - """ - if self.gamma_1 is None: - x = x + self.drop_path(self.attn(self.norm1(x), resolution, shared_rel_pos_bias=shared_rel_pos_bias)) - x = x + self.drop_path(self.mlp(self.norm2(x))) - else: - x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), resolution, - shared_rel_pos_bias=shared_rel_pos_bias)) - x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x))) - return x - - -def beit_forward_features(self, x): - """ - Modification of timm.models.beit.py: Beit.forward_features to support arbitrary window sizes. 
- """ - resolution = x.shape[2:] - - x = self.patch_embed(x) - x = torch.cat((self.cls_token.expand(x.shape[0], -1, -1), x), dim=1) - if self.pos_embed is not None: - x = x + self.pos_embed - x = self.pos_drop(x) - - rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None - for blk in self.blocks: - if self.grad_checkpointing and not torch.jit.is_scripting(): - x = checkpoint(blk, x, shared_rel_pos_bias=rel_pos_bias) - else: - x = blk(x, resolution, shared_rel_pos_bias=rel_pos_bias) - x = self.norm(x) - return x - - -def _make_beit_backbone( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[0, 4, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, - start_index_readout=1, -): - backbone = make_backbone_default(model, features, size, hooks, vit_features, use_readout, start_index, - start_index_readout) - - backbone.model.patch_embed.forward = types.MethodType(patch_embed_forward, backbone.model.patch_embed) - backbone.model.forward_features = types.MethodType(beit_forward_features, backbone.model) - - for block in backbone.model.blocks: - attn = block.attn - attn._get_rel_pos_bias = types.MethodType(_get_rel_pos_bias, attn) - attn.forward = types.MethodType(attention_forward, attn) - attn.relative_position_indices = {} - - block.forward = types.MethodType(block_forward, block) - - return backbone - - -def _make_pretrained_beitl16_512(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("beit_large_patch16_512", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks is None else hooks - - features = [256, 512, 1024, 1024] - - return _make_beit_backbone( - model, - features=features, - size=[512, 512], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_beitl16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("beit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks is None else hooks - return _make_beit_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_beitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("beit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks is None else hooks - return _make_beit_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - ) diff --git a/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/app.py b/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/app.py deleted file mode 100644 index a461703287a9bda9c93cfdfbb94d4c3cf90aaba9..0000000000000000000000000000000000000000 --- a/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/meta-llama/Llama-2-70b-chat-hf").launch() \ No newline at end of file diff --git a/spaces/danterivers/music-generation-samples/tests/data/test_audio_utils.py b/spaces/danterivers/music-generation-samples/tests/data/test_audio_utils.py deleted file mode 100644 index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/tests/data/test_audio_utils.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import julius -import torch -import pytest - -from audiocraft.data.audio_utils import ( - _clip_wav, - convert_audio_channels, - convert_audio, - normalize_audio -) -from ..common_utils import get_batch_white_noise - - -class TestConvertAudioChannels: - - def test_convert_audio_channels_downmix(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=2) - assert list(mixed.shape) == [b, 2, t] - - def test_convert_audio_channels_nochange(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=c) - assert list(mixed.shape) == list(audio.shape) - - def test_convert_audio_channels_upmix(self): - b, c, t = 2, 1, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=3) - assert list(mixed.shape) == [b, 3, t] - - def test_convert_audio_channels_upmix_error(self): - b, c, t = 2, 2, 100 - audio = get_batch_white_noise(b, c, t) - with pytest.raises(ValueError): - convert_audio_channels(audio, channels=3) - - -class TestConvertAudio: - - def test_convert_audio_channels_downmix(self): - b, c, dur = 2, 3, 4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2) - assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]] - - def test_convert_audio_channels_upmix(self): - b, c, dur = 2, 1, 4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3) - assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]] - - def test_convert_audio_upsample(self): - b, c, dur = 2, 1, 4. - sr = 2 - new_sr = 3 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - def test_convert_audio_resample(self): - b, c, dur = 2, 1, 4. - sr = 3 - new_sr = 2 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - -class TestNormalizeAudio: - - def test_clip_wav(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - _clip_wav(audio) - assert audio.abs().max() <= 1 - - def test_normalize_audio_clip(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='clip') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_rms(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='rms') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_peak(self): - b, c, dur = 2, 1, 4. 
- sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='peak') - assert norm_audio.abs().max() <= 1 diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GifImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GifImagePlugin.py deleted file mode 100644 index cf2993e38920bdebf79c6342875c2898e174ef6b..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GifImagePlugin.py +++ /dev/null @@ -1,1064 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# GIF file handling -# -# History: -# 1995-09-01 fl Created -# 1996-12-14 fl Added interlace support -# 1996-12-30 fl Added animation support -# 1997-01-05 fl Added write support, fixed local colour map bug -# 1997-02-23 fl Make sure to load raster data in getdata() -# 1997-07-05 fl Support external decoder (0.4) -# 1998-07-09 fl Handle all modes when saving (0.5) -# 1998-07-15 fl Renamed offset attribute to avoid name clash -# 2001-04-16 fl Added rewind support (seek to frame 0) (0.6) -# 2001-04-17 fl Added palette optimization (0.7) -# 2002-06-06 fl Added transparency support for save (0.8) -# 2004-02-24 fl Disable interlacing for small images -# -# Copyright (c) 1997-2004 by Secret Labs AB -# Copyright (c) 1995-2004 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import itertools -import math -import os -import subprocess -from enum import IntEnum - -from . import Image, ImageChops, ImageFile, ImagePalette, ImageSequence -from ._binary import i16le as i16 -from ._binary import o8 -from ._binary import o16le as o16 - - -class LoadingStrategy(IntEnum): - """.. versionadded:: 9.1.0""" - - RGB_AFTER_FIRST = 0 - RGB_AFTER_DIFFERENT_PALETTE_ONLY = 1 - RGB_ALWAYS = 2 - - -#: .. versionadded:: 9.1.0 -LOADING_STRATEGY = LoadingStrategy.RGB_AFTER_FIRST - -# -------------------------------------------------------------------- -# Identify/read GIF files - - -def _accept(prefix): - return prefix[:6] in [b"GIF87a", b"GIF89a"] - - -## -# Image plugin for GIF images. This plugin supports both GIF87 and -# GIF89 images. 
- - -class GifImageFile(ImageFile.ImageFile): - format = "GIF" - format_description = "Compuserve GIF" - _close_exclusive_fp_after_loading = False - - global_palette = None - - def data(self): - s = self.fp.read(1) - if s and s[0]: - return self.fp.read(s[0]) - return None - - def _is_palette_needed(self, p): - for i in range(0, len(p), 3): - if not (i // 3 == p[i] == p[i + 1] == p[i + 2]): - return True - return False - - def _open(self): - # Screen - s = self.fp.read(13) - if not _accept(s): - msg = "not a GIF file" - raise SyntaxError(msg) - - self.info["version"] = s[:6] - self._size = i16(s, 6), i16(s, 8) - self.tile = [] - flags = s[10] - bits = (flags & 7) + 1 - - if flags & 128: - # get global palette - self.info["background"] = s[11] - # check if palette contains colour indices - p = self.fp.read(3 << bits) - if self._is_palette_needed(p): - p = ImagePalette.raw("RGB", p) - self.global_palette = self.palette = p - - self._fp = self.fp # FIXME: hack - self.__rewind = self.fp.tell() - self._n_frames = None - self._is_animated = None - self._seek(0) # get ready to read first frame - - @property - def n_frames(self): - if self._n_frames is None: - current = self.tell() - try: - while True: - self._seek(self.tell() + 1, False) - except EOFError: - self._n_frames = self.tell() + 1 - self.seek(current) - return self._n_frames - - @property - def is_animated(self): - if self._is_animated is None: - if self._n_frames is not None: - self._is_animated = self._n_frames != 1 - else: - current = self.tell() - if current: - self._is_animated = True - else: - try: - self._seek(1, False) - self._is_animated = True - except EOFError: - self._is_animated = False - - self.seek(current) - return self._is_animated - - def seek(self, frame): - if not self._seek_check(frame): - return - if frame < self.__frame: - self.im = None - self._seek(0) - - last_frame = self.__frame - for f in range(self.__frame + 1, frame + 1): - try: - self._seek(f) - except EOFError as e: - self.seek(last_frame) - msg = "no more images in GIF file" - raise EOFError(msg) from e - - def _seek(self, frame, update_image=True): - if frame == 0: - # rewind - self.__offset = 0 - self.dispose = None - self.__frame = -1 - self._fp.seek(self.__rewind) - self.disposal_method = 0 - if "comment" in self.info: - del self.info["comment"] - else: - # ensure that the previous frame was loaded - if self.tile and update_image: - self.load() - - if frame != self.__frame + 1: - msg = f"cannot seek to frame {frame}" - raise ValueError(msg) - - self.fp = self._fp - if self.__offset: - # backup to last frame - self.fp.seek(self.__offset) - while self.data(): - pass - self.__offset = 0 - - s = self.fp.read(1) - if not s or s == b";": - raise EOFError - - palette = None - - info = {} - frame_transparency = None - interlace = None - frame_dispose_extent = None - while True: - if not s: - s = self.fp.read(1) - if not s or s == b";": - break - - elif s == b"!": - # - # extensions - # - s = self.fp.read(1) - block = self.data() - if s[0] == 249: - # - # graphic control extension - # - flags = block[0] - if flags & 1: - frame_transparency = block[3] - info["duration"] = i16(block, 1) * 10 - - # disposal method - find the value of bits 4 - 6 - dispose_bits = 0b00011100 & flags - dispose_bits = dispose_bits >> 2 - if dispose_bits: - # only set the dispose if it is not - # unspecified. 
I'm not sure if this is - # correct, but it seems to prevent the last - # frame from looking odd for some animations - self.disposal_method = dispose_bits - elif s[0] == 254: - # - # comment extension - # - comment = b"" - - # Read this comment block - while block: - comment += block - block = self.data() - - if "comment" in info: - # If multiple comment blocks in frame, separate with \n - info["comment"] += b"\n" + comment - else: - info["comment"] = comment - s = None - continue - elif s[0] == 255 and frame == 0: - # - # application extension - # - info["extension"] = block, self.fp.tell() - if block[:11] == b"NETSCAPE2.0": - block = self.data() - if len(block) >= 3 and block[0] == 1: - self.info["loop"] = i16(block, 1) - while self.data(): - pass - - elif s == b",": - # - # local image - # - s = self.fp.read(9) - - # extent - x0, y0 = i16(s, 0), i16(s, 2) - x1, y1 = x0 + i16(s, 4), y0 + i16(s, 6) - if (x1 > self.size[0] or y1 > self.size[1]) and update_image: - self._size = max(x1, self.size[0]), max(y1, self.size[1]) - Image._decompression_bomb_check(self._size) - frame_dispose_extent = x0, y0, x1, y1 - flags = s[8] - - interlace = (flags & 64) != 0 - - if flags & 128: - bits = (flags & 7) + 1 - p = self.fp.read(3 << bits) - if self._is_palette_needed(p): - palette = ImagePalette.raw("RGB", p) - else: - palette = False - - # image data - bits = self.fp.read(1)[0] - self.__offset = self.fp.tell() - break - - else: - pass - # raise OSError, "illegal GIF tag `%x`" % s[0] - s = None - - if interlace is None: - # self._fp = None - raise EOFError - - self.__frame = frame - if not update_image: - return - - self.tile = [] - - if self.dispose: - self.im.paste(self.dispose, self.dispose_extent) - - self._frame_palette = palette if palette is not None else self.global_palette - self._frame_transparency = frame_transparency - if frame == 0: - if self._frame_palette: - if LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS: - self.mode = "RGBA" if frame_transparency is not None else "RGB" - else: - self.mode = "P" - else: - self.mode = "L" - - if not palette and self.global_palette: - from copy import copy - - palette = copy(self.global_palette) - self.palette = palette - else: - if self.mode == "P": - if ( - LOADING_STRATEGY != LoadingStrategy.RGB_AFTER_DIFFERENT_PALETTE_ONLY - or palette - ): - self.pyaccess = None - if "transparency" in self.info: - self.im.putpalettealpha(self.info["transparency"], 0) - self.im = self.im.convert("RGBA", Image.Dither.FLOYDSTEINBERG) - self.mode = "RGBA" - del self.info["transparency"] - else: - self.mode = "RGB" - self.im = self.im.convert("RGB", Image.Dither.FLOYDSTEINBERG) - - def _rgb(color): - if self._frame_palette: - color = tuple(self._frame_palette.palette[color * 3 : color * 3 + 3]) - else: - color = (color, color, color) - return color - - self.dispose_extent = frame_dispose_extent - try: - if self.disposal_method < 2: - # do not dispose or none specified - self.dispose = None - elif self.disposal_method == 2: - # replace with background colour - - # only dispose the extent in this frame - x0, y0, x1, y1 = self.dispose_extent - dispose_size = (x1 - x0, y1 - y0) - - Image._decompression_bomb_check(dispose_size) - - # by convention, attempt to use transparency first - dispose_mode = "P" - color = self.info.get("transparency", frame_transparency) - if color is not None: - if self.mode in ("RGB", "RGBA"): - dispose_mode = "RGBA" - color = _rgb(color) + (0,) - else: - color = self.info.get("background", 0) - if self.mode in ("RGB", "RGBA"): - dispose_mode = 
"RGB" - color = _rgb(color) - self.dispose = Image.core.fill(dispose_mode, dispose_size, color) - else: - # replace with previous contents - if self.im is not None: - # only dispose the extent in this frame - self.dispose = self._crop(self.im, self.dispose_extent) - elif frame_transparency is not None: - x0, y0, x1, y1 = self.dispose_extent - dispose_size = (x1 - x0, y1 - y0) - - Image._decompression_bomb_check(dispose_size) - dispose_mode = "P" - color = frame_transparency - if self.mode in ("RGB", "RGBA"): - dispose_mode = "RGBA" - color = _rgb(frame_transparency) + (0,) - self.dispose = Image.core.fill(dispose_mode, dispose_size, color) - except AttributeError: - pass - - if interlace is not None: - transparency = -1 - if frame_transparency is not None: - if frame == 0: - if LOADING_STRATEGY != LoadingStrategy.RGB_ALWAYS: - self.info["transparency"] = frame_transparency - elif self.mode not in ("RGB", "RGBA"): - transparency = frame_transparency - self.tile = [ - ( - "gif", - (x0, y0, x1, y1), - self.__offset, - (bits, interlace, transparency), - ) - ] - - if info.get("comment"): - self.info["comment"] = info["comment"] - for k in ["duration", "extension"]: - if k in info: - self.info[k] = info[k] - elif k in self.info: - del self.info[k] - - def load_prepare(self): - temp_mode = "P" if self._frame_palette else "L" - self._prev_im = None - if self.__frame == 0: - if self._frame_transparency is not None: - self.im = Image.core.fill( - temp_mode, self.size, self._frame_transparency - ) - elif self.mode in ("RGB", "RGBA"): - self._prev_im = self.im - if self._frame_palette: - self.im = Image.core.fill("P", self.size, self._frame_transparency or 0) - self.im.putpalette(*self._frame_palette.getdata()) - else: - self.im = None - self.mode = temp_mode - self._frame_palette = None - - super().load_prepare() - - def load_end(self): - if self.__frame == 0: - if self.mode == "P" and LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS: - if self._frame_transparency is not None: - self.im.putpalettealpha(self._frame_transparency, 0) - self.mode = "RGBA" - else: - self.mode = "RGB" - self.im = self.im.convert(self.mode, Image.Dither.FLOYDSTEINBERG) - return - if not self._prev_im: - return - if self._frame_transparency is not None: - self.im.putpalettealpha(self._frame_transparency, 0) - frame_im = self.im.convert("RGBA") - else: - frame_im = self.im.convert("RGB") - frame_im = self._crop(frame_im, self.dispose_extent) - - self.im = self._prev_im - self.mode = self.im.mode - if frame_im.mode == "RGBA": - self.im.paste(frame_im, self.dispose_extent, frame_im) - else: - self.im.paste(frame_im, self.dispose_extent) - - def tell(self): - return self.__frame - - -# -------------------------------------------------------------------- -# Write GIF files - - -RAWMODE = {"1": "L", "L": "L", "P": "P"} - - -def _normalize_mode(im): - """ - Takes an image (or frame), returns an image in a mode that is appropriate - for saving in a Gif. - - It may return the original image, or it may return an image converted to - palette or 'L' mode. - - :param im: Image object - :returns: Image object - """ - if im.mode in RAWMODE: - im.load() - return im - if Image.getmodebase(im.mode) == "RGB": - im = im.convert("P", palette=Image.Palette.ADAPTIVE) - if im.palette.mode == "RGBA": - for rgba in im.palette.colors: - if rgba[3] == 0: - im.info["transparency"] = im.palette.colors[rgba] - break - return im - return im.convert("L") - - -def _normalize_palette(im, palette, info): - """ - Normalizes the palette for image. 
- - Sets the palette to the incoming palette, if provided. - - Ensures that there's a palette for L mode images - - Optimizes the palette if necessary/desired. - - :param im: Image object - :param palette: bytes object containing the source palette, or .... - :param info: encoderinfo - :returns: Image object - """ - source_palette = None - if palette: - # a bytes palette - if isinstance(palette, (bytes, bytearray, list)): - source_palette = bytearray(palette[:768]) - if isinstance(palette, ImagePalette.ImagePalette): - source_palette = bytearray(palette.palette) - - if im.mode == "P": - if not source_palette: - source_palette = im.im.getpalette("RGB")[:768] - else: # L-mode - if not source_palette: - source_palette = bytearray(i // 3 for i in range(768)) - im.palette = ImagePalette.ImagePalette("RGB", palette=source_palette) - - if palette: - used_palette_colors = [] - for i in range(0, len(source_palette), 3): - source_color = tuple(source_palette[i : i + 3]) - index = im.palette.colors.get(source_color) - if index in used_palette_colors: - index = None - used_palette_colors.append(index) - for i, index in enumerate(used_palette_colors): - if index is None: - for j in range(len(used_palette_colors)): - if j not in used_palette_colors: - used_palette_colors[i] = j - break - im = im.remap_palette(used_palette_colors) - else: - used_palette_colors = _get_optimize(im, info) - if used_palette_colors is not None: - return im.remap_palette(used_palette_colors, source_palette) - - im.palette.palette = source_palette - return im - - -def _write_single_frame(im, fp, palette): - im_out = _normalize_mode(im) - for k, v in im_out.info.items(): - im.encoderinfo.setdefault(k, v) - im_out = _normalize_palette(im_out, palette, im.encoderinfo) - - for s in _get_global_header(im_out, im.encoderinfo): - fp.write(s) - - # local image header - flags = 0 - if get_interlace(im): - flags = flags | 64 - _write_local_header(fp, im, (0, 0), flags) - - im_out.encoderconfig = (8, get_interlace(im)) - ImageFile._save(im_out, fp, [("gif", (0, 0) + im.size, 0, RAWMODE[im_out.mode])]) - - fp.write(b"\0") # end of image data - - -def _getbbox(base_im, im_frame): - if _get_palette_bytes(im_frame) == _get_palette_bytes(base_im): - delta = ImageChops.subtract_modulo(im_frame, base_im) - else: - delta = ImageChops.subtract_modulo( - im_frame.convert("RGBA"), base_im.convert("RGBA") - ) - return delta.getbbox(alpha_only=False) - - -def _write_multiple_frames(im, fp, palette): - duration = im.encoderinfo.get("duration") - disposal = im.encoderinfo.get("disposal", im.info.get("disposal")) - - im_frames = [] - frame_count = 0 - background_im = None - for imSequence in itertools.chain([im], im.encoderinfo.get("append_images", [])): - for im_frame in ImageSequence.Iterator(imSequence): - # a copy is required here since seek can still mutate the image - im_frame = _normalize_mode(im_frame.copy()) - if frame_count == 0: - for k, v in im_frame.info.items(): - if k == "transparency": - continue - im.encoderinfo.setdefault(k, v) - - encoderinfo = im.encoderinfo.copy() - im_frame = _normalize_palette(im_frame, palette, encoderinfo) - if "transparency" in im_frame.info: - encoderinfo.setdefault("transparency", im_frame.info["transparency"]) - if isinstance(duration, (list, tuple)): - encoderinfo["duration"] = duration[frame_count] - elif duration is None and "duration" in im_frame.info: - encoderinfo["duration"] = im_frame.info["duration"] - if isinstance(disposal, (list, tuple)): - encoderinfo["disposal"] = disposal[frame_count] - 
frame_count += 1 - - if im_frames: - # delta frame - previous = im_frames[-1] - bbox = _getbbox(previous["im"], im_frame) - if not bbox: - # This frame is identical to the previous frame - if encoderinfo.get("duration"): - previous["encoderinfo"]["duration"] += encoderinfo["duration"] - continue - if encoderinfo.get("disposal") == 2: - if background_im is None: - color = im.encoderinfo.get( - "transparency", im.info.get("transparency", (0, 0, 0)) - ) - background = _get_background(im_frame, color) - background_im = Image.new("P", im_frame.size, background) - background_im.putpalette(im_frames[0]["im"].palette) - bbox = _getbbox(background_im, im_frame) - else: - bbox = None - im_frames.append({"im": im_frame, "bbox": bbox, "encoderinfo": encoderinfo}) - - if len(im_frames) > 1: - for frame_data in im_frames: - im_frame = frame_data["im"] - if not frame_data["bbox"]: - # global header - for s in _get_global_header(im_frame, frame_data["encoderinfo"]): - fp.write(s) - offset = (0, 0) - else: - # compress difference - if not palette: - frame_data["encoderinfo"]["include_color_table"] = True - - im_frame = im_frame.crop(frame_data["bbox"]) - offset = frame_data["bbox"][:2] - _write_frame_data(fp, im_frame, offset, frame_data["encoderinfo"]) - return True - elif "duration" in im.encoderinfo and isinstance( - im.encoderinfo["duration"], (list, tuple) - ): - # Since multiple frames will not be written, add together the frame durations - im.encoderinfo["duration"] = sum(im.encoderinfo["duration"]) - - -def _save_all(im, fp, filename): - _save(im, fp, filename, save_all=True) - - -def _save(im, fp, filename, save_all=False): - # header - if "palette" in im.encoderinfo or "palette" in im.info: - palette = im.encoderinfo.get("palette", im.info.get("palette")) - else: - palette = None - im.encoderinfo["optimize"] = im.encoderinfo.get("optimize", True) - - if not save_all or not _write_multiple_frames(im, fp, palette): - _write_single_frame(im, fp, palette) - - fp.write(b";") # end of file - - if hasattr(fp, "flush"): - fp.flush() - - -def get_interlace(im): - interlace = im.encoderinfo.get("interlace", 1) - - # workaround for @PIL153 - if min(im.size) < 16: - interlace = 0 - - return interlace - - -def _write_local_header(fp, im, offset, flags): - transparent_color_exists = False - try: - if "transparency" in im.encoderinfo: - transparency = im.encoderinfo["transparency"] - else: - transparency = im.info["transparency"] - transparency = int(transparency) - except (KeyError, ValueError): - pass - else: - # optimize the block away if transparent color is not used - transparent_color_exists = True - - used_palette_colors = _get_optimize(im, im.encoderinfo) - if used_palette_colors is not None: - # adjust the transparency index after optimize - try: - transparency = used_palette_colors.index(transparency) - except ValueError: - transparent_color_exists = False - - if "duration" in im.encoderinfo: - duration = int(im.encoderinfo["duration"] / 10) - else: - duration = 0 - - disposal = int(im.encoderinfo.get("disposal", 0)) - - if transparent_color_exists or duration != 0 or disposal: - packed_flag = 1 if transparent_color_exists else 0 - packed_flag |= disposal << 2 - if not transparent_color_exists: - transparency = 0 - - fp.write( - b"!" 
- + o8(249) # extension intro - + o8(4) # length - + o8(packed_flag) # packed fields - + o16(duration) # duration - + o8(transparency) # transparency index - + o8(0) - ) - - include_color_table = im.encoderinfo.get("include_color_table") - if include_color_table: - palette_bytes = _get_palette_bytes(im) - color_table_size = _get_color_table_size(palette_bytes) - if color_table_size: - flags = flags | 128 # local color table flag - flags = flags | color_table_size - - fp.write( - b"," - + o16(offset[0]) # offset - + o16(offset[1]) - + o16(im.size[0]) # size - + o16(im.size[1]) - + o8(flags) # flags - ) - if include_color_table and color_table_size: - fp.write(_get_header_palette(palette_bytes)) - fp.write(o8(8)) # bits - - -def _save_netpbm(im, fp, filename): - # Unused by default. - # To use, uncomment the register_save call at the end of the file. - # - # If you need real GIF compression and/or RGB quantization, you - # can use the external NETPBM/PBMPLUS utilities. See comments - # below for information on how to enable this. - tempfile = im._dump() - - try: - with open(filename, "wb") as f: - if im.mode != "RGB": - subprocess.check_call( - ["ppmtogif", tempfile], stdout=f, stderr=subprocess.DEVNULL - ) - else: - # Pipe ppmquant output into ppmtogif - # "ppmquant 256 %s | ppmtogif > %s" % (tempfile, filename) - quant_cmd = ["ppmquant", "256", tempfile] - togif_cmd = ["ppmtogif"] - quant_proc = subprocess.Popen( - quant_cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL - ) - togif_proc = subprocess.Popen( - togif_cmd, - stdin=quant_proc.stdout, - stdout=f, - stderr=subprocess.DEVNULL, - ) - - # Allow ppmquant to receive SIGPIPE if ppmtogif exits - quant_proc.stdout.close() - - retcode = quant_proc.wait() - if retcode: - raise subprocess.CalledProcessError(retcode, quant_cmd) - - retcode = togif_proc.wait() - if retcode: - raise subprocess.CalledProcessError(retcode, togif_cmd) - finally: - try: - os.unlink(tempfile) - except OSError: - pass - - -# Force optimization so that we can test performance against -# cases where it took lots of memory and time previously. -_FORCE_OPTIMIZE = False - - -def _get_optimize(im, info): - """ - Palette optimization is a potentially expensive operation. - - This function determines if the palette should be optimized using - some heuristics, then returns the list of palette entries in use. - - :param im: Image object - :param info: encoderinfo - :returns: list of indexes of palette entries in use, or None - """ - if im.mode in ("P", "L") and info and info.get("optimize", 0): - # Potentially expensive operation. - - # The palette saves 3 bytes per color not used, but palette - # lengths are restricted to 3*(2**N) bytes. Max saving would - # be 768 -> 6 bytes if we went all the way down to 2 colors. - # * If we're over 128 colors, we can't save any space. - # * If there aren't any holes, it's not worth collapsing. - # * If we have a 'large' image, the palette is in the noise. 
- - # create the new palette if not every color is used - optimise = _FORCE_OPTIMIZE or im.mode == "L" - if optimise or im.width * im.height < 512 * 512: - # check which colors are used - used_palette_colors = [] - for i, count in enumerate(im.histogram()): - if count: - used_palette_colors.append(i) - - if optimise or max(used_palette_colors) >= len(used_palette_colors): - return used_palette_colors - - num_palette_colors = len(im.palette.palette) // Image.getmodebands( - im.palette.mode - ) - current_palette_size = 1 << (num_palette_colors - 1).bit_length() - if ( - # check that the palette would become smaller when saved - len(used_palette_colors) <= current_palette_size // 2 - # check that the palette is not already the smallest possible size - and current_palette_size > 2 - ): - return used_palette_colors - - -def _get_color_table_size(palette_bytes): - # calculate the palette size for the header - if not palette_bytes: - return 0 - elif len(palette_bytes) < 9: - return 1 - else: - return math.ceil(math.log(len(palette_bytes) // 3, 2)) - 1 - - -def _get_header_palette(palette_bytes): - """ - Returns the palette, null padded to the next power of 2 (*3) bytes - suitable for direct inclusion in the GIF header - - :param palette_bytes: Unpadded palette bytes, in RGBRGB form - :returns: Null padded palette - """ - color_table_size = _get_color_table_size(palette_bytes) - - # add the missing amount of bytes - # the palette has to be 2< 0: - palette_bytes += o8(0) * 3 * actual_target_size_diff - return palette_bytes - - -def _get_palette_bytes(im): - """ - Gets the palette for inclusion in the gif header - - :param im: Image object - :returns: Bytes, len<=768 suitable for inclusion in gif header - """ - return im.palette.palette if im.palette else b"" - - -def _get_background(im, info_background): - background = 0 - if info_background: - if isinstance(info_background, tuple): - # WebPImagePlugin stores an RGBA value in info["background"] - # So it must be converted to the same format as GifImagePlugin's - # info["background"] - a global color table index - try: - background = im.palette.getcolor(info_background, im) - except ValueError as e: - if str(e) not in ( - # If all 256 colors are in use, - # then there is no need for the background color - "cannot allocate more than 256 colors", - # Ignore non-opaque WebP background - "cannot add non-opaque RGBA color to RGB palette", - ): - raise - else: - background = info_background - return background - - -def _get_global_header(im, info): - """Return a list of strings representing a GIF header""" - - # Header Block - # https://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp - - version = b"87a" - if im.info.get("version") == b"89a" or ( - info - and ( - "transparency" in info - or "loop" in info - or info.get("duration") - or info.get("comment") - ) - ): - version = b"89a" - - background = _get_background(im, info.get("background")) - - palette_bytes = _get_palette_bytes(im) - color_table_size = _get_color_table_size(palette_bytes) - - header = [ - b"GIF" # signature - + version # version - + o16(im.size[0]) # canvas width - + o16(im.size[1]), # canvas height - # Logical Screen Descriptor - # size of global color table + global color table flag - o8(color_table_size + 128), # packed fields - # background + reserved/aspect - o8(background) + o8(0), - # Global Color Table - _get_header_palette(palette_bytes), - ] - if "loop" in info: - header.append( - b"!" 
- + o8(255) # extension intro - + o8(11) - + b"NETSCAPE2.0" - + o8(3) - + o8(1) - + o16(info["loop"]) # number of loops - + o8(0) - ) - if info.get("comment"): - comment_block = b"!" + o8(254) # extension intro - - comment = info["comment"] - if isinstance(comment, str): - comment = comment.encode() - for i in range(0, len(comment), 255): - subblock = comment[i : i + 255] - comment_block += o8(len(subblock)) + subblock - - comment_block += o8(0) - header.append(comment_block) - return header - - -def _write_frame_data(fp, im_frame, offset, params): - try: - im_frame.encoderinfo = params - - # local image header - _write_local_header(fp, im_frame, offset, 0) - - ImageFile._save( - im_frame, fp, [("gif", (0, 0) + im_frame.size, 0, RAWMODE[im_frame.mode])] - ) - - fp.write(b"\0") # end of image data - finally: - del im_frame.encoderinfo - - -# -------------------------------------------------------------------- -# Legacy GIF utilities - - -def getheader(im, palette=None, info=None): - """ - Legacy Method to get Gif data from image. - - Warning:: May modify image data. - - :param im: Image object - :param palette: bytes object containing the source palette, or .... - :param info: encoderinfo - :returns: tuple of(list of header items, optimized palette) - - """ - used_palette_colors = _get_optimize(im, info) - - if info is None: - info = {} - - if "background" not in info and "background" in im.info: - info["background"] = im.info["background"] - - im_mod = _normalize_palette(im, palette, info) - im.palette = im_mod.palette - im.im = im_mod.im - header = _get_global_header(im, info) - - return header, used_palette_colors - - -def getdata(im, offset=(0, 0), **params): - """ - Legacy Method - - Return a list of strings representing this image. - The first string is a local image header, the rest contains - encoded image data. - - To specify duration, add the time in milliseconds, - e.g. ``getdata(im_frame, duration=1000)`` - - :param im: Image object - :param offset: Tuple of (x, y) pixels. Defaults to (0, 0) - :param \\**params: e.g. 
duration or other encoder info parameters - :returns: List of bytes containing GIF encoded frame data - - """ - - class Collector: - data = [] - - def write(self, data): - self.data.append(data) - - im.load() # make sure raster data is available - - fp = Collector() - - _write_frame_data(fp, im, offset, params) - - return fp.data - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(GifImageFile.format, GifImageFile, _accept) -Image.register_save(GifImageFile.format, _save) -Image.register_save_all(GifImageFile.format, _save_all) -Image.register_extension(GifImageFile.format, ".gif") -Image.register_mime(GifImageFile.format, "image/gif") - -# -# Uncomment the following line if you wish to use NETPBM/PBMPLUS -# instead of the built-in "uncompressed" GIF encoder - -# Image.register_save(GifImageFile.format, _save_netpbm) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/fixedTools.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/fixedTools.py deleted file mode 100644 index 330042871c521231f2a396add543dd425783722b..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/fixedTools.py +++ /dev/null @@ -1,253 +0,0 @@ -""" -The `OpenType specification `_ -defines two fixed-point data types: - -``Fixed`` - A 32-bit signed fixed-point number with a 16 bit twos-complement - magnitude component and 16 fractional bits. -``F2DOT14`` - A 16-bit signed fixed-point number with a 2 bit twos-complement - magnitude component and 14 fractional bits. - -To support reading and writing data with these data types, this module provides -functions for converting between fixed-point, float and string representations. - -.. data:: MAX_F2DOT14 - - The maximum value that can still fit in an F2Dot14. (1.99993896484375) -""" - -from .roundTools import otRound, nearestMultipleShortestRepr -import logging - -log = logging.getLogger(__name__) - -__all__ = [ - "MAX_F2DOT14", - "fixedToFloat", - "floatToFixed", - "floatToFixedToFloat", - "floatToFixedToStr", - "fixedToStr", - "strToFixed", - "strToFixedToFloat", - "ensureVersionIsLong", - "versionToFixed", -] - - -MAX_F2DOT14 = 0x7FFF / (1 << 14) - - -def fixedToFloat(value, precisionBits): - """Converts a fixed-point number to a float given the number of - precision bits. - - Args: - value (int): Number in fixed-point format. - precisionBits (int): Number of precision bits. - - Returns: - Floating point value. - - Examples:: - - >>> import math - >>> f = fixedToFloat(-10139, precisionBits=14) - >>> math.isclose(f, -0.61883544921875) - True - """ - return value / (1 << precisionBits) - - -def floatToFixed(value, precisionBits): - """Converts a float to a fixed-point number given the number of - precision bits. - - Args: - value (float): Floating point value. - precisionBits (int): Number of precision bits. - - Returns: - int: Fixed-point representation. - - Examples:: - - >>> floatToFixed(-0.61883544921875, precisionBits=14) - -10139 - >>> floatToFixed(-0.61884, precisionBits=14) - -10139 - """ - return otRound(value * (1 << precisionBits)) - - -def floatToFixedToFloat(value, precisionBits): - """Converts a float to a fixed-point number and back again. - - By converting the float to fixed, rounding it, and converting it back - to float again, this returns a floating point values which is exactly - representable in fixed-point format. 
- - Note: this **is** equivalent to ``fixedToFloat(floatToFixed(value))``. - - Args: - value (float): The input floating point value. - precisionBits (int): Number of precision bits. - - Returns: - float: The transformed and rounded value. - - Examples:: - >>> import math - >>> f1 = -0.61884 - >>> f2 = floatToFixedToFloat(-0.61884, precisionBits=14) - >>> f1 != f2 - True - >>> math.isclose(f2, -0.61883544921875) - True - """ - scale = 1 << precisionBits - return otRound(value * scale) / scale - - -def fixedToStr(value, precisionBits): - """Converts a fixed-point number to a string representing a decimal float. - - This chooses the float that has the shortest decimal representation (the least - number of fractional decimal digits). - - For example, to convert a fixed-point number in a 2.14 format, use - ``precisionBits=14``:: - - >>> fixedToStr(-10139, precisionBits=14) - '-0.61884' - - This is pretty slow compared to the simple division used in ``fixedToFloat``. - Use sporadically when you need to serialize or print the fixed-point number in - a human-readable form. - It uses nearestMultipleShortestRepr under the hood. - - Args: - value (int): The fixed-point value to convert. - precisionBits (int): Number of precision bits, *up to a maximum of 16*. - - Returns: - str: A string representation of the value. - """ - scale = 1 << precisionBits - return nearestMultipleShortestRepr(value / scale, factor=1.0 / scale) - - -def strToFixed(string, precisionBits): - """Converts a string representing a decimal float to a fixed-point number. - - Args: - string (str): A string representing a decimal float. - precisionBits (int): Number of precision bits, *up to a maximum of 16*. - - Returns: - int: Fixed-point representation. - - Examples:: - - >>> ## to convert a float string to a 2.14 fixed-point number: - >>> strToFixed('-0.61884', precisionBits=14) - -10139 - """ - value = float(string) - return otRound(value * (1 << precisionBits)) - - -def strToFixedToFloat(string, precisionBits): - """Convert a string to a decimal float with fixed-point rounding. - - This first converts string to a float, then turns it into a fixed-point - number with ``precisionBits`` fractional binary digits, then back to a - float again. - - This is simply a shorthand for fixedToFloat(floatToFixed(float(s))). - - Args: - string (str): A string representing a decimal float. - precisionBits (int): Number of precision bits. - - Returns: - float: The transformed and rounded value. - - Examples:: - - >>> import math - >>> s = '-0.61884' - >>> bits = 14 - >>> f = strToFixedToFloat(s, precisionBits=bits) - >>> math.isclose(f, -0.61883544921875) - True - >>> f == fixedToFloat(floatToFixed(float(s), precisionBits=bits), precisionBits=bits) - True - """ - value = float(string) - scale = 1 << precisionBits - return otRound(value * scale) / scale - - -def floatToFixedToStr(value, precisionBits): - """Convert float to string with fixed-point rounding. - - This uses the shortest decimal representation (ie. the least - number of fractional decimal digits) to represent the equivalent - fixed-point number with ``precisionBits`` fractional binary digits. - It uses nearestMultipleShortestRepr under the hood. - - >>> floatToFixedToStr(-0.61883544921875, precisionBits=14) - '-0.61884' - - Args: - value (float): The float value to convert. - precisionBits (int): Number of precision bits, *up to a maximum of 16*. - - Returns: - str: A string representation of the value. 
- - """ - scale = 1 << precisionBits - return nearestMultipleShortestRepr(value, factor=1.0 / scale) - - -def ensureVersionIsLong(value): - """Ensure a table version is an unsigned long. - - OpenType table version numbers are expressed as a single unsigned long - comprising of an unsigned short major version and unsigned short minor - version. This function detects if the value to be used as a version number - looks too small (i.e. is less than ``0x10000``), and converts it to - fixed-point using :func:`floatToFixed` if so. - - Args: - value (Number): a candidate table version number. - - Returns: - int: A table version number, possibly corrected to fixed-point. - """ - if value < 0x10000: - newValue = floatToFixed(value, 16) - log.warning( - "Table version value is a float: %.4f; " "fix to use hex instead: 0x%08x", - value, - newValue, - ) - value = newValue - return value - - -def versionToFixed(value): - """Ensure a table version number is fixed-point. - - Args: - value (str): a candidate table version number. - - Returns: - int: A table version number, possibly corrected to fixed-point. - """ - value = int(value, 0) if value.startswith("0") else float(value) - value = ensureVersionIsLong(value) - return value diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__1.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__1.py deleted file mode 100644 index 57163d726c1a5e850eabe8ec72a44c9ec514b715..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__1.py +++ /dev/null @@ -1,164 +0,0 @@ -""" TSI{0,1,2,3,5} are private tables used by Microsoft Visual TrueType (VTT) -tool to store its hinting source data. - -TSI1 contains the text of the glyph programs in the form of low-level assembly -code, as well as the 'extra' programs 'fpgm', 'ppgm' (i.e. 'prep'), and 'cvt'. -""" -from . import DefaultTable -from fontTools.misc.loggingTools import LogMixin -from fontTools.misc.textTools import strjoin, tobytes, tostr - - -class table_T_S_I__1(LogMixin, DefaultTable.DefaultTable): - - extras = {0xFFFA: "ppgm", 0xFFFB: "cvt", 0xFFFC: "reserved", 0xFFFD: "fpgm"} - - indextable = "TSI0" - - def decompile(self, data, ttFont): - totalLength = len(data) - indextable = ttFont[self.indextable] - for indices, isExtra in zip( - (indextable.indices, indextable.extra_indices), (False, True) - ): - programs = {} - for i, (glyphID, textLength, textOffset) in enumerate(indices): - if isExtra: - name = self.extras[glyphID] - else: - name = ttFont.getGlyphName(glyphID) - if textOffset > totalLength: - self.log.warning("textOffset > totalLength; %r skipped" % name) - continue - if textLength < 0x8000: - # If the length stored in the record is less than 32768, then use - # that as the length of the record. - pass - elif textLength == 0x8000: - # If the length is 32768, compute the actual length as follows: - isLast = i == (len(indices) - 1) - if isLast: - if isExtra: - # For the last "extra" record (the very last record of the - # table), the length is the difference between the total - # length of the TSI1 table and the textOffset of the final - # record. 
- nextTextOffset = totalLength - else: - # For the last "normal" record (the last record just prior - # to the record containing the "magic number"), the length - # is the difference between the textOffset of the record - # following the "magic number" (0xFFFE) record (i.e. the - # first "extra" record), and the textOffset of the last - # "normal" record. - nextTextOffset = indextable.extra_indices[0][2] - else: - # For all other records with a length of 0x8000, the length is - # the difference between the textOffset of the record in - # question and the textOffset of the next record. - nextTextOffset = indices[i + 1][2] - assert nextTextOffset >= textOffset, "entries not sorted by offset" - if nextTextOffset > totalLength: - self.log.warning( - "nextTextOffset > totalLength; %r truncated" % name - ) - nextTextOffset = totalLength - textLength = nextTextOffset - textOffset - else: - from fontTools import ttLib - - raise ttLib.TTLibError( - "%r textLength (%d) must not be > 32768" % (name, textLength) - ) - text = data[textOffset : textOffset + textLength] - assert len(text) == textLength - text = tostr(text, encoding="utf-8") - if text: - programs[name] = text - if isExtra: - self.extraPrograms = programs - else: - self.glyphPrograms = programs - - def compile(self, ttFont): - if not hasattr(self, "glyphPrograms"): - self.glyphPrograms = {} - self.extraPrograms = {} - data = b"" - indextable = ttFont[self.indextable] - glyphNames = ttFont.getGlyphOrder() - - indices = [] - for i in range(len(glyphNames)): - if len(data) % 2: - data = ( - data + b"\015" - ) # align on 2-byte boundaries, fill with return chars. Yum. - name = glyphNames[i] - if name in self.glyphPrograms: - text = tobytes(self.glyphPrograms[name], encoding="utf-8") - else: - text = b"" - textLength = len(text) - if textLength >= 0x8000: - textLength = 0x8000 - indices.append((i, textLength, len(data))) - data = data + text - - extra_indices = [] - codes = sorted(self.extras.items()) - for i in range(len(codes)): - if len(data) % 2: - data = ( - data + b"\015" - ) # align on 2-byte boundaries, fill with return chars. 
- code, name = codes[i] - if name in self.extraPrograms: - text = tobytes(self.extraPrograms[name], encoding="utf-8") - else: - text = b"" - textLength = len(text) - if textLength >= 0x8000: - textLength = 0x8000 - extra_indices.append((code, textLength, len(data))) - data = data + text - indextable.set(indices, extra_indices) - return data - - def toXML(self, writer, ttFont): - names = sorted(self.glyphPrograms.keys()) - writer.newline() - for name in names: - text = self.glyphPrograms[name] - if not text: - continue - writer.begintag("glyphProgram", name=name) - writer.newline() - writer.write_noindent(text.replace("\r", "\n")) - writer.newline() - writer.endtag("glyphProgram") - writer.newline() - writer.newline() - extra_names = sorted(self.extraPrograms.keys()) - for name in extra_names: - text = self.extraPrograms[name] - if not text: - continue - writer.begintag("extraProgram", name=name) - writer.newline() - writer.write_noindent(text.replace("\r", "\n")) - writer.newline() - writer.endtag("extraProgram") - writer.newline() - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if not hasattr(self, "glyphPrograms"): - self.glyphPrograms = {} - self.extraPrograms = {} - lines = strjoin(content).replace("\r", "\n").split("\n") - text = "\r".join(lines[1:-1]) - if name == "glyphProgram": - self.glyphPrograms[attrs["name"]] = text - elif name == "extraProgram": - self.extraPrograms[attrs["name"]] = text diff --git a/spaces/devoworm-group/Lineage_Population/functions.py b/spaces/devoworm-group/Lineage_Population/functions.py deleted file mode 100644 index 17a3ef4f525a5152343e293a8b4cc69df370dd7f..0000000000000000000000000000000000000000 --- a/spaces/devoworm-group/Lineage_Population/functions.py +++ /dev/null @@ -1,85 +0,0 @@ -import streamlit as st -import numpy as np -import pandas as pd -from PIL import Image -from pathlib import Path -import joblib - -import numpy as np -import cv2 -import onnxruntime as ort -import imutils -# import matplotlib.pyplot as plt -import pandas as pd -import plotly.express as px - - -def scale_model_outputs(scaler_path, data): - scaler= joblib.load(scaler_path) - scaled=scaler.inverse_transform(data) - return(scaled) - - -def onnx_predict_lineage_population(input_image): - ort_session = ort.InferenceSession('onnx_models/lineage_population_model.onnx') - img = Image.fromarray(np.uint8(input_image)) - resized = img.resize((256, 256), Image.NEAREST) - - transposed=np.transpose(resized, (2, 1, 0)) - img_unsqueeze = expand_dims(transposed) - - onnx_outputs = ort_session.run(None, {'input': img_unsqueeze.astype('float32')}) - return(onnx_outputs[0]) - - - -def expand_dims(arr): - norm=(arr-np.min(arr))/(np.max(arr)-np.min(arr)) #normalize - ret = np.expand_dims(norm, axis=0) - return(ret) - - - -def lineage_population_model(): - selected_box2 = st.sidebar.selectbox( - 'Choose Example Input', - (['Example_1.png']) - ) - - st.title('Predict Cell Lineage Populations') - instructions = """ - Predict the population of cells in C. elegans embryo using fluorescence microscopy data. \n - Either upload your own image or select from the sidebar to get a preconfigured image. - The image you select or upload will be fed through the Deep Neural Network in real-time - and the output will be displayed to the screen. 
- """ - st.text(instructions) - file = st.file_uploader('Upload an image or choose an example') - example_image = Image.open('./images/lineage_population_examples/'+selected_box2).convert("RGB") - - col1, col2= st.columns(2) - - if file: - input = Image.open(file).convert("RGB") - fig1 = px.imshow(input, binary_string=True, labels=dict(x="Input Image")) - fig1.update(layout_coloraxis_showscale=False) - fig1.update_layout(margin=dict(l=0, r=0, b=0, t=0)) - col1.plotly_chart(fig1, use_container_width=True) - else: - input = example_image - fig1 = px.imshow(input, binary_string=True, labels=dict(x="Input Image")) - fig1.update(layout_coloraxis_showscale=False) - fig1.update_layout(margin=dict(l=0, r=0, b=0, t=0)) - col1.plotly_chart(fig1, use_container_width=True) - - pressed = st.button('Run') - if pressed: - st.empty() - output = onnx_predict_lineage_population(np.array(input)) - scaled_output = scale_model_outputs(scaler_path="./scaler.gz", data=output) - - for i in range(len(scaled_output[0])): - scaled_output[0][i]=int(round(scaled_output[0][i])) - - df = pd.DataFrame({"Lineage":["A", "E", "M", "P", "C", "D", "Z"] , "Population": scaled_output[0]}) - col2.table(df) \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/AutoCAD 2011 Keygen WORK Xforce.md b/spaces/diacanFperku/AutoGPT/AutoCAD 2011 Keygen WORK Xforce.md deleted file mode 100644 index 8c229402ca5544cacdd2f2366fa76482db9f891a..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/AutoCAD 2011 Keygen WORK Xforce.md +++ /dev/null @@ -1,16 +0,0 @@ -

    AutoCAD 2011 keygen xforce


Download File ··· https://gohhs.com/2uFURC



    - -страницу - -Autocad 2011 keygen - -A autocad 2011 keygen copier is usually organized by using a cassette of components, which is accompanied by a system that allows you to purchase the components of the job or finish a job which has been prepared by an expert. A cassette is far more cost effective than a copier and allows for both the purchase of the components of a job and assembling a job at a much faster rate. While a cassette of components may be less expensive, a cassette is a lot more complex than a standard copy machine. In order to utilize a cassette, you need to handle components and replace the cassette that is particular for a job, which can be extremely hard and time consuming. Copiers are much faster than a cassette and typically much easier to use. Copiers are typically much faster than a cassette for many reasons. The most obvious is because a cassette only holds a finite amount of components, while a copier may hold hundreds or thousands of components. Copiers can usually be switched on and produce a copy from a slide or from a job that is stored on a CD or DVD. Typically, cassette copiers must be connected to a CD or DVD in order to make copies. You can not easily change which components are being copied on a cassette copier. A cassette contains only a very specific set of components. You will not be able to change the other components that are on the cassette. On the other hand, a copier can copy many different types of components, allowing you to change the components being copied without the need to remove the cassette. - -A cassette is also much larger than a copier. Copiers are typically smaller than a cassette and can be placed on a desk or on top of a desk. A cassette is typically placed on the floor and you have to lift the cassette off of the floor in order to open the cassette. The more parts you need to copy, the larger your cassette will be. The cassette can also be taller and longer than your copier. This allows you to place many different types of parts on the cassette. A cassette can also be a bit more expensive than a copier. You need a cassette in order to copy components and a cassette is much more expensive than the standard copy machine. - -Copier Price Evaluations - -A basic copier will likely be cheaper than a cassette copier. However, when you purchase a cassette copier you will not have the benefits that a cassette copier provides. 4fefd39f24
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/BaDBoy V4.2 [Cheats 4 Counter-Strike 1.6] Pc Game _BEST_.md b/spaces/diacanFperku/AutoGPT/BaDBoy V4.2 [Cheats 4 Counter-Strike 1.6] Pc Game _BEST_.md deleted file mode 100644 index e545a0b55ddfa1c6a22eba9493bedcff9cb3ec45..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/BaDBoy V4.2 [Cheats 4 Counter-Strike 1.6] Pc Game _BEST_.md +++ /dev/null @@ -1,6 +0,0 @@ -

    BaDBoy v4.2 [Cheats 4 Counter-Strike 1.6] pc game


    Download Zip ··· https://gohhs.com/2uFV9J



    - -z>sys file> get dell-mcpci.sysv -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Download Mdsolids 4.0 Full Crack Idm.md b/spaces/diacanFperku/AutoGPT/Download Mdsolids 4.0 Full Crack Idm.md deleted file mode 100644 index 5e45b00bf1eecc561df9f2f4080457d361a9aac6..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Download Mdsolids 4.0 Full Crack Idm.md +++ /dev/null @@ -1,25 +0,0 @@ - -

    How to Download MDSolids 4.0 Full Crack IDM

    -

MDSolids is a software package for topics taught in the Mechanics of Materials course, such as beams, trusses, Mohr's circle transformations, section properties, torsion, and more[^1^]. It is designed to assist engineering students and professionals in solving a wide variety of engineering problems[^3^].

    -

    download mdsolids 4.0 full crack idm


    Download File ✶✶✶ https://gohhs.com/2uFUqR



    -

    If you want to download MDSolids 4.0 full crack IDM, you will need to follow these steps:

    -
      -
    1. Go to the official website of MDSolids at https://web.mst.edu/~mdsolids/ and click on the Download link.
    2. -
    3. Download MDSolids 4.1.0 (Microsoft MSI Installer Version) and save it on your computer.
    4. -
    5. Run the installer and follow the instructions to install MDSolids on your computer.
    6. -
    7. Open MDSolids and go to the Help menu. Click on Register MDSolids and enter your name and the registration code D93C8ADC. This code will remove the 30-day limitation and give you free access to the full version of MDSolids[^2^].
    8. -
    9. Enjoy using MDSolids for your engineering projects.
    10. -
    -

    Note: IDM stands for Internet Download Manager, a tool that can speed up and manage your downloads. You can download IDM from https://www.internetdownloadmanager.com/ and use it to download MDSolids faster.

    Benefits of Using MDSolids

    -

    MDSolids has many benefits for engineering students and professionals who want to learn and apply the concepts of Mechanics of Materials. Some of the benefits are:

    -

    -
      -
    • MDSolids is easy to use and has a user-friendly interface. You can access different modules and routines from the main menu or the toolbar. You can also customize the appearance and settings of MDSolids according to your preferences.
    • -
    • MDSolids covers a wide range of topics and problems in Mechanics of Materials, such as stress, strain, deformation, equilibrium, compatibility, loading, failure criteria, and more. You can use MDSolids to solve problems involving axial members, torsional members, beams, columns, pressure vessels, trusses, frames, and more.
    • -
    • MDSolids provides detailed explanations and solutions for each problem. You can see the formulas, diagrams, graphs, tables, and calculations that are used to solve the problem. You can also check your answers and compare them with the solutions provided by MDSolids.
    • -
    • MDSolids allows you to save, print, and export your data and results. You can save your data from an individual module or an entire session. You can print your data and results in a formatted report. You can also export your data and results to Excel spreadsheets or bitmap images.
    • -
    • MDSolids is compatible with Windows 10, Windows 8, Windows 7, and all prior versions of Windows. You can download and install MDSolids on any computer that runs on Windows operating system.
    • -
    -

MDSolids is a powerful and versatile software package that can help you learn and master the concepts of Mechanics of Materials. Whether you are a student or a professional, you will find MDSolids useful and beneficial for your engineering education and career.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Generic Text Only Driver Download Windows 7.md b/spaces/diacanFperku/AutoGPT/Generic Text Only Driver Download Windows 7.md deleted file mode 100644 index d98a2ae8ed2354d03211e2c70b769fb45121edaa..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Generic Text Only Driver Download Windows 7.md +++ /dev/null @@ -1,52 +0,0 @@ - -

    How to Download and Install Generic Text Only Driver for Windows 7

    -

    If you need to print plain text documents without any formatting, you might want to use the Generic Text Only driver. This driver is a built-in option in Windows that allows you to send raw text commands to your printer. It can be useful for printing receipts, labels, tickets, or other simple text documents.

    -

    In this article, we will show you how to download and install the Generic Text Only driver for Windows 7. We will also provide some tips on how to use it effectively.

    -

    generic text only driver download windows 7


Download ··· https://gohhs.com/2uFTSb



    -

    Step 1: Download the Generic Text Only Driver

    -

    The Generic Text Only driver is included in Windows 7, but you might need to update it to the latest version. To do this, you can use a reliable driver update tool like DriverGuide. DriverGuide is a free service that scans your computer and finds the best drivers for your devices. It also lets you download and install drivers with one click.

    -

    To download the Generic Text Only driver with DriverGuide, follow these steps:

    -
      -
    1. Go to https://www.driverguide.com/driver/download/Generic-Text-Only
    2. -
    3. Click on the green Download button next to the driver that matches your Windows 7 system.
    4. -
    5. Save the file to your computer and run it.
    6. -
    7. Follow the instructions on the screen to complete the installation.
    8. -
    -

    Step 2: Install the Generic Text Only Driver

    -

    After downloading the Generic Text Only driver, you need to install it on your computer. To do this, follow these steps:

    -
      -
    1. Go to Start > Devices and Printers.
    2. -
    3. Click on Add a printer.
    4. -
    5. Select Add a local printer.
    6. -
    7. Choose Use an existing port and select FILE: (Print to File) from the drop-down menu.
    8. -
    9. Click on Next.
    10. -
    11. Select Generic from the Manufacturer list and Generic / Text Only from the Printers list.
    12. -
    13. Click on Next.
    14. -
    15. Give your printer a name and click on Next.
    16. -
    17. Click on Finish to complete the installation.
    18. -
    -

    Step 3: Use the Generic Text Only Driver

    -

    Now that you have installed the Generic Text Only driver, you can use it to print plain text documents. To do this, follow these steps:

    -
      -
    1. Open the document that you want to print in a text editor like Notepad.
    2. -
    3. Go to File > Print.
    4. -
    5. Select Generic / Text Only as your printer and click on Print.
    6. -
    7. A dialog box will appear asking you to enter a file name and location for your output file. Choose a name and location that you can easily find later.
    8. -
    9. Click on Save to create your output file.
    10. -
    11. Open your output file with a text editor or a hex editor. You should see raw text commands that are sent to your printer.
    12. -
    13. If you want to print your output file, you need to copy it to your printer port. For example, if your printer is connected to LPT1, you can use this command in a command prompt: copy /b output.txt LPT1
    14. -
    - -

    Tips for Using the Generic Text Only Driver

    - -

    The Generic Text Only driver can be useful for printing simple text documents, but it has some limitations. Here are some tips for using it effectively:

    - -
      - -
    • The Generic Text Only driver does not support any formatting or graphics. If your document contains any of these elements, they will be ignored or printed as garbage characters.
    • - -
    • The Generic Text Only driver does not support any printer-specific features or settings. If you need to adjust the paper size, orientation, margins, font, or other options, you need to use escape sequences or control codes in your document. You can find more information about these codes in your printer manual or online.
    • - -
• The Generic Text Only driver does not support Unicode characters or non-ASCII characters. If your document contains any of these characters, they will be printed as question marks or other symbols. You need to use an encoding that your printer supports, such as plain ASCII (see the sketch below for a minimal example).
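To make the raw-text workflow concrete, here is a minimal Python sketch of what Step 3 and the tips above describe: it writes a plain-ASCII file, appends a form feed character (0x0C, which commonly ejects the page on text-mode printers), and then runs the same copy /b command shown in Step 3. The file name output.txt and the port name LPT1 are assumptions carried over from the Step 3 example, not requirements of the driver; adjust them for your own setup.

```python
import subprocess

# Plain ASCII only: the Generic / Text Only driver passes the bytes through unchanged.
lines = ["RECEIPT", "Item A    1.00", "Item B    2.50", "TOTAL     3.50"]

# newline="\r\n" writes DOS-style line endings, which most text-mode printers expect.
with open("output.txt", "w", encoding="ascii", newline="\r\n") as f:
    f.write("\n".join(lines) + "\n")
    f.write("\f")  # form feed (0x0C): commonly triggers a page eject on text printers

# Equivalent to running "copy /b output.txt LPT1" in a command prompt (Step 3 above).
subprocess.run(["cmd", "/c", "copy", "/b", "output.txt", "LPT1"], check=True)
```

Because the driver does no formatting of its own, whatever bytes you write into the file are exactly what the printer receives, which is why control codes such as the form feed have to live in the document itself.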

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Lenovo Windows 7 Pci Serial Port Driver.md b/spaces/diacanFperku/AutoGPT/Lenovo Windows 7 Pci Serial Port Driver.md deleted file mode 100644 index 09db8596572fb31c79ef852bea11d9e05e9ce8e3..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Lenovo Windows 7 Pci Serial Port Driver.md +++ /dev/null @@ -1,18 +0,0 @@ -

      lenovo windows 7 pci serial port driver


Download File ··· https://gohhs.com/2uFTxx



      -
-With the Intel 830G chipset, the device drivers run under the IA32 and IA64 architecture. To test a specific BIOS configuration, see the Integrated Performance Tools for the product that you want to test. The BIOS is self-configuring when it is loaded at power on.
-
-Author's Note: This tutorial is based on the Windows 7 and Vista drivers provided by Intel for the 830G chipset. I should also mention that Windows XP didn't have built-in support for the ISAPNP or PCI services. It should work on Windows 7 and Vista as well as Windows 2000 and Windows XP. If you are using the Intel 830G chipset device, you have two options for upgrading to the latest driver. This tutorial shows you how to upgrade the BIOS for your system. The Integrated Performance Tools for the system that you want to upgrade the BIOS are available free from Intel's website at
-
-The BIOS is usually installed on a flash memory chip, such as a flash memory card, a USB flash drive, or a diskette. You might need to ask your computer manufacturer for the BIOS flash drive. After the BIOS is installed, the BIOS is loaded when the computer boots, before Windows starts. If you are updating a BIOS version that is later than the version installed on your computer, you must boot from the flash drive or USB flash drive and insert the BIOS update before you reboot.
-
-If you do not have the BIOS upgrade for your computer or the appropriate drive, you can download the BIOS from Intel's website and load it using a Windows installation CD. After installing the BIOS update, run the Performance Update utility to update the BIOS for your computer. You might need to install the BIOS update on a flash drive or a CD-ROM for safekeeping, or install the BIOS update to a different computer that has the same type and model of chipset. You can also try updating the BIOS using a different computer. The general steps to update the BIOS are as follows: - Make sure that the computer is powered off. - Remove the battery to avoid any problems. - Place the BIOS flash drive in the computer's CD drive. - Turn the computer on.
-
-- Press the F12 key at the same time that you press the power button.
-
-- After a few seconds, the computer will ask you if you want to upgrade the BIOS.
-
-In Windows XP, the BIOS is stored in the system BIOS on the floppy diskette. You can upgrade the BIOS using a Windows 4fefd39f24
      -
      -
      -

      diff --git a/spaces/diffusers/controlnet-canny/README.md b/spaces/diffusers/controlnet-canny/README.md deleted file mode 100644 index b74ce27a0817ab2a6c04ff4d03e70ab759f255e0..0000000000000000000000000000000000000000 --- a/spaces/diffusers/controlnet-canny/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ControlNet Canny -emoji: 🐨 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/digitalxingtong/Lixiang-Bert-Vits2/monotonic_align/setup.py b/spaces/digitalxingtong/Lixiang-Bert-Vits2/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Lixiang-Bert-Vits2/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/text/chinese.py b/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. 
- phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if __name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/maskrcnn_pipeline.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/maskrcnn_pipeline.py deleted file mode 100644 index fff3e071ea115843752f34de8141fa982b8ad14b..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/maskrcnn_pipeline.py +++ /dev/null @@ -1,57 +0,0 @@ -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -train_pipeline = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='ScaleAspectJitter', - img_scale=None, - keep_ratio=False, - resize_type='indep_sample_in_range', - scale_range=(640, 2560)), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict( - type='RandomCropInstances', - target_size=(640, 640), - mask_type='union_all', - instance_key='gt_masks'), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] - -# for ctw1500 -img_scale_ctw1500 = (1600, 1600) -test_pipeline_ctw1500 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale_ctw1500, # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] - -# for icdar2015 -img_scale_icdar2015 = (1920, 1920) -test_pipeline_icdar2015 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale_icdar2015, # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Spell-book.md b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Spell-book.md deleted file mode 100644 index 9b7c76c953f76f8a486bbe5156de4e9ebb3f0ec0..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Spell-book.md +++ /dev/null @@ -1,107 +0,0 @@ -You have now entered a hidden corner of the internet. - -A confusing yet intriguing realm of paradoxes and contradictions. - -A place where you will find out that what you thought you knew, you in fact didn't know, and what you didn't know was in front of you all along. - -![](https://i.pinimg.com/originals/6e/e2/7b/6ee27bad351d3aca470d80f1033ba9c6.jpg) - -*In other words, here I will document little-known facts about this web UI that I could not find another place for in the wiki.* - -#### You can train LoRAs in CPU mode - -Load the web UI with - -``` -python server.py --cpu -``` - -and start training the LoRA from the training tab as usual. 
- -#### 8-bit mode works with CPU offloading - -``` -python server.py --load-in-8bit --gpu-memory 4000MiB -``` - -#### `--pre_layer`, and not `--gpu-memory`, is the right way to do CPU offloading with 4-bit models - -``` -python server.py --wbits 4 --groupsize 128 --pre_layer 20 -``` - -#### Models can be loaded in 32-bit, 16-bit, 8-bit, and 4-bit modes - -``` -python server.py --cpu -python server.py -python server.py --load-in-8bit -python server.py --wbits 4 -``` - -#### The web UI works with any version of GPTQ-for-LLaMa - -Including the up to date triton and cuda branches. But you have to delete the `repositories/GPTQ-for-LLaMa` folder and reinstall the new one every time: - -``` -cd text-generation-webui/repositories -rm -r GPTQ-for-LLaMa -pip uninstall quant-cuda -git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda # or any other repository and branch -cd GPTQ-for-LLaMa -python setup_cuda.py install -``` - -#### Instruction-following templates are represented as chat characters - -https://github.com/oobabooga/text-generation-webui/tree/main/characters/instruction-following - -#### The right way to run Alpaca, Open Assistant, Vicuna, etc is Instruct mode, not normal chat mode - -Otherwise the prompt will not be formatted correctly. - -1. Start the web UI with - -``` -python server.py --chat -``` - -2. Click on the "instruct" option under "Chat modes" - -3. Select the correct template in the hidden dropdown menu that will become visible. - -#### Notebook mode is best mode - -Ascended individuals have realized that notebook mode is the superset of chat mode and can do chats with ultimate flexibility, including group chats, editing replies, starting a new bot reply in a given way, and impersonating. - -#### RWKV is a RNN - -Most models are transformers, but not RWKV, which is a RNN. It's a great model. - -#### `--gpu-memory` is not a hard limit on the GPU memory - -It is simply a parameter that is passed to the `accelerate` library while loading the model. More memory will be allocated during generation. That's why this parameter has to be set to less than your total GPU memory. - -#### Contrastive search perhaps the best preset - -But it uses a ton of VRAM. - -#### You can check the sha256sum of downloaded models with the download script - -``` -python download-model.py facebook/galactica-125m --check -``` - -#### The download script continues interrupted downloads by default - -It doesn't start over. - -#### You can download models with multiple threads - -``` -python download-model.py facebook/galactica-125m --threads 8 -``` - -#### LoRAs work in 4-bit mode - -You need to follow [these instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) and then start the web UI with the `--monkey-patch` flag. diff --git a/spaces/duycse1603/math2tex/HybridViT/module/component/common/mae_posembed.py b/spaces/duycse1603/math2tex/HybridViT/module/component/common/mae_posembed.py deleted file mode 100644 index 187ecd981c59df11794df9d5be8c02d8a37e04d9..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/HybridViT/module/component/common/mae_posembed.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-# -------------------------------------------------------- -# Position embedding utils -# -------------------------------------------------------- - -import numpy as np -# -------------------------------------------------------- -# 2D sine-cosine position embedding -# References: -# Transformer: https://github.com/tensorflow/models/blob/master/official/nlp/transformer/model_utils.py -# MoCo v3: https://github.com/facebookresearch/moco-v3 -# -------------------------------------------------------- - -def get_2d_sincos_pos_embed(embed_dim, grid_size_H, grid_size_W, cls_token=False): - """ - grid_size: int of the grid height and width - return: - pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token) - """ - grid_h = np.arange(grid_size_H, dtype=np.float32) - grid_w = np.arange(grid_size_W, dtype=np.float32) - grid = np.meshgrid(grid_w, grid_h) # here w goes first - grid = np.stack(grid, axis=0) - - grid = grid.reshape([2, 1, grid_size_H, grid_size_W]) - - print('new grid shape', grid.shape) - - pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid) - if cls_token: - pos_embed = np.concatenate([np.zeros([1, embed_dim]), pos_embed], axis=0) - return pos_embed - - -def get_2d_sincos_pos_embed_from_grid(embed_dim, grid): - assert embed_dim % 2 == 0 - - # use half of dimensions to encode grid_h - emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2) - emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2) - - emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D) - return emb - - -def get_1d_sincos_pos_embed_from_grid(embed_dim, pos): - """ - embed_dim: output dimension for each position - pos: a list of positions to be encoded: size (M,) - out: (M, D) - """ - assert embed_dim % 2 == 0 - omega = np.arange(embed_dim // 2, dtype=np.float32) - omega /= embed_dim / 2. - omega = 1. 
/ 10000**omega # (D/2,) - - pos = pos.reshape(-1) # (M,) - out = np.einsum('m,d->md', pos, omega) # (M, D/2), outer product - - emb_sin = np.sin(out) # (M, D/2) - emb_cos = np.cos(out) # (M, D/2) - - emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D) - return emb - -if __name__ == '__main__': - pos_embed = get_2d_sincos_pos_embed(256, 800, 800, True) - print(pos_embed.shape) \ No newline at end of file diff --git a/spaces/elkraken/Video-Object-Detection/train_aux.py b/spaces/elkraken/Video-Object-Detection/train_aux.py deleted file mode 100644 index 0e8053f8503ba762843f6dd56219f1e6c4e74ccc..0000000000000000000000000000000000000000 --- a/spaces/elkraken/Video-Object-Detection/train_aux.py +++ /dev/null @@ -1,699 +0,0 @@ -import argparse -import logging -import math -import os -import random -import time -from copy import deepcopy -from pathlib import Path -from threading import Thread - -import numpy as np -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -import torch.optim as optim -import torch.optim.lr_scheduler as lr_scheduler -import torch.utils.data -import yaml -from torch.cuda import amp -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.utils.tensorboard import SummaryWriter -from tqdm import tqdm - -import test # import test.py to get mAP after each epoch -from models.experimental import attempt_load -from models.yolo import Model -from utils.autoanchor import check_anchors -from utils.datasets import create_dataloader -from utils.general import labels_to_class_weights, increment_path, labels_to_image_weights, init_seeds, \ - fitness, strip_optimizer, get_latest_run, check_dataset, check_file, check_git_status, check_img_size, \ - check_requirements, print_mutation, set_logging, one_cycle, colorstr -from utils.google_utils import attempt_download -from utils.loss import ComputeLoss, ComputeLossAuxOTA -from utils.plots import plot_images, plot_labels, plot_results, plot_evolution -from utils.torch_utils import ModelEMA, select_device, intersect_dicts, torch_distributed_zero_first, is_parallel -from utils.wandb_logging.wandb_utils import WandbLogger, check_wandb_resume - -logger = logging.getLogger(__name__) - - -def train(hyp, opt, device, tb_writer=None): - logger.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items())) - save_dir, epochs, batch_size, total_batch_size, weights, rank = \ - Path(opt.save_dir), opt.epochs, opt.batch_size, opt.total_batch_size, opt.weights, opt.global_rank - - # Directories - wdir = save_dir / 'weights' - wdir.mkdir(parents=True, exist_ok=True) # make dir - last = wdir / 'last.pt' - best = wdir / 'best.pt' - results_file = save_dir / 'results.txt' - - # Save run settings - with open(save_dir / 'hyp.yaml', 'w') as f: - yaml.dump(hyp, f, sort_keys=False) - with open(save_dir / 'opt.yaml', 'w') as f: - yaml.dump(vars(opt), f, sort_keys=False) - - # Configure - plots = not opt.evolve # create plots - cuda = device.type != 'cpu' - init_seeds(2 + rank) - with open(opt.data) as f: - data_dict = yaml.load(f, Loader=yaml.SafeLoader) # data dict - is_coco = opt.data.endswith('coco.yaml') - - # Logging- Doing this before checking the dataset. 
Might update data_dict - loggers = {'wandb': None} # loggers dict - if rank in [-1, 0]: - opt.hyp = hyp # add hyperparameters - run_id = torch.load(weights).get('wandb_id') if weights.endswith('.pt') and os.path.isfile(weights) else None - wandb_logger = WandbLogger(opt, Path(opt.save_dir).stem, run_id, data_dict) - loggers['wandb'] = wandb_logger.wandb - data_dict = wandb_logger.data_dict - if wandb_logger.wandb: - weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp # WandbLogger might update weights, epochs if resuming - - nc = 1 if opt.single_cls else int(data_dict['nc']) # number of classes - names = ['item'] if opt.single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names - assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, opt.data) # check - - # Model - pretrained = weights.endswith('.pt') - if pretrained: - with torch_distributed_zero_first(rank): - attempt_download(weights) # download if not found locally - ckpt = torch.load(weights, map_location=device) # load checkpoint - model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create - exclude = ['anchor'] if (opt.cfg or hyp.get('anchors')) and not opt.resume else [] # exclude keys - state_dict = ckpt['model'].float().state_dict() # to FP32 - state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude) # intersect - model.load_state_dict(state_dict, strict=False) # load - logger.info('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights)) # report - else: - model = Model(opt.cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create - with torch_distributed_zero_first(rank): - check_dataset(data_dict) # check - train_path = data_dict['train'] - test_path = data_dict['val'] - - # Freeze - freeze = [] # parameter names to freeze (full or partial) - for k, v in model.named_parameters(): - v.requires_grad = True # train all layers - if any(x in k for x in freeze): - print('freezing %s' % k) - v.requires_grad = False - - # Optimizer - nbs = 64 # nominal batch size - accumulate = max(round(nbs / total_batch_size), 1) # accumulate loss before optimizing - hyp['weight_decay'] *= total_batch_size * accumulate / nbs # scale weight_decay - logger.info(f"Scaled weight_decay = {hyp['weight_decay']}") - - pg0, pg1, pg2 = [], [], [] # optimizer parameter groups - for k, v in model.named_modules(): - if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter): - pg2.append(v.bias) # biases - if isinstance(v, nn.BatchNorm2d): - pg0.append(v.weight) # no decay - elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter): - pg1.append(v.weight) # apply decay - if hasattr(v, 'im'): - if hasattr(v.im, 'implicit'): - pg0.append(v.im.implicit) - else: - for iv in v.im: - pg0.append(iv.implicit) - if hasattr(v, 'imc'): - if hasattr(v.imc, 'implicit'): - pg0.append(v.imc.implicit) - else: - for iv in v.imc: - pg0.append(iv.implicit) - if hasattr(v, 'imb'): - if hasattr(v.imb, 'implicit'): - pg0.append(v.imb.implicit) - else: - for iv in v.imb: - pg0.append(iv.implicit) - if hasattr(v, 'imo'): - if hasattr(v.imo, 'implicit'): - pg0.append(v.imo.implicit) - else: - for iv in v.imo: - pg0.append(iv.implicit) - if hasattr(v, 'ia'): - if hasattr(v.ia, 'implicit'): - pg0.append(v.ia.implicit) - else: - for iv in v.ia: - pg0.append(iv.implicit) - if hasattr(v, 'attn'): - if hasattr(v.attn, 'logit_scale'): - pg0.append(v.attn.logit_scale) - if hasattr(v.attn, 'q_bias'): - pg0.append(v.attn.q_bias) 
- if hasattr(v.attn, 'v_bias'): - pg0.append(v.attn.v_bias) - if hasattr(v.attn, 'relative_position_bias_table'): - pg0.append(v.attn.relative_position_bias_table) - if hasattr(v, 'rbr_dense'): - if hasattr(v.rbr_dense, 'weight_rbr_origin'): - pg0.append(v.rbr_dense.weight_rbr_origin) - if hasattr(v.rbr_dense, 'weight_rbr_avg_conv'): - pg0.append(v.rbr_dense.weight_rbr_avg_conv) - if hasattr(v.rbr_dense, 'weight_rbr_pfir_conv'): - pg0.append(v.rbr_dense.weight_rbr_pfir_conv) - if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_idconv1'): - pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_idconv1) - if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_conv2'): - pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_conv2) - if hasattr(v.rbr_dense, 'weight_rbr_gconv_dw'): - pg0.append(v.rbr_dense.weight_rbr_gconv_dw) - if hasattr(v.rbr_dense, 'weight_rbr_gconv_pw'): - pg0.append(v.rbr_dense.weight_rbr_gconv_pw) - if hasattr(v.rbr_dense, 'vector'): - pg0.append(v.rbr_dense.vector) - - if opt.adam: - optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999)) # adjust beta1 to momentum - else: - optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True) - - optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']}) # add pg1 with weight_decay - optimizer.add_param_group({'params': pg2}) # add pg2 (biases) - logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0))) - del pg0, pg1, pg2 - - # Scheduler https://arxiv.org/pdf/1812.01187.pdf - # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR - if opt.linear_lr: - lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear - else: - lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf'] - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf) - # plot_lr_scheduler(optimizer, scheduler, epochs) - - # EMA - ema = ModelEMA(model) if rank in [-1, 0] else None - - # Resume - start_epoch, best_fitness = 0, 0.0 - if pretrained: - # Optimizer - if ckpt['optimizer'] is not None: - optimizer.load_state_dict(ckpt['optimizer']) - best_fitness = ckpt['best_fitness'] - - # EMA - if ema and ckpt.get('ema'): - ema.ema.load_state_dict(ckpt['ema'].float().state_dict()) - ema.updates = ckpt['updates'] - - # Results - if ckpt.get('training_results') is not None: - results_file.write_text(ckpt['training_results']) # write results.txt - - # Epochs - start_epoch = ckpt['epoch'] + 1 - if opt.resume: - assert start_epoch > 0, '%s training to %g epochs is finished, nothing to resume.' % (weights, epochs) - if epochs < start_epoch: - logger.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' 
% - (weights, ckpt['epoch'], epochs)) - epochs += ckpt['epoch'] # finetune additional epochs - - del ckpt, state_dict - - # Image sizes - gs = max(int(model.stride.max()), 32) # grid size (max stride) - nl = model.model[-1].nl # number of detection layers (used for scaling hyp['obj']) - imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size] # verify imgsz are gs-multiples - - # DP mode - if cuda and rank == -1 and torch.cuda.device_count() > 1: - model = torch.nn.DataParallel(model) - - # SyncBatchNorm - if opt.sync_bn and cuda and rank != -1: - model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device) - logger.info('Using SyncBatchNorm()') - - # Trainloader - dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt, - hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, rank=rank, - world_size=opt.world_size, workers=opt.workers, - image_weights=opt.image_weights, quad=opt.quad, prefix=colorstr('train: ')) - mlc = np.concatenate(dataset.labels, 0)[:, 0].max() # max label class - nb = len(dataloader) # number of batches - assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1) - - # Process 0 - if rank in [-1, 0]: - testloader = create_dataloader(test_path, imgsz_test, batch_size * 2, gs, opt, # testloader - hyp=hyp, cache=opt.cache_images and not opt.notest, rect=True, rank=-1, - world_size=opt.world_size, workers=opt.workers, - pad=0.5, prefix=colorstr('val: '))[0] - - if not opt.resume: - labels = np.concatenate(dataset.labels, 0) - c = torch.tensor(labels[:, 0]) # classes - # cf = torch.bincount(c.long(), minlength=nc) + 1. # frequency - # model._initialize_biases(cf.to(device)) - if plots: - #plot_labels(labels, names, save_dir, loggers) - if tb_writer: - tb_writer.add_histogram('classes', c, 0) - - # Anchors - if not opt.noautoanchor: - check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz) - model.half().float() # pre-reduce anchor precision - - # DDP mode - if cuda and rank != -1: - model = DDP(model, device_ids=[opt.local_rank], output_device=opt.local_rank, - # nn.MultiheadAttention incompatibility with DDP https://github.com/pytorch/pytorch/issues/26698 - find_unused_parameters=any(isinstance(layer, nn.MultiheadAttention) for layer in model.modules())) - - # Model parameters - hyp['box'] *= 3. / nl # scale to layers - hyp['cls'] *= nc / 80. * 3. / nl # scale to classes and layers - hyp['obj'] *= (imgsz / 640) ** 2 * 3. 
/ nl # scale to image size and layers - hyp['label_smoothing'] = opt.label_smoothing - model.nc = nc # attach number of classes to model - model.hyp = hyp # attach hyperparameters to model - model.gr = 1.0 # iou loss ratio (obj_loss = 1.0 or iou) - model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights - model.names = names - - # Start training - t0 = time.time() - nw = max(round(hyp['warmup_epochs'] * nb), 1000) # number of warmup iterations, max(3 epochs, 1k iterations) - # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training - maps = np.zeros(nc) # mAP per class - results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls) - scheduler.last_epoch = start_epoch - 1 # do not move - scaler = amp.GradScaler(enabled=cuda) - compute_loss_ota = ComputeLossAuxOTA(model) # init loss class - compute_loss = ComputeLoss(model) # init loss class - logger.info(f'Image sizes {imgsz} train, {imgsz_test} test\n' - f'Using {dataloader.num_workers} dataloader workers\n' - f'Logging results to {save_dir}\n' - f'Starting training for {epochs} epochs...') - torch.save(model, wdir / 'init.pt') - for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------ - model.train() - - # Update image weights (optional) - if opt.image_weights: - # Generate indices - if rank in [-1, 0]: - cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights - iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights - dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx - # Broadcast if DDP - if rank != -1: - indices = (torch.tensor(dataset.indices) if rank == 0 else torch.zeros(dataset.n)).int() - dist.broadcast(indices, 0) - if rank != 0: - dataset.indices = indices.cpu().numpy() - - # Update mosaic border - # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs) - # dataset.mosaic_border = [b - imgsz, -b] # height, width borders - - mloss = torch.zeros(4, device=device) # mean losses - if rank != -1: - dataloader.sampler.set_epoch(epoch) - pbar = enumerate(dataloader) - logger.info(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'total', 'labels', 'img_size')) - if rank in [-1, 0]: - pbar = tqdm(pbar, total=nb) # progress bar - optimizer.zero_grad() - for i, (imgs, targets, paths, _) in pbar: # batch ------------------------------------------------------------- - ni = i + nb * epoch # number integrated batches (since train start) - imgs = imgs.to(device, non_blocking=True).float() / 255.0 # uint8 to float32, 0-255 to 0.0-1.0 - - # Warmup - if ni <= nw: - xi = [0, nw] # x interp - # model.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou) - accumulate = max(1, np.interp(ni, xi, [1, nbs / total_batch_size]).round()) - for j, x in enumerate(optimizer.param_groups): - # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0 - x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)]) - if 'momentum' in x: - x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']]) - - # Multi-scale - if opt.multi_scale: - sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs # size - sf = sz / max(imgs.shape[2:]) # scale factor - if sf != 1: - ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple) - imgs = F.interpolate(imgs, 
size=ns, mode='bilinear', align_corners=False) - - # Forward - with amp.autocast(enabled=cuda): - pred = model(imgs) # forward - loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs) # loss scaled by batch_size - if rank != -1: - loss *= opt.world_size # gradient averaged between devices in DDP mode - if opt.quad: - loss *= 4. - - # Backward - scaler.scale(loss).backward() - - # Optimize - if ni % accumulate == 0: - scaler.step(optimizer) # optimizer.step - scaler.update() - optimizer.zero_grad() - if ema: - ema.update(model) - - # Print - if rank in [-1, 0]: - mloss = (mloss * i + loss_items) / (i + 1) # update mean losses - mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0) # (GB) - s = ('%10s' * 2 + '%10.4g' * 6) % ( - '%g/%g' % (epoch, epochs - 1), mem, *mloss, targets.shape[0], imgs.shape[-1]) - pbar.set_description(s) - - # Plot - if plots and ni < 10: - f = save_dir / f'train_batch{ni}.jpg' # filename - Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start() - # if tb_writer: - # tb_writer.add_image(f, result, dataformats='HWC', global_step=epoch) - # tb_writer.add_graph(torch.jit.trace(model, imgs, strict=False), []) # add model graph - elif plots and ni == 10 and wandb_logger.wandb: - wandb_logger.log({"Mosaics": [wandb_logger.wandb.Image(str(x), caption=x.name) for x in - save_dir.glob('train*.jpg') if x.exists()]}) - - # end batch ------------------------------------------------------------------------------------------------ - # end epoch ---------------------------------------------------------------------------------------------------- - - # Scheduler - lr = [x['lr'] for x in optimizer.param_groups] # for tensorboard - scheduler.step() - - # DDP process 0 or single-GPU - if rank in [-1, 0]: - # mAP - ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride', 'class_weights']) - final_epoch = epoch + 1 == epochs - if not opt.notest or final_epoch: # Calculate mAP - wandb_logger.current_epoch = epoch + 1 - results, maps, times = test.test(data_dict, - batch_size=batch_size * 2, - imgsz=imgsz_test, - model=ema.ema, - single_cls=opt.single_cls, - dataloader=testloader, - save_dir=save_dir, - verbose=nc < 50 and final_epoch, - plots=plots and final_epoch, - wandb_logger=wandb_logger, - compute_loss=compute_loss, - is_coco=is_coco, - v5_metric=opt.v5_metric) - - # Write - with open(results_file, 'a') as f: - f.write(s + '%10.4g' * 7 % results + '\n') # append metrics, val_loss - if len(opt.name) and opt.bucket: - os.system('gsutil cp %s gs://%s/results/results%s.txt' % (results_file, opt.bucket, opt.name)) - - # Log - tags = ['train/box_loss', 'train/obj_loss', 'train/cls_loss', # train loss - 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', - 'val/box_loss', 'val/obj_loss', 'val/cls_loss', # val loss - 'x/lr0', 'x/lr1', 'x/lr2'] # params - for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags): - if tb_writer: - tb_writer.add_scalar(tag, x, epoch) # tensorboard - if wandb_logger.wandb: - wandb_logger.log({tag: x}) # W&B - - # Update best mAP - fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95] - if fi > best_fitness: - best_fitness = fi - wandb_logger.end_epoch(best_result=best_fitness == fi) - - # Save model - if (not opt.nosave) or (final_epoch and not opt.evolve): # if save - ckpt = {'epoch': epoch, - 'best_fitness': best_fitness, - 'training_results': results_file.read_text(), - 'model': 
deepcopy(model.module if is_parallel(model) else model).half(), - 'ema': deepcopy(ema.ema).half(), - 'updates': ema.updates, - 'optimizer': optimizer.state_dict(), - 'wandb_id': wandb_logger.wandb_run.id if wandb_logger.wandb else None} - - # Save last, best and delete - torch.save(ckpt, last) - if best_fitness == fi: - torch.save(ckpt, best) - if (best_fitness == fi) and (epoch >= 200): - torch.save(ckpt, wdir / 'best_{:03d}.pt'.format(epoch)) - if epoch == 0: - torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch)) - elif ((epoch+1) % 25) == 0: - torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch)) - elif epoch >= (epochs-5): - torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch)) - if wandb_logger.wandb: - if ((epoch + 1) % opt.save_period == 0 and not final_epoch) and opt.save_period != -1: - wandb_logger.log_model( - last.parent, opt, epoch, fi, best_model=best_fitness == fi) - del ckpt - - # end epoch ---------------------------------------------------------------------------------------------------- - # end training - if rank in [-1, 0]: - # Plots - if plots: - plot_results(save_dir=save_dir) # save as results.png - if wandb_logger.wandb: - files = ['results.png', 'confusion_matrix.png', *[f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R')]] - wandb_logger.log({"Results": [wandb_logger.wandb.Image(str(save_dir / f), caption=f) for f in files - if (save_dir / f).exists()]}) - # Test best.pt - logger.info('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600)) - if opt.data.endswith('coco.yaml') and nc == 80: # if COCO - for m in (last, best) if best.exists() else (last): # speed, mAP tests - results, _, _ = test.test(opt.data, - batch_size=batch_size * 2, - imgsz=imgsz_test, - conf_thres=0.001, - iou_thres=0.7, - model=attempt_load(m, device).half(), - single_cls=opt.single_cls, - dataloader=testloader, - save_dir=save_dir, - save_json=True, - plots=False, - is_coco=is_coco, - v5_metric=opt.v5_metric) - - # Strip optimizers - final = best if best.exists() else last # final model - for f in last, best: - if f.exists(): - strip_optimizer(f) # strip optimizers - if opt.bucket: - os.system(f'gsutil cp {final} gs://{opt.bucket}/weights') # upload - if wandb_logger.wandb and not opt.evolve: # Log the stripped model - wandb_logger.wandb.log_artifact(str(final), type='model', - name='run_' + wandb_logger.wandb_run.id + '_model', - aliases=['last', 'best', 'stripped']) - wandb_logger.finish_run() - else: - dist.destroy_process_group() - torch.cuda.empty_cache() - return results - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default='yolo7.pt', help='initial weights path') - parser.add_argument('--cfg', type=str, default='', help='model.yaml path') - parser.add_argument('--data', type=str, default='data/coco.yaml', help='data.yaml path') - parser.add_argument('--hyp', type=str, default='data/hyp.scratch.p5.yaml', help='hyperparameters path') - parser.add_argument('--epochs', type=int, default=300) - parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs') - parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes') - parser.add_argument('--rect', action='store_true', help='rectangular training') - parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') - parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') - 
parser.add_argument('--notest', action='store_true', help='only test final epoch') - parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check') - parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters') - parser.add_argument('--bucket', type=str, default='', help='gsutil bucket') - parser.add_argument('--cache-images', action='store_true', help='cache images for faster training') - parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%') - parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class') - parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer') - parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode') - parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify') - parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers') - parser.add_argument('--project', default='runs/train', help='save to project/name') - parser.add_argument('--entity', default=None, help='W&B entity') - parser.add_argument('--name', default='exp', help='save to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--quad', action='store_true', help='quad dataloader') - parser.add_argument('--linear-lr', action='store_true', help='linear LR') - parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon') - parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table') - parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B') - parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch') - parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used') - parser.add_argument('--v5-metric', action='store_true', help='assume maximum recall as 1.0 in AP calculation') - opt = parser.parse_args() - - # Set DDP variables - opt.world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1 - opt.global_rank = int(os.environ['RANK']) if 'RANK' in os.environ else -1 - set_logging(opt.global_rank) - #if opt.global_rank in [-1, 0]: - # check_git_status() - # check_requirements() - - # Resume - wandb_run = check_wandb_resume(opt) - if opt.resume and not wandb_run: # resume an interrupted run - ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run() # specified or most recent path - assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist' - apriori = opt.global_rank, opt.local_rank - with open(Path(ckpt).parent.parent / 'opt.yaml') as f: - opt = argparse.Namespace(**yaml.load(f, Loader=yaml.SafeLoader)) # replace - opt.cfg, opt.weights, opt.resume, opt.batch_size, opt.global_rank, opt.local_rank = '', ckpt, True, opt.total_batch_size, *apriori # reinstate - logger.info('Resuming training from %s' % ckpt) - else: - # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml') - opt.data, opt.cfg, opt.hyp = check_file(opt.data), 
check_file(opt.cfg), check_file(opt.hyp) # check files - assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified' - opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size))) # extend to 2 sizes (train, test) - opt.name = 'evolve' if opt.evolve else opt.name - opt.save_dir = increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok | opt.evolve) # increment run - - # DDP mode - opt.total_batch_size = opt.batch_size - device = select_device(opt.device, batch_size=opt.batch_size) - if opt.local_rank != -1: - assert torch.cuda.device_count() > opt.local_rank - torch.cuda.set_device(opt.local_rank) - device = torch.device('cuda', opt.local_rank) - dist.init_process_group(backend='nccl', init_method='env://') # distributed backend - assert opt.batch_size % opt.world_size == 0, '--batch-size must be multiple of CUDA device count' - opt.batch_size = opt.total_batch_size // opt.world_size - - # Hyperparameters - with open(opt.hyp) as f: - hyp = yaml.load(f, Loader=yaml.SafeLoader) # load hyps - - # Train - logger.info(opt) - if not opt.evolve: - tb_writer = None # init loggers - if opt.global_rank in [-1, 0]: - prefix = colorstr('tensorboard: ') - logger.info(f"{prefix}Start with 'tensorboard --logdir {opt.project}', view at http://localhost:6006/") - tb_writer = SummaryWriter(opt.save_dir) # Tensorboard - train(hyp, opt, device, tb_writer) - - # Evolve hyperparameters (optional) - else: - # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit) - meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3) - 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf) - 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1 - 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay - 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok) - 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum - 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr - 'box': (1, 0.02, 0.2), # box loss gain - 'cls': (1, 0.2, 4.0), # cls loss gain - 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight - 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels) - 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight - 'iou_t': (0, 0.1, 0.7), # IoU training threshold - 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold - 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore) - 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5) - 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction) - 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction) - 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction) - 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg) - 'translate': (1, 0.0, 0.9), # image translation (+/- fraction) - 'scale': (1, 0.0, 0.9), # image scale (+/- gain) - 'shear': (1, 0.0, 10.0), # image shear (+/- deg) - 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001 - 'flipud': (1, 0.0, 1.0), # image flip up-down (probability) - 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability) - 'mosaic': (1, 0.0, 1.0), # image mixup (probability) - 'mixup': (1, 0.0, 1.0)} # image mixup (probability) - - with open(opt.hyp, errors='ignore') as f: - hyp = yaml.safe_load(f) # load hyps dict - if 'anchors' not in hyp: # anchors commented in hyp.yaml - hyp['anchors'] = 3 - - assert opt.local_rank == -1, 'DDP mode not implemented for --evolve' - opt.notest, opt.nosave = True, True # only test/save 
final epoch - # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices - yaml_file = Path(opt.save_dir) / 'hyp_evolved.yaml' # save best result here - if opt.bucket: - os.system('gsutil cp gs://%s/evolve.txt .' % opt.bucket) # download evolve.txt if exists - - for _ in range(300): # generations to evolve - if Path('evolve.txt').exists(): # if evolve.txt exists: select best hyps and mutate - # Select parent(s) - parent = 'single' # parent selection method: 'single' or 'weighted' - x = np.loadtxt('evolve.txt', ndmin=2) - n = min(5, len(x)) # number of previous results to consider - x = x[np.argsort(-fitness(x))][:n] # top n mutations - w = fitness(x) - fitness(x).min() # weights - if parent == 'single' or len(x) == 1: - # x = x[random.randint(0, n - 1)] # random selection - x = x[random.choices(range(n), weights=w)[0]] # weighted selection - elif parent == 'weighted': - x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination - - # Mutate - mp, s = 0.8, 0.2 # mutation probability, sigma - npr = np.random - npr.seed(int(time.time())) - g = np.array([x[0] for x in meta.values()]) # gains 0-1 - ng = len(meta) - v = np.ones(ng) - while all(v == 1): # mutate until a change occurs (prevent duplicates) - v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0) - for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300) - hyp[k] = float(x[i + 7] * v[i]) # mutate - - # Constrain to limits - for k, v in meta.items(): - hyp[k] = max(hyp[k], v[1]) # lower limit - hyp[k] = min(hyp[k], v[2]) # upper limit - hyp[k] = round(hyp[k], 5) # significant digits - - # Train mutation - results = train(hyp.copy(), opt, device) - - # Write mutation results - print_mutation(hyp.copy(), results, yaml_file, opt.bucket) - - # Plot results - plot_evolution(yaml_file) - print(f'Hyperparameter evolution complete. 
Best results saved as: {yaml_file}\n' - f'Command to train a new model with these hyperparameters: $ python train.py --hyp {yaml_file}') diff --git a/spaces/emc348/faces-through-time/color_transfer_loss.py b/spaces/emc348/faces-through-time/color_transfer_loss.py deleted file mode 100644 index febfb5db954078c0839c93a3dd11a86451839c8c..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/color_transfer_loss.py +++ /dev/null @@ -1,60 +0,0 @@ -from typing import List, Optional - -import torch -from torch import nn -from torch.nn.functional import ( - smooth_l1_loss, -) - - -def flatten_CHW(im: torch.Tensor) -> torch.Tensor: - """ - (B, C, H, W) -> (B, -1) - """ - B = im.shape[0] - return im.reshape(B, -1) - - -def stddev(x: torch.Tensor) -> torch.Tensor: - """ - x: (B, -1), assume with mean normalized - Retuens: - stddev: (B) - """ - return torch.sqrt(torch.mean(x * x, dim=-1)) - - -def gram_matrix(input_): - B, C = input_.shape[:2] - features = input_.view(B, C, -1) - N = features.shape[-1] - G = torch.bmm(features, features.transpose(1, 2)) # C x C - return G.div(C * N) - - -class ColorTransferLoss(nn.Module): - """Penalize the gram matrix difference between StyleGAN2's ToRGB outputs""" - def __init__( - self, - init_rgbs, - scale_rgb: bool = False - ): - super().__init__() - - with torch.no_grad(): - init_feats = [x.detach() for x in init_rgbs] - self.stds = [stddev(flatten_CHW(rgb)) if scale_rgb else 1 for rgb in init_feats] # (B, 1, 1, 1) or scalar - self.grams = [gram_matrix(rgb / std) for rgb, std in zip(init_feats, self.stds)] - - def forward(self, rgbs: List[torch.Tensor], level: int = None): - if level is None: - level = len(self.grams) - - feats = rgbs - loss = 0 - for i, (rgb, std) in enumerate(zip(feats[:level], self.stds[:level])): - G = gram_matrix(rgb / std) - loss = loss + smooth_l1_loss(G, self.grams[i]) - - return loss - diff --git a/spaces/exbert-project/exbert/server/model_api.py b/spaces/exbert-project/exbert/server/model_api.py deleted file mode 100644 index f3fea31468796a4ce612e15eaa77ea5eb570212b..0000000000000000000000000000000000000000 --- a/spaces/exbert-project/exbert/server/model_api.py +++ /dev/null @@ -1,164 +0,0 @@ -from typing import List, Union, Tuple - -import torch -from transformers import AutoConfig, AutoTokenizer, AutoModelWithLMHead, AutoModel - -from transformer_formatter import TransformerOutputFormatter -from utils.f import delegates, pick, memoize - -@memoize -def get_details(mname): - return ModelDetails(mname) - -def get_model_tok(mname): - conf = AutoConfig.from_pretrained(mname, output_attentions=True, output_past=False) - tok = AutoTokenizer.from_pretrained(mname, config=conf) - model = AutoModelWithLMHead.from_pretrained(mname, config=conf) - return model, tok - -class ModelDetails: - """Wraps a transformer model and tokenizer to prepare inputs to the frontend visualization""" - def __init__(self, mname): - self.mname = mname - self.model, self.tok = get_model_tok(self.mname) - self.model.eval() - self.config = self.model.config - - def from_sentence(self, sentence: str) -> TransformerOutputFormatter: - """Get attentions and word probabilities from a sentence. Special tokens are automatically added if a sentence is passed. - - Args: - sentence: The input sentence to tokenize and analyze. 
- """ - tokens = self.tok.tokenize(sentence) - - return self.from_tokens(tokens, sentence, add_special_tokens=True) - - def from_tokens( - self, tokens: List[str], orig_sentence:str, add_special_tokens:bool=False, mask_attentions:bool=False, topk:int=5 - ) -> TransformerOutputFormatter: - """Get formatted attention and predictions from a list of tokens. - - Args: - tokens: Tokens to analyze - orig_sentence: The sentence the tokens came from (needed to help organize the output) - add_special_tokens: Whether to add special tokens like CLS / <|endoftext|> to the tokens. - If False, assume the tokens already have the special tokens - mask_attentions: If True, do not pay attention to attention patterns to special tokens through the model. - topk: How many top predictions to report - """ - ids = self.tok.convert_tokens_to_ids(tokens) - - # For GPT2, add the beginning of sentence token to the input. Note that this will work on all models but XLM - bost = self.tok.bos_token_id - clst = self.tok.cls_token_id - sept = self.tok.sep_token_id - if (bost is not None) and (bost != clst)and add_special_tokens: - ids.insert(0, bost) - - inputs = self.tok.prepare_for_model(ids, add_special_tokens=add_special_tokens, return_tensors="pt") - parsed_input = self.parse_inputs(inputs, mask_attentions=mask_attentions) - output = self.model(parsed_input['input_ids'], attention_mask=parsed_input['attention_mask']) - - logits, atts = self.choose_logits_att(output) - words, probs = self.logits2words(logits, topk) - tokens = self.view_ids(inputs["input_ids"]) - - formatted_output = TransformerOutputFormatter( - orig_sentence, - tokens, - inputs["special_tokens_mask"], - atts, - words, - probs.tolist(), - self.config - ) - - return formatted_output - - def choose_logits_att(self, out:Tuple) -> Tuple: - """Select from the model's output the logits and the attentions, switching on model name - - Args: - out: Output from the model's forward pass - - Returns: - (logits: tensor((bs, N)), attentions: Tuple[tensor(())]) - """ - if 't5' in self.mname: - logits, _, atts = out - else: - logits, atts = out - - return logits, atts - - def logits2words(self, logits, topk): - """Convert logit probabilities into words from the tokenizer's vocabulary. - - """ - probs, idxs = torch.topk(torch.softmax(logits.squeeze(0), 1), topk) - words = [self.tok.convert_ids_to_tokens(i) for i in idxs] - return words, probs - - def view_ids(self, ids: Union[List[int], torch.Tensor]) -> List[str]: - """View what the tokenizer thinks certain ids are for a single input""" - if type(ids) == torch.Tensor: - # Remove batch dimension - ids = ids.squeeze(0).tolist() - - out = self.tok.convert_ids_to_tokens(ids) - return out - - def parse_inputs(self, inputs, mask_attentions=False): - """Parse the output from `tokenizer.prepare_for_model` to the desired attention mask from special tokens - - Args: - - inputs: The output of `tokenizer.prepare_for_model`. - A dict with keys: {'special_token_mask', 'token_type_ids', 'input_ids'} - - mask_attentions: Flag indicating whether to mask the attentions or not - - Returns: - Dict with keys: {'input_ids', 'token_type_ids', 'attention_mask', 'special_tokens_mask'} - - Usage: - - ``` - s = "test sentence" - - # from raw sentence to tokens - tokens = tokenizer.tokenize(s) - - # From tokens to ids - ids = tokenizer.convert_tokens_to_ids(tokens) - - # From ids to input - inputs = tokenizer.prepare_for_model(ids, return_tensors='pt') - - # Parse the input. Optionally mask the special tokens from the analysis. 
- parsed_input = parse_inputs(inputs) - - # Run the model, pick from this output whatever inputs you want - from utils.f import pick - out = model(**pick(['input_ids'], parse_inputs(inputs))) - ``` - """ - - out = inputs.copy() - - # DEFINE SPECIAL TOKENS MASK - if "special_tokens_mask" not in inputs.keys(): - special_tokens = set([self.tok.unk_token_id, self.tok.cls_token_id, self.tok.sep_token_id, self.tok.bos_token_id, self.tok.eos_token_id, self.tok.pad_token_id]) - in_ids = inputs['input_ids'][0] - special_tok_mask = [1 if int(i) in special_tokens else 0 for i in in_ids] - inputs['special_tokens_mask'] = special_tok_mask - - if mask_attentions: - out["attention_mask"] = torch.tensor( - [int(not i) for i in inputs.get("special_tokens_mask")] - ).unsqueeze(0) - else: - out["attention_mask"] = torch.tensor( - [1 for i in inputs.get("special_tokens_mask")] - ).unsqueeze(0) - - return out \ No newline at end of file diff --git a/spaces/facebook/MusicGen/audiocraft/modules/conv.py b/spaces/facebook/MusicGen/audiocraft/modules/conv.py deleted file mode 100644 index d115cbf8729b642ed78608bd00a4d0fd5afae6fd..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/modules/conv.py +++ /dev/null @@ -1,243 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp -import warnings - -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.utils import spectral_norm, weight_norm - - -CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm', - 'time_group_norm']) - - -def apply_parametrization_norm(module: nn.Module, norm: str = 'none'): - assert norm in CONV_NORMALIZATIONS - if norm == 'weight_norm': - return weight_norm(module) - elif norm == 'spectral_norm': - return spectral_norm(module) - else: - # We already check was in CONV_NORMALIZATION, so any other choice - # doesn't need reparametrization. - return module - - -def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs): - """Return the proper normalization module. If causal is True, this will ensure the returned - module is causal, or return an error if the normalization doesn't support causal evaluation. - """ - assert norm in CONV_NORMALIZATIONS - if norm == 'time_group_norm': - if causal: - raise ValueError("GroupNorm doesn't support causal evaluation.") - assert isinstance(module, nn.modules.conv._ConvNd) - return nn.GroupNorm(1, module.out_channels, **norm_kwargs) - else: - return nn.Identity() - - -def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, - padding_total: int = 0) -> int: - """See `pad_for_conv1d`.""" - length = x.shape[-1] - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length - length - - -def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0): - """Pad for a convolution to make sure that the last window is full. - Extra padding is added at the end. This is required to ensure that we can rebuild - an output of the same length, as otherwise, even with padding, some time steps - might get removed. 
- For instance, with total padding = 4, kernel size = 4, stride = 2: - 0 0 1 2 3 4 5 0 0 # (0s are padding) - 1 2 3 # (output frames of a convolution, last 0 is never used) - 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding) - 1 2 3 4 # once you removed padding, we are missing one time step ! - """ - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - return F.pad(x, (0, extra_padding)) - - -def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.): - """Tiny wrapper around F.pad, just to allow for reflect padding on small input. - If this is the case, we insert extra 0 padding to the right before the reflection happen. - """ - length = x.shape[-1] - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - if mode == 'reflect': - max_pad = max(padding_left, padding_right) - extra_pad = 0 - if length <= max_pad: - extra_pad = max_pad - length + 1 - x = F.pad(x, (0, extra_pad)) - padded = F.pad(x, paddings, mode, value) - end = padded.shape[-1] - extra_pad - return padded[..., :end] - else: - return F.pad(x, paddings, mode, value) - - -def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]): - """Remove padding from x, handling properly zero padding. Only for 1d!""" - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - assert (padding_left + padding_right) <= x.shape[-1] - end = x.shape[-1] - padding_right - return x[..., padding_left: end] - - -class NormConv1d(nn.Module): - """Wrapper around Conv1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConv2d(nn.Module): - """Wrapper around Conv2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConvTranspose1d(nn.Module): - """Wrapper around ConvTranspose1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class NormConvTranspose2d(nn.Module): - """Wrapper around ConvTranspose2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. 
- """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs) - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class StreamableConv1d(nn.Module): - """Conv1d with some builtin handling of asymmetric or causal padding - and normalization. - """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, dilation: int = 1, - groups: int = 1, bias: bool = True, causal: bool = False, - norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, - pad_mode: str = 'reflect'): - super().__init__() - # warn user on unusual setup between dilation and stride - if stride > 1 and dilation > 1: - warnings.warn("StreamableConv1d has been initialized with stride > 1 and dilation > 1" - f" (kernel_size={kernel_size} stride={stride}, dilation={dilation}).") - self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride, - dilation=dilation, groups=groups, bias=bias, causal=causal, - norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.pad_mode = pad_mode - - def forward(self, x): - B, C, T = x.shape - kernel_size = self.conv.conv.kernel_size[0] - stride = self.conv.conv.stride[0] - dilation = self.conv.conv.dilation[0] - kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations - padding_total = kernel_size - stride - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - if self.causal: - # Left padding for causal - x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode) - return self.conv(x) - - -class StreamableConvTranspose1d(nn.Module): - """ConvTranspose1d with some builtin handling of asymmetric or causal padding - and normalization. - """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, causal: bool = False, - norm: str = 'none', trim_right_ratio: float = 1., - norm_kwargs: tp.Dict[str, tp.Any] = {}): - super().__init__() - self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride, - causal=causal, norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.trim_right_ratio = trim_right_ratio - assert self.causal or self.trim_right_ratio == 1., \ - "`trim_right_ratio` != 1.0 only makes sense for causal convolutions" - assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1. - - def forward(self, x): - kernel_size = self.convtr.convtr.kernel_size[0] - stride = self.convtr.convtr.stride[0] - padding_total = kernel_size - stride - - y = self.convtr(x) - - # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be - # removed at the very end, when keeping only the right length for the output, - # as removing it here would require also passing the length at the matching layer - # in the encoder. 
- if self.causal: - # Trim the padding on the right according to the specified ratio - # if trim_right_ratio = 1.0, trim everything from right - padding_right = math.ceil(padding_total * self.trim_right_ratio) - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - return y diff --git a/spaces/facebook/StyleNeRF/training/facial_recognition/model_irse.py b/spaces/facebook/StyleNeRF/training/facial_recognition/model_irse.py deleted file mode 100644 index 48f3dc128f0ba7bfe49ae43a65f8922786a236b2..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/training/facial_recognition/model_irse.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from training.facial_recognition.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git 
a/spaces/failfast/2D-GameCreator/src/pages/api/generate.ts b/spaces/failfast/2D-GameCreator/src/pages/api/generate.ts deleted file mode 100644 index ea49f617169528c37002c0d035081349efd3ff6a..0000000000000000000000000000000000000000 --- a/spaces/failfast/2D-GameCreator/src/pages/api/generate.ts +++ /dev/null @@ -1,24 +0,0 @@ -import { toOpenAI } from "@/services/api"; -import { NextApiRequest, NextApiResponse } from "next"; -import { AxiosError } from "axios"; -import process from "node:process"; -import { createClient } from "@/services/api/openai"; -import { CustomAxiosError } from "@/services/api/axios"; - -export default async function handler(request: NextApiRequest, response: NextApiResponse) { - switch (request.method) { - case "POST": - try { - const client = createClient(process.env.OPENAI_API_KEY as string); - - const answer = await toOpenAI({ ...request.body, client }); - return response.status(200).json(answer); - } catch (error) { - return response.status((error as AxiosError).status ?? 500).json({ - message: (error as CustomAxiosError).data?.error?.message ?? "UNKNOWN", - }); - } - default: - return response.status(405).json({}); - } -} diff --git a/spaces/faizhalas/coconut/pages/2 Topic Modeling.py b/spaces/faizhalas/coconut/pages/2 Topic Modeling.py deleted file mode 100644 index d4fdd593731f1b6e9c4d209c7cddd298b470a838..0000000000000000000000000000000000000000 --- a/spaces/faizhalas/coconut/pages/2 Topic Modeling.py +++ /dev/null @@ -1,436 +0,0 @@ -#import module -import streamlit as st -import pandas as pd -import numpy as np -import re -import nltk -nltk.download('wordnet') -from nltk.stem import WordNetLemmatizer -nltk.download('stopwords') -from nltk.corpus import stopwords -import gensim -import gensim.corpora as corpora -from gensim.corpora import Dictionary -from gensim.models.coherencemodel import CoherenceModel -from gensim.models.ldamodel import LdaModel -from pprint import pprint -import pickle -import pyLDAvis -import pyLDAvis.gensim_models as gensimvis -import streamlit.components.v1 as components -from io import StringIO -from ipywidgets.embed import embed_minimal_html -from nltk.stem.snowball import SnowballStemmer -from bertopic import BERTopic -import plotly.express as px -from sklearn.cluster import KMeans -import bitermplus as btm -import tmplot as tmp -import tomotopy -import sys -import spacy -import en_core_web_sm -import pipeline -from html2image import Html2Image -from umap import UMAP -import os -import time - - -#===config=== -st.set_page_config( - page_title="Coconut", - page_icon="🥥", - layout="wide" -) -st.header("Topic Modeling") -hide_streamlit_style = """ - - """ -st.markdown(hide_streamlit_style, unsafe_allow_html=True) - -st.subheader('Put your file here...') - -#========unique id======== -@st.cache_resource(ttl=3600) -def create_list(): - l = [1, 2, 3] - return l - -l = create_list() -first_list_value = l[0] -l[0] = first_list_value + 1 -uID = str(l[0]) - -@st.cache_data(ttl=3600) -def get_ext(uploaded_file): - extype = uID+uploaded_file.name - return extype - -#===clear cache=== - -def reset_biterm(): - try: - biterm_map.clear() - biterm_bar.clear() - except NameError: - biterm_topic.clear() - -def reset_all(): - st.cache_data.clear() - -#===avoiding deadlock=== -os.environ["TOKENIZERS_PARALLELISM"] = "false" - -#===clean csv=== -@st.cache_data(ttl=3600, show_spinner=False) -def clean_csv(extype): - try: - paper = papers.dropna(subset=['Abstract']) - except KeyError: - st.error('Error: Please check your Abstract column.') - sys.exit(1) - 
paper = paper[~paper.Abstract.str.contains("No abstract available")] - paper = paper[~paper.Abstract.str.contains("STRAIT")] - - #===mapping=== - paper['Abstract_pre'] = paper['Abstract'].map(lambda x: re.sub('[,:;\.!-?•=]', ' ', x)) - paper['Abstract_pre'] = paper['Abstract_pre'].map(lambda x: x.lower()) - paper['Abstract_pre'] = paper['Abstract_pre'].map(lambda x: re.sub('©.*', '', x)) - paper['Abstract_pre'] = paper['Abstract_pre'].str.replace('\u201c|\u201d', '', regex=True) - - #===stopword removal=== - stop = stopwords.words('english') - paper['Abstract_stop'] = paper['Abstract_pre'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop)])) - - #===lemmatize=== - lemmatizer = WordNetLemmatizer() - def lemmatize_words(text): - words = text.split() - words = [lemmatizer.lemmatize(word) for word in words] - return ' '.join(words) - paper['Abstract_lem'] = paper['Abstract_stop'].apply(lemmatize_words) - - words_rmv = [word.strip() for word in words_to_remove.split(";")] - remove_dict = {word: None for word in words_rmv} - def remove_words(text): - words = text.split() - cleaned_words = [word for word in words if word not in remove_dict] - return ' '.join(cleaned_words) - paper['Abstract_lem'] = paper['Abstract_lem'].map(remove_words) - - topic_abs = paper.Abstract_lem.values.tolist() - return topic_abs, paper - -#===upload file=== -@st.cache_data(ttl=3600) -def upload(file): - papers = pd.read_csv(uploaded_file) - return papers - -@st.cache_data(ttl=3600) -def conv_txt(extype): - col_dict = {'TI': 'Title', - 'SO': 'Source title', - 'DT': 'Document Type', - 'AB': 'Abstract', - 'PY': 'Year'} - papers = pd.read_csv(uploaded_file, sep='\t', lineterminator='\r') - papers.rename(columns=col_dict, inplace=True) - return papers - - -#===Read data=== -uploaded_file = st.file_uploader("Choose a file", type=['csv', 'txt'], on_change=reset_all) - -if uploaded_file is not None: - extype = get_ext(uploaded_file) - - if extype.endswith('.csv'): - papers = upload(extype) - elif extype.endswith('.txt'): - papers = conv_txt(extype) - - c1, c2, c3 = st.columns([3,2,5]) - method = c1.selectbox( - 'Choose method', - ('Choose...', 'pyLDA', 'Biterm', 'BERTopic'), on_change=reset_all) - num_cho = c2.number_input('Choose number of topics', min_value=2, max_value=30, value=5) - words_to_remove = c3.text_input("Remove specific words. 
Separate words by semicolons (;)") - - d1, d2 = st.columns([8,2]) - d2.info("Don't do anything during the computing", icon="⚠️") - topic_abs, paper=clean_csv(extype) - - #===advance settings=== - with d1.expander("🧮 Show advance settings"): - t1, t2 = st.columns([5,5]) - if method == 'pyLDA': - py_random_state = t1.number_input('Random state', min_value=0, max_value=None, step=1) - py_chunksize = t2.number_input('Chunk size', value=100 , min_value=10, max_value=None, step=1) - elif method == 'Biterm': - btm_seed = t1.number_input('Random state seed', value=100 , min_value=1, max_value=None, step=1) - btm_iterations = t2.number_input('Iterations number', value=20 , min_value=2, max_value=None, step=1) - elif method == 'BERTopic': - bert_top_n_words = t1.number_input('top_n_words', value=5 , min_value=5, max_value=25, step=1) - bert_random_state = t1.number_input('random_state', value=42 , min_value=1, max_value=None, step=1) - bert_n_components = t2.number_input('n_components', value=5 , min_value=1, max_value=None, step=1) - bert_n_neighbors = t2.number_input('n_neighbors', value=15 , min_value=1, max_value=None, step=1) - bert_embedding_model = st.radio( - "embedding_model", - ["all-MiniLM-L6-v2", "en_core_web_sm", "paraphrase-multilingual-MiniLM-L12-v2"], index=0, horizontal=True) - else: - st.write('Please choose your preferred method') - if st.button("Submit", on_click=reset_all): - num_topic = num_cho - - #===topic=== - if method == 'Choose...': - st.write('') - - elif method == 'pyLDA': - tab1, tab2, tab3 = st.tabs(["📈 Generate visualization", "📃 Reference", "📓 Recommended Reading"]) - - with tab1: - #===visualization=== - @st.cache_data(ttl=3600, show_spinner=False) - def pylda(extype): - topic_abs_LDA = [t.split(' ') for t in topic_abs] - id2word = Dictionary(topic_abs_LDA) - corpus = [id2word.doc2bow(text) for text in topic_abs_LDA] - #===LDA=== - lda_model = LdaModel(corpus=corpus, - id2word=id2word, - num_topics=num_topic, - random_state=py_random_state, - chunksize=py_chunksize, - alpha='auto', - per_word_topics=True) - - pprint(lda_model.print_topics()) - doc_lda = lda_model[corpus] - - #===visualization=== - coherence_model_lda = CoherenceModel(model=lda_model, texts=topic_abs_LDA, dictionary=id2word, coherence='c_v') - coherence_lda = coherence_model_lda.get_coherence() - vis = pyLDAvis.gensim_models.prepare(lda_model, corpus, id2word) - py_lda_vis_html = pyLDAvis.prepared_data_to_html(vis) - return py_lda_vis_html, coherence_lda, vis - - with st.spinner('Performing computations. Please wait ...'): - try: - py_lda_vis_html, coherence_lda, vis = pylda(extype) - st.write('Coherence score: ', coherence_lda) - st.components.v1.html(py_lda_vis_html, width=1500, height=800) - st.markdown('Copyright (c) 2015, Ben Mabey. https://github.com/bmabey/pyLDAvis') - - @st.cache_data(ttl=3600, show_spinner=False) - def img_lda(vis): - pyLDAvis.save_html(vis, 'output.html') - hti = Html2Image() - hti.browser.flags = ['--default-background-color=ffffff', '--hide-scrollbars'] - css = "body {background: white;}" - hti.screenshot( - other_file='output.html', css_str=css, size=(1500, 800), - save_as='ldavis_img.png' - ) - - img_lda(vis) - with open("ldavis_img.png", "rb") as file: - btn = st.download_button( - label="Download image", - data=file, - file_name="ldavis_img.png", - mime="image/png" - ) - - except NameError: - st.warning('🖱️ Please click Submit') - - with tab2: - st.markdown('**Sievert, C., & Shirley, K. (2014). LDAvis: A method for visualizing and interpreting topics. 
Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces.** https://doi.org/10.3115/v1/w14-3110') - - with tab3: - st.markdown('**Chen, X., & Wang, H. (2019, January). Automated chat transcript analysis using topic modeling for library reference services. Proceedings of the Association for Information Science and Technology, 56(1), 368–371.** https://doi.org/10.1002/pra2.31') - st.markdown('**Joo, S., Ingram, E., & Cahill, M. (2021, December 15). Exploring Topics and Genres in Storytime Books: A Text Mining Approach. Evidence Based Library and Information Practice, 16(4), 41–62.** https://doi.org/10.18438/eblip29963') - st.markdown('**Lamba, M., & Madhusudhan, M. (2021, July 31). Topic Modeling. Text Mining for Information Professionals, 105–137.** https://doi.org/10.1007/978-3-030-85085-2_4') - st.markdown('**Lamba, M., & Madhusudhan, M. (2019, June 7). Mapping of topics in DESIDOC Journal of Library and Information Technology, India: a study. Scientometrics, 120(2), 477–505.** https://doi.org/10.1007/s11192-019-03137-5') - - #===Biterm=== - elif method == 'Biterm': - - #===optimize Biterm=== - @st.cache_data(ttl=3600, show_spinner=False) - def biterm_topic(extype): - X, vocabulary, vocab_dict = btm.get_words_freqs(topic_abs) - tf = np.array(X.sum(axis=0)).ravel() - docs_vec = btm.get_vectorized_docs(topic_abs, vocabulary) - docs_lens = list(map(len, docs_vec)) - biterms = btm.get_biterms(docs_vec) - model = btm.BTM( - X, vocabulary, seed=btm_seed, T=num_topic, M=20, alpha=50/8, beta=0.01) - model.fit(biterms, iterations=btm_iterations) - p_zd = model.transform(docs_vec) - coherence = model.coherence_ - phi = tmp.get_phi(model) - topics_coords = tmp.prepare_coords(model) - totaltop = topics_coords.label.values.tolist() - perplexity = model.perplexity_ - return topics_coords, phi, totaltop, perplexity - - tab1, tab2, tab3 = st.tabs(["📈 Generate visualization", "📃 Reference", "📓 Recommended Reading"]) - with tab1: - try: - with st.spinner('Performing computations. Please wait ...'): - topics_coords, phi, totaltop, perplexity = biterm_topic(extype) - col1, col2 = st.columns([4,6]) - - @st.cache_data(ttl=3600) - def biterm_map(extype): - btmvis_coords = tmp.plot_scatter_topics(topics_coords, size_col='size', label_col='label', topic=numvis) - return btmvis_coords - - @st.cache_data(ttl=3600) - def biterm_bar(extype): - terms_probs = tmp.calc_terms_probs_ratio(phi, topic=numvis, lambda_=1) - btmvis_probs = tmp.plot_terms(terms_probs, font_size=12) - return btmvis_probs - - with col1: - st.write('Perplexity score: ', perplexity) - st.write('') - numvis = st.selectbox( - 'Choose topic', - (totaltop), on_change=reset_biterm) - btmvis_coords = biterm_map(extype) - st.altair_chart(btmvis_coords) - with col2: - btmvis_probs = biterm_bar(extype) - st.altair_chart(btmvis_probs, use_container_width=True) - - except ValueError: - st.error('🙇‍♂️ Please raise the number of topics and click submit') - except NameError: - st.warning('🖱️ Please click Submit') - - with tab2: - st.markdown('**Yan, X., Guo, J., Lan, Y., & Cheng, X. (2013, May 13). A biterm topic model for short texts. Proceedings of the 22nd International Conference on World Wide Web.** https://doi.org/10.1145/2488388.2488514') - with tab3: - st.markdown('**Cai, M., Shah, N., Li, J., Chen, W. H., Cuomo, R. E., Obradovich, N., & Mackey, T. K. (2020, August 26). Identification and characterization of tweets related to the 2015 Indiana HIV outbreak: A retrospective infoveillance study. 
PLOS ONE, 15(8), e0235150.** https://doi.org/10.1371/journal.pone.0235150') - st.markdown('**Chen, Y., Dong, T., Ban, Q., & Li, Y. (2021). What Concerns Consumers about Hypertension? A Comparison between the Online Health Community and the Q&A Forum. International Journal of Computational Intelligence Systems, 14(1), 734.** https://doi.org/10.2991/ijcis.d.210203.002') - st.markdown('**George, Crissandra J., "AMBIGUOUS APPALACHIANNESS: A LINGUISTIC AND PERCEPTUAL INVESTIGATION INTO ARC-LABELED PENNSYLVANIA COUNTIES" (2022). Theses and Dissertations-- Linguistics. 48.** https://doi.org/10.13023/etd.2022.217') - st.markdown('**Li, J., Chen, W. H., Xu, Q., Shah, N., Kohler, J. C., & Mackey, T. K. (2020). Detection of self-reported experiences with corruption on twitter using unsupervised machine learning. Social Sciences & Humanities Open, 2(1), 100060.** https://doi.org/10.1016/j.ssaho.2020.100060') - - #===BERTopic=== - elif method == 'BERTopic': - @st.cache_data(ttl=3600, show_spinner=False) - def bertopic_vis(extype): - if 'Publication Year' in paper.columns: - paper.rename(columns={'Publication Year': 'Year'}, inplace=True) - topic_time = paper.Year.values.tolist() - umap_model = UMAP(n_neighbors=bert_n_neighbors, n_components=bert_n_components, - min_dist=0.0, metric='cosine', random_state=bert_random_state) - cluster_model = KMeans(n_clusters=num_topic) - if bert_embedding_model == 'all-MiniLM-L6-v2': - emb_mod = 'all-MiniLM-L6-v2' - lang = 'en' - elif bert_embedding_model == 'en_core_web_sm': - emb_mod = en_core_web_sm.load(exclude=['tagger', 'parser', 'ner', 'attribute_ruler', 'lemmatizer']) - lang = 'en' - elif bert_embedding_model == 'paraphrase-multilingual-MiniLM-L12-v2': - emb_mod = 'paraphrase-multilingual-MiniLM-L12-v2' - lang = 'multilingual' - topic_model = BERTopic(embedding_model=emb_mod, hdbscan_model=cluster_model, language=lang, umap_model=umap_model, top_n_words=bert_top_n_words) - topics, probs = topic_model.fit_transform(topic_abs) - return topic_model, topic_time, topics, probs - - @st.cache_data(ttl=3600, show_spinner=False) - def Vis_Topics(extype): - fig1 = topic_model.visualize_topics() - return fig1 - - @st.cache_data(ttl=3600, show_spinner=False) - def Vis_Documents(extype): - fig2 = topic_model.visualize_documents(topic_abs) - return fig2 - - @st.cache_data(ttl=3600, show_spinner=False) - def Vis_Hierarchy(extype): - fig3 = topic_model.visualize_hierarchy(top_n_topics=num_topic) - return fig3 - - @st.cache_data(ttl=3600, show_spinner=False) - def Vis_Heatmap(extype): - global topic_model - fig4 = topic_model.visualize_heatmap(n_clusters=num_topic-1, width=1000, height=1000) - return fig4 - - @st.cache_data(ttl=3600, show_spinner=False) - def Vis_Barchart(extype): - fig5 = topic_model.visualize_barchart(top_n_topics=num_topic) #, n_words=10) - return fig5 - - @st.cache_data(ttl=3600, show_spinner=False) - def Vis_ToT(extype): - topics_over_time = topic_model.topics_over_time(topic_abs, topic_time) - fig6 = topic_model.visualize_topics_over_time(topics_over_time) - return fig6 - - tab1, tab2, tab3 = st.tabs(["📈 Generate visualization", "📃 Reference", "📓 Recommended Reading"]) - with tab1: - try: - with st.spinner('Performing computations. 
Please wait ...'): - - topic_model, topic_time, topics, probs = bertopic_vis(extype) - time.sleep(.5) - st.toast('Visualize Topics', icon='🏃') - fig1 = Vis_Topics(extype) - - time.sleep(.5) - st.toast('Visualize Document', icon='🏃') - fig2 = Vis_Documents(extype) - - time.sleep(.5) - st.toast('Visualize Document Hierarchy', icon='🏃') - fig3 = Vis_Hierarchy(extype) - - time.sleep(.5) - st.toast('Visualize Topic Similarity', icon='🏃') - fig4 = Vis_Heatmap(extype) - - time.sleep(.5) - st.toast('Visualize Terms', icon='🏃') - fig5 = Vis_Barchart(extype) - - time.sleep(.5) - st.toast('Visualize Topics over Time', icon='🏃') - fig6 = Vis_ToT(extype) - - with st.expander("Visualize Topics"): - st.write(fig1) - with st.expander("Visualize Terms"): - st.write(fig5) - with st.expander("Visualize Documents"): - st.write(fig2) - with st.expander("Visualize Document Hierarchy"): - st.write(fig3) - with st.expander("Visualize Topic Similarity"): - st.write(fig4) - with st.expander("Visualize Topics over Time"): - st.write(fig6) - - except ValueError: - st.error('🙇‍♂️ Please raise the number of topics and click submit') - - except NameError: - st.warning('🖱️ Please click Submit') - - with tab2: - st.markdown('**Grootendorst, M. (2022). BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv preprint arXiv:2203.05794.** https://doi.org/10.48550/arXiv.2203.05794') - - with tab3: - st.markdown('**Jeet Rawat, A., Ghildiyal, S., & Dixit, A. K. (2022, December 1). Topic modelling of legal documents using NLP and bidirectional encoder representations from transformers. Indonesian Journal of Electrical Engineering and Computer Science, 28(3), 1749.** https://doi.org/10.11591/ijeecs.v28.i3.pp1749-1755') - st.markdown('**Yao, L. F., Ferawati, K., Liew, K., Wakamiya, S., & Aramaki, E. (2023, April 20). Disruptions in the Cystic Fibrosis Community’s Experiences and Concerns During the COVID-19 Pandemic: Topic Modeling and Time Series Analysis of Reddit Comments. Journal of Medical Internet Research, 25, e45249.** https://doi.org/10.2196/45249') \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/AerosoftCrackerV2.exel ((TOP)).md b/spaces/falterWliame/Face_Mask_Detection/AerosoftCrackerV2.exel ((TOP)).md deleted file mode 100644 index 9ecc7bd102c6fd7db524e8ce78171271fb207639..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/AerosoftCrackerV2.exel ((TOP)).md +++ /dev/null @@ -1,14 +0,0 @@ -

      AerosoftCrackerV2.exel


      Downloadhttps://urlca.com/2uDcrX



      -
      -August 28, 2013 — I use Excel (there are other good spreadsheets!) to fly 1500m in 2 minutes, 15,000m in 20 minutes? I'm not quite sure what I should be doing. -I have found several solutions but I am stuck. -Some people told me to try to climb 2000m to get the maximum height. -But I don't know how to do it! -I understand that I am using a table and that there are a few things to calculate in order to get the maximum height. -But I can't figure out how. -I found this solution: -I'm wondering if this is a useful solution for me? -I don't want to just rely on this decision. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/falterWliame/Face_Mask_Detection/ECA VRT DVD 2012.rar.md b/spaces/falterWliame/Face_Mask_Detection/ECA VRT DVD 2012.rar.md deleted file mode 100644 index a8bdcaeb8a8ff2fdfa60a9c8e171b3900cd2154d..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/ECA VRT DVD 2012.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

      ECA VRT DVD 2012.rar


      Download ––– https://urlca.com/2uDdzy



      - -Jul 15, 2014 Eca Vrt Dvd 2012 Rar > us/0fj6d teamviewer 8 full free download torrent autodesk autocad 2013 32bit crack download torrent. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/falterWliame/Face_Mask_Detection/Extreme Karaoke V3 Crack 34 __HOT__.md b/spaces/falterWliame/Face_Mask_Detection/Extreme Karaoke V3 Crack 34 __HOT__.md deleted file mode 100644 index 0425be99160e612aaa3c87a4e98022045dbe6b6a..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Extreme Karaoke V3 Crack 34 __HOT__.md +++ /dev/null @@ -1,6 +0,0 @@ -

      extreme karaoke v3 crack 34


      Download Zip ✑ ✑ ✑ https://urlca.com/2uDd0L



      - -.0K fps 459KB 54.9MB 5,487.0KB 0% 4fefd39f24
      -
      -
      -

      diff --git a/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Fraud Saiyaan Movie Download Dual Au).md b/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Fraud Saiyaan Movie Download Dual Au).md deleted file mode 100644 index a9563232a972ec41df76ebefce44da138690d1d4..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Fraud Saiyaan Movie Download Dual Au).md +++ /dev/null @@ -1,11 +0,0 @@ - -

download fantastic beasts 3 english audio as source file.flv in 480p, 720p & 1080p. this is a hollywood movie and available in 720p, 480p & 1080p qualities. this is one of the best movies based on drama and thriller. this movie is not available in hindi or dual audio. this is a web-dl print with dd5.1 english audio & esubs.

      -

      HD Online Player (Fraud Saiyaan Movie Download Dual Au)


      Download Zip ✸✸✸ https://urlca.com/2uDdYC



      -

      download gravity english audio as source file.flv in 480p, 720p & 1080p. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movies based on drama. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs.

      -

this is a hollywood movie and available in 720p, 480p & 1080p qualities. this is one of the best movies based on drama. this movie is not available in hindi or dual audio. this is a web-dl print with dd5.1 english audio & esubs. english subtitles added with english

      -

      download brilliant: the operative code-uhd 720p web-dl.download brilliant: the operative code movie uhd web-dl original.this is an english movie & available in 720p,480p & 1080p. this is one of the best movie based on action. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs. english subtitles added with english

      -

      -

      download civilization: beyond earth trailer 2022 english audio (720p 1080p ) online. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movies based on comedy. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs.

      -

      download age of glory movie 2022 english audio in 720p,480p &1080p. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movies based on fantasy. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Blast Your Way Through Enemies with Tank Hero Mod APK (Unlimited Money and Gold).md b/spaces/fatiXbelha/sd/Blast Your Way Through Enemies with Tank Hero Mod APK (Unlimited Money and Gold).md deleted file mode 100644 index 9750e7df225b957df4e90feed8722c064077b356..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Blast Your Way Through Enemies with Tank Hero Mod APK (Unlimited Money and Gold).md +++ /dev/null @@ -1,99 +0,0 @@ - - - | Table 2: Article with HTML formatting | |---------------------------------------| |

      Tank Hero Mod APK (Unlimited Money and Gold)

      | |

      Do you love playing tank games on your Android device? Do you want to have unlimited money and gold to upgrade your tanks and weapons? Do you want to enjoy various game modes, stunning graphics, and sound effects? If you answered yes to any of these questions, then you should download Tank Hero Mod APK right now!

      | |

      Introduction

      | |

      Tank Hero Mod APK is a modified version of the original Tank Hero game, which is a fast-paced 3D tank action game. In this game, you can control your own tank and shoot your enemies with different weapons. You can also customize your tank with various skins and decals. You can play in different game modes, such as campaign, survival, and multiplayer. You can also challenge yourself with different levels of difficulty and achievements.

      -

      tank hero mod apk (unlimited money and gold)


      DOWNLOAD >>> https://urllie.com/2uNDn0



      | |

      However, the original Tank Hero game has some limitations that may affect your gaming experience. For example, you need to earn money and gold by completing missions and defeating enemies. You need to use them to buy and upgrade your tanks and weapons. You may also encounter ads and in-app purchases that may interrupt your gameplay.

      | |

      That's why you need Tank Hero Mod APK, which is a hacked version of the original game that gives you unlimited money and gold. With this mod, you can buy and upgrade any tank and weapon you want without worrying about the cost. You can also enjoy the game without any ads or in-app purchases. You can also access all the features and content of the game without any restrictions.

      | |

      To download and install Tank Hero Mod APK, you need to follow these simple steps:

      | |
        -
      1. Click on the download button below to get the Tank Hero Mod APK file.
      2. -
      3. Go to your device settings and enable unknown sources to allow installation of apps from unknown sources.
      4. -
      5. Locate the downloaded file in your file manager and tap on it to install it.
      6. -
      7. Launch the game and enjoy unlimited money and gold.
      8. -
      | |

      Features of Tank Hero Mod APK

      | |

      Tank Hero Mod APK has many amazing features that make it one of the best tank games for Android. Here are some of them:

      | |

      Unlimited Money and Gold

      | |

      The most obvious feature of Tank Hero Mod APK is that it gives you unlimited money and gold. Money and gold are the main currencies in the game that you need to buy and upgrade your tanks and weapons. Normally, you have to earn them by completing missions and defeating enemies. However, with Tank Hero Mod APK, you can get unlimited money and gold as soon as you start the game. You can use them to buy any tank or weapon you want without worrying about the cost. You can also upgrade them to the maximum level without any hassle.

      | |

      This feature gives you a lot of advantages in the game. For example, you can have more powerful tanks and weapons that can destroy your enemies faster and easier. You can also have more variety and fun in choosing your tanks and weapons according to your preference. You can also save time and effort in grinding for money and gold.

      | |

      Various Tanks and Weapons

      | |

      Tank Hero Mod APK has a lot of tanks and weapons for you to choose from. There are over 50 tanks and over 100 weapons in the game, each with different stats, abilities, and effects. You can find tanks and weapons of different types, such as light, medium, heavy, artillery, rocket, laser, plasma, etc. You can also customize your tanks with various skins and decals to make them look more cool and unique.

      | |

      The differences between tanks and weapons are not only cosmetic but also functional. For example, some tanks have more speed, armor, or firepower than others. Some weapons have more range, accuracy, or damage than others. Some tanks and weapons also have special features, such as stealth, shield, or EMP. You have to consider these factors when choosing your tank and weapon for your play style.

      -

      tank hero mod apk download free unlimited money and gold
      -tank hero hack mod apk unlimited money and gold latest version
      -tank hero mod apk unlimited money and gold android 1
      -tank hero mod apk unlimited money and gold rexdl
      -tank hero mod apk unlimited money and gold no root
      -tank hero mod apk unlimited money and gold offline
      -tank hero mod apk unlimited money and gold 2023
      -tank hero mod apk unlimited money and gold revdl
      -tank hero mod apk unlimited money and gold happymod
      -tank hero mod apk unlimited money and gold apkpure
      -tank hero mod apk unlimited money and gold for pc
      -tank hero mod apk unlimited money and gold ios
      -tank hero mod apk unlimited money and gold online
      -tank hero mod apk unlimited money and gold obb
      -tank hero mod apk unlimited money and gold mediafıre
      -tank hero mod apk unlimited money and gold mega
      -tank hero mod apk unlimited money and gold uptodown
      -tank hero mod apk unlimited money and gold 1.8.0
      -tank hero mod apk unlimited money and gold 1.7.9
      -tank hero mod apk unlimited money and gold 1.7.8
      -tank hero mod apk unlimited money and gold 1.7.7
      -tank hero mod apk unlimited money and gold 1.7.6
      -tank hero mod apk unlimited money and gold 1.7.5
      -tank hero mod apk unlimited money and gold 1.7.4
      -tank hero mod apk unlimited money and gold 1.7.3
      -tank hero mod apk unlimited money and gold 1.7.2
      -tank hero mod apk unlimited money and gold 1.7.1
      -tank hero mod apk unlimited money and gold 1.7.0
      -tank hero mod apk unlimited money and gold 1.6.9
      -tank hero mod apk unlimited money and gold 1.6.8
      -tank hero mod apk unlimited money and gold 1.6.7
      -tank hero mod apk unlimited money and gold 1.6.6
      -tank hero mod apk unlimited money and gold 1.6.5
      -tank hero mod apk unlimited money and gold 1.6.4
      -tank hero mod apk unlimited money and gold 1.6.3
      -tank hero mod apk unlimited money and gold 1.6.2
      -tank hero mod apk unlimited money and gold 1.6.1
      -tank hero mod apk unlimited money and gold 1.6.0
      -how to install tank hero mod apk unlimited money and gold
      -how to play tank hero mod apk unlimited money and gold
      -how to get tank hero mod apk unlimited money and gold
      -how to update tank hero mod apk unlimited money and gold
      -how to uninstall tank hero mod apk unlimited money and gold
      -how to hack tank hero with mod apk unlimited money and gold
      -how to download tank hero with mod apk unlimited money and gold

      To choose the best tank and weapon for your play style, you have to experiment with different combinations and see what works best for you. You can also check the stats and descriptions of each tank and weapon in the shop or inventory menu. You can also read reviews and tips from other players online or watch videos of gameplay demonstrations.

      | |

      Multiple Game Modes

      -

      Tank Hero Mod

      Tank Hero Mod APK has multiple game modes for you to enjoy. You can play in different game modes, such as campaign, survival, and multiplayer. Each game mode has its own rules, objectives, and challenges. You can also choose the level of difficulty and the number of enemies in each game mode.

      -

      The campaign mode is the main mode of the game, where you have to complete various missions and stages. You have to fight against different types of enemies, such as tanks, helicopters, turrets, etc. You have to destroy them all and reach the end of each stage. You can also collect stars and medals by completing the missions with high scores and achievements. You can use the stars and medals to unlock new tanks and weapons.

      -

      The survival mode is the endless mode of the game, where you have to survive as long as possible against waves of enemies. You have to shoot and dodge the incoming enemies and avoid getting hit by their bullets and missiles. You can also collect power-ups and bonuses that can help you survive longer. You can also compete with other players on the global leaderboard and see how long you can last.

      -

      The multiplayer mode is the online mode of the game, where you can play with or against other players from around the world. You can join or create rooms in the multiplayer mode and choose the game mode, map, and settings. You can also chat with other players using the in-game chat feature. You can play in different modes, such as team deathmatch, capture the flag, king of the hill, etc. You can also cooperate or compete with other players and show your skills and strategies.

      -

      Stunning Graphics and Sound Effects

      -

      Tank Hero Mod APK has stunning graphics and sound effects that make the game more realistic and immersive. The game has 3D graphics that are well-designed and detailed. The game also has dynamic lighting and shadows that create a realistic atmosphere. The game also has smooth animations and transitions that make the gameplay more fluid and responsive.

      -

      The game also has sound effects that are clear and crisp. The game also has background music that is catchy and fitting for each game mode and situation. The game also has voice-overs that are expressive and humorous. The game also has sound settings that allow you to adjust the volume and quality of the sound effects, music, and voice-overs.

      -

      To enjoy the stunning graphics and sound effects of Tank Hero Mod APK, you need to have a device that meets the minimum requirements of the game. You also need to have a stable internet connection for the online mode. You can also adjust the graphics and sound settings in the game menu to suit your preference and device performance.

      -

      Conclusion

      -

      Tank Hero Mod APK is a great tank game for Android that offers unlimited money and gold, various tanks and weapons, multiple game modes, stunning graphics and sound effects, and more. It is a fun and addictive game that will keep you entertained for hours. It is also easy to download and install on your device.

      -

      If you are looking for a tank game that will challenge your skills and strategies, then you should download Tank Hero Mod APK now. It is one of the best tank games for Android that you will ever play. It is a modded version of the original Tank Hero game that gives you more features and content than ever before.

      -

      So what are you waiting for? Download Tank Hero Mod APK now and enjoy unlimited money and gold, various tanks and weapons, multiple game modes, stunning graphics and sound effects, and more!

      -

      FAQs

      -
        -
      • Q1: Is Tank Hero Mod APK safe to download and use?
      • -
      • A1: Yes, Tank Hero Mod APK is safe to download and use. It does not contain any viruses or malware. However, you should always download it from a trusted source and scan it with an antivirus before installing it.
      • -
      • Q2: Do I need to root my device to use Tank Hero Mod APK?
      • -
      • A2: No, you do not need to root your device to use Tank Hero Mod APK. It works on both rooted and non-rooted devices. However, you may need to enable unknown sources in your device settings to install it.
      • -
      • Q3: Can I play Tank Hero Mod APK online with other players?
      • -
      • A3: Yes, you can play Tank Hero Mod APK online with other players. You can join or create rooms in the multiplayer mode and compete with other players from around the world. You can also chat with them using the in-game chat feature.
      • -
      • Q4: How can I update Tank Hero Mod APK to the latest version?
      • -
      • A4: You can update Tank Hero Mod APK to the latest version by downloading it again from the same source where you downloaded it before. You do not need to uninstall the previous version before installing the new one. Your progress and data will be saved automatically.
      • -
      • Q5: What if I have any problems or questions about Tank Hero Mod APK?
      • -
      • A5: If you have any problems or questions about Tank Hero Mod APK, you can contact the developer through their email or social media accounts. You can also visit their official website or blog for more information and support.
      • -
      -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy Vintage Photos and Videos with Old Roll Mod APK.md b/spaces/fatiXbelha/sd/Enjoy Vintage Photos and Videos with Old Roll Mod APK.md deleted file mode 100644 index 7484b01d7f6a3062eacbd25a2f7586d02a9b0804..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy Vintage Photos and Videos with Old Roll Mod APK.md +++ /dev/null @@ -1,122 +0,0 @@ -
      -

      Download APK Mod Old Roll: How to Get Vintage Photos and Videos on Your Android Device

      -

      Do you love the nostalgic look of old photos and videos? Do you want to capture your memories in a retro style? If yes, then you might be interested in downloading APK Mod Old Roll, a photography app that lets you take and edit photos and videos in a vintage, classic style. In this article, we will tell you what APK Mod Old Roll is, what features it offers, what are the benefits of using it, and how to install it on your Android device. We will also give you some tips on how to find the best APK mod sites to download this app and other modded apps for free.

      -

      download apk mod old roll


      Download File 🆗 https://urllie.com/2uNH2z



      -

      What is APK Mod Old Roll?

      -

      APK Mod Old Roll is a modified version of the original Old Roll app, which is a photography app that allows you to take and edit photos and videos in a vintage, classic style. The app has various filters, effects, stickers, frames, and fonts that you can apply to your photos and videos to make them look like they were taken decades ago. You can also adjust the brightness, contrast, saturation, exposure, and other settings to enhance your photos and videos.

      -

      Features of APK Mod Old Roll

      -

      The modded version of Old Roll has some extra features that are not available in the original version. These include:

      -
        -
      • Unlocked all cameras, including Polaroid, Kodak, Fuji, Agfa, and more
      • -
      • Premium version unlocked, which gives you access to more filters, effects, stickers, frames, and fonts
      • -
      • No ads or watermarks
      • -
      • No root required
      • -
      -

      Benefits of using APK Mod Old Roll

      -

      By using APK Mod Old Roll, you can enjoy the following benefits:

      -
        -
      • You can save money by getting the premium features for free
      • -
      • You can have more creative options by using different cameras, filters, effects, stickers, frames, and fonts
      • -
      • You can have more fun by creating unique and artistic photos and videos
      • -
      • You can share your photos and videos with your friends and family on social media platforms
      • -
      -

      What is APK Mod and How Does It Work?

      -

      Before we tell you how to install APK Mod Old Roll on your Android device, let us explain what APK mod is and how it works. APK mod is a modified version of an original Android app that has been altered by someone to provide new or improved features that are not present in the original version. An Android app is packaged into a file that has an extension named .APK, which contains all the elements of the app and can be installed on an Android device.
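As a brief aside that is not part of the original article: the claim that an .APK file "contains all the elements of the app" is easy to check for yourself, because an APK is simply a ZIP archive. A minimal Python sketch (the file name below is a placeholder, not a real download) could list what such an archive bundles:

```python
# Illustrative sketch only (not from the original article): an .APK is a
# ZIP archive, so listing its entries shows what the app package bundles.
import zipfile

apk_path = "old-roll-mod.apk"  # placeholder name for a downloaded APK

with zipfile.ZipFile(apk_path) as apk:
    for name in apk.namelist()[:10]:  # manifest, classes.dex, resources, ...
        print(name)
```

Whatever tool you use to open it, this archive is exactly what the Android installer unpacks onto your device.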

      -

      download old roll premium apk mod
      -old roll mod apk free download
      -how to download old roll mod apk
      -download old roll mod apk latest version
      -old roll mod apk download for android
      -old roll camera mod apk download
      -download old roll mod apk 2023
      -old roll photo editor mod apk download
      -download old roll pro mod apk
      -old roll video editor mod apk download
      -download old roll full unlocked mod apk
      -old roll vintage camera mod apk download
      -download old roll hack mod apk
      -old roll retro camera mod apk download
      -download old roll cracked mod apk
      -old roll film camera mod apk download
      -download old roll vip mod apk
      -old roll 1998 cam mod apk download
      -download old roll plus mod apk
      -old roll disposable camera mod apk download
      -download old roll gold mod apk
      -old roll analog camera mod apk download
      -download old roll ultimate mod apk
      -old roll vhs camcorder mod apk download
      -download old roll diamond mod apk
      -old roll polaroid camera mod apk download
      -download old roll elite mod apk
      -old roll 35mm film camera mod apk download
      -download old roll ad free mod apk
      -old roll instax camera mod apk download

      -

      Advantages and Disadvantages of APK Mod

      -

      Using APK mod has some advantages and disadvantages that you should be aware of. Here are some of them:

      - - - - - - - - - - - - - - - - - - - - - -
      AdvantagesDisadvantages
      You can get premium features for freeYou may violate the intellectual property rights of the original developers
      You can access new or improved features that are not available in the original versionYou may expose your device to malware or virus infections
      You can customize the app according to your preferencesYou may encounter compatibility or stability issues with your device or other apps
      You can bypass the restrictions or limitations imposed by the original developers or the app storeYou may lose the support or updates from the original developers or the app store
      -

      Risks and Precautions of APK Mod

      -

      Using APK mod also involves some risks and precautions that you should take into account. Here are some of them:

      -
        -
      • Make sure you download the APK mod file from a trusted and reliable source, such as the ones we will recommend later in this article. Avoid downloading from unknown or suspicious websites that may contain malware or virus.
      • -
      • Check the permissions and reviews of the APK mod file before installing it on your device. Make sure you are comfortable with granting the app access to your device's data and functions. Read the feedback and ratings of other users who have used the app to see if they have encountered any problems or issues.
      • -
      • Backup your device's data and settings before installing the APK mod file. In case something goes wrong or you want to uninstall the app, you can restore your device to its previous state.
      • -
      • Use a VPN or a proxy server to hide your IP address and location when downloading or using the APK mod file. This can help you avoid being tracked or detected by the original developers or the app store, who may ban your account or take legal action against you for violating their terms and conditions.
      • -
      -

      How to Install APK Mod Old Roll on Your Android Device

      -

      Now that you know what APK mod is and how it works, let us show you how to install APK Mod Old Roll on your Android device. The process is simple and easy, and it only takes a few minutes. Just follow these steps:

      -

      Step 1: Download the APK file from a trusted source

      -

      The first step is to download the APK file of APK Mod Old Roll from a trusted and reliable source. You can use one of the best APK mod sites that we will recommend later in this article, or you can search for it on Google or other search engines. Make sure you download the latest version of the app, which is 1.0.9 as of June 2023.

      -

      Step 2: Enable unknown sources on your device settings

      -

      The second step is to enable unknown sources on your device settings. This will allow you to install apps that are not from the official app store, such as APK mod files. To do this, go to your device's settings, then security, then unknown sources, and toggle it on. You may see a warning message that says installing apps from unknown sources may harm your device, but don't worry, as long as you download from a trusted source, you should be fine.

      -

      Step 3: Locate and install the APK file

      -

      The third step is to locate and install the APK file that you have downloaded on your device. You can use a file manager app to find the file in your downloads folder, or you can tap on the notification that says download complete. Once you find the file, tap on it and follow the instructions on the screen to install it on your device. You may see a pop-up message that says this type of file can harm your device, but just ignore it and tap on install anyway.
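As an optional aside that the guide itself does not cover: if you would rather install from a computer, an APK can usually be sideloaded over USB with adb instead of a file manager. A minimal Python wrapper might look like the sketch below (the file name is a placeholder, and USB debugging must be enabled on the phone):

```python
# Optional aside, not part of the original steps: sideload an APK from a
# computer with adb (Android Debug Bridge). Paths/names are placeholders.
import subprocess

apk_path = "old-roll-mod.apk"  # placeholder name for the downloaded file

# "-r" reinstalls the app while keeping its data if it is already present.
subprocess.run(["adb", "install", "-r", apk_path], check=True)
```

If adb is not available, installing directly on the phone with a file manager, as described above, works just as well.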

      -

      Step 4: Launch the app and enjoy

      -

      The fourth and final step is to launch the app and enjoy its features. You can find the app icon on your home screen or in your app drawer. Tap on it and grant it any permissions it may ask for. Then, you can start taking and editing photos and videos in a vintage, classic style with APK Mod Old Roll.

      -

      Best APK Mod Sites to Download APK Mod Old Roll and Other Apps

      -

      If you are looking for the best APK mod sites to download APK Mod Old Roll and other apps for free, here are some of our recommendations:

      -

      APKPure

      -

      APKPure is one of the most popular and trusted APK mod sites that offers a wide range of apps and games for Android devices. You can find both original and modded versions of apps and games, as well as exclusive apps that are not available on the official app store. You can also update your apps with one click, download region-locked apps, and request new apps or mods.

      -

      HappyMod

      -

      HappyMod is another popular and trusted APK mod site that specializes in providing modded versions of apps and games for Android devices. You can find thousands of mods for different categories, such as action, adventure, arcade, casual, simulation, sports, and more. You can also download multiple mods for the same app or game, and choose the one that suits your needs. You can also rate and review the mods, and request new mods or updates.

      -

      ReXdl

      -

      ReXdl is another popular and trusted APK mod site that offers a huge collection of apps and games for Android devices. You can find both original and modded versions of apps and games, as well as premium apps that are normally paid on the official app store. You can also download apps and games that are modded with unlimited money, coins, gems, lives, and other resources. You can also browse by categories, genres, or tags, and download fast and secure.

      -

      Apkmody

      -

      Apkmody is another popular and trusted APK mod site that provides high-quality apps and games for Android devices. You can find both original and modded versions of apps and games, as well as exclusive apps that are not available on the official app store. You can also download apps and games that are modded with unlimited money, coins, gems, lives, and other resources. You can also search by keywords, categories, or popularity, and download fast and safe.

      -

      Conclusion

      -

      In conclusion, APK Mod Old Roll is a photography app that lets you take and edit photos and videos in a vintage, classic style. It has various filters, effects, stickers, frames, and fonts that you can apply to your photos and videos to make them look like they were taken decades ago. You can also adjust the brightness, contrast, saturation, exposure, and other settings to enhance your photos and videos. The modded version of Old Roll has some extra features that are not available in the original version, such as unlocked all cameras, premium version unlocked, no ads or watermarks, and no root required. To install APK Mod Old Roll on your Android device, you need to download the APK file from a trusted source, enable unknown sources on your device settings, locate and install the APK file, and launch the app and enjoy. You can also use one of the best APK mod sites that we have recommended in this article to download APK Mod Old Roll and other apps for free.

      -

      FAQs

      -

      Here are some frequently asked questions about APK Mod Old Roll:

      -

      Q: Is APK Mod Old Roll safe to use?

      -

      A: APK Mod Old Roll is safe to use as long as you download it from a trusted source and follow the precautions we have mentioned in this article. However, there is always a risk of malware or virus infections when using APK mod files, so you should always scan the file before installing it on your device.

      -

      Q: Is APK Mod Old Roll legal to use?

      -

      A: APK Mod Old Roll is not legal to use as it violates the intellectual property rights of the original developers of Old Roll. By using APK Mod Old Roll, you may also breach the terms and conditions of the official app store or the original developers. Therefore, you should use APK Mod Old Roll at your own risk and responsibility.

      -

      Q: How can I update APK Mod Old Roll?

      -

      A: To update APK Mod Old Roll, you need to download the latest version of the APK file from a trusted source and install it on your device. You may also check the best APK mod sites that we have recommended in this article to see if they have updated versions of APK Mod Old Roll.

      -

      Q: How can I uninstall APK Mod Old Roll?

      -

      A: To uninstall APK Mod Old Roll, you need to go to your device's settings, then apps, then APK Mod Old Roll, and tap on uninstall. You may also use a file manager app to find and delete the APK file from your device. You may also want to backup your photos and videos before uninstalling the app, as they may be deleted along with the app.
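      If the app does not show up in your device's app list, another option is to remove it from a computer over adb (Android Debug Bridge). The sketch below is only an illustration and assumes adb is installed and USB debugging is enabled; the package name is a placeholder, which you would first look up with `adb shell pm list packages`.

```python
import subprocess

# Placeholder package id - list installed packages first and pick the right one:
#   adb shell pm list packages
PACKAGE = "com.example.oldroll"  # hypothetical, replace with the real package id

# Equivalent to running: adb uninstall <package>
result = subprocess.run(["adb", "uninstall", PACKAGE], capture_output=True, text=True)
print(result.stdout.strip() or result.stderr.strip())
```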

      -

      Q: Can I use APK Mod Old Roll on other devices?

      -

      A: APK Mod Old Roll is designed for Android devices, so you may not be able to use it on other devices, such as iOS, Windows, or Mac. However, you may try using an Android emulator or a virtual machine to run APK Mod Old Roll on your PC or laptop.

      -
      -
      \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/summary/randeng_t5_784M_summary.sh b/spaces/fclong/summary/fengshen/examples/summary/randeng_t5_784M_summary.sh deleted file mode 100644 index 5b3e60c8784ac563eff09763591e00b6d250444f..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/summary/randeng_t5_784M_summary.sh +++ /dev/null @@ -1,130 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=randeng_t5_77M_summary -#SBATCH --nodes=1 -#SBATCH --ntasks-per-node=2 -#SBATCH --gres=gpu:2 # number of gpus -#SBATCH --cpus-per-task=30 -#SBATCH -o %x-%j.log - -set -x -e - -echo "START TIME: $(date)" -MODEL_NAME=randeng_t5_784M_summary -MICRO_BATCH_SIZE=8 -ROOT_DIR=/cognitive_comp/dongxiaoqun/finetune/${MODEL_NAME} -if [ ! -d ${ROOT_DIR} ];then - mkdir ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -ZERO_STAGE=1 - -config_json="${ROOT_DIR}/ds_config.${MODEL_NAME}.json" - -# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size() -cat < $config_json -{ - "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE}, - "steps_per_print": 100, - "gradient_clipping": 1.0, - "zero_optimization": { - "stage": $ZERO_STAGE, - "contiguous_gradients": false, - "overlap_comm": true, - "reduce_scatter": true, - "reduce_bucket_size": 50000000, - "allgather_bucket_size": 500000000 - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 1e-4, - "weight_decay": 1e-2 - } - }, - "scheduler": { - "params": { - "warmup_max_lr": 1e-04, - "warmup_min_lr": 1e-05, - "total_num_steps": 60000, - "warmup_num_steps" : 500 - }, - "type": "WarmupDecayLR" - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json -export TORCH_EXTENSIONS_DIR=/cognitive_comp/dongxiaoqun/torch_extendsions -# export MASTER_PORT=$[RANDOM%10000+30000] -# export PL_FAULT_TOLERANT_TRAINING=1 - -TRAINER_ARGS=" - --max_epochs 1 \ - --gpus 1 \ - --num_nodes 1 \ - --strategy deepspeed_stage_${ZERO_STAGE} \ - --default_root_dir $ROOT_DIR \ - --dirpath $ROOT_DIR/ckpt \ - --save_top_k 3 \ - --monitor val_loss \ - --mode min \ - --save_last \ - --every_n_train_steps 0 \ - --val_check_interval 0.1 \ -" - -prompt="summary:" -DATA_ARGS=" - --datasets_name lcsts \ - --num_workers 30 \ - --train_batchsize $MICRO_BATCH_SIZE \ - --val_batchsize $MICRO_BATCH_SIZE \ - --test_batchsize $MICRO_BATCH_SIZE \ - --max_enc_length 128 \ - --max_dec_length 64 \ - --val_datasets_field val \ - --prompt $prompt \ -" -# --prompt $prompt \ -MODEL_ARGS=" - --pretrained_model_path /cognitive_comp/ganruyi/experiments/randeng_t5_large_v2/ckpt/hf_pretrained_epoch0_step732500 \ - --output_save_path $ROOT_DIR/randeng_t5_784M_predict_lcsts.json \ -" - -SCRIPTS_PATH=/cognitive_comp/dongxiaoqun/debug/Fengshenbang-LM/fengshen/examples/summary/seq2seq_summary.py -SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif - -export CMD=" \ - $SCRIPTS_PATH \ - $TRAINER_ARGS \ - $MODEL_ARGS \ - $DATA_ARGS \ - " -echo $CMD - -source activate -conda activate torchnew -srun --nodes=1 --ntasks-per-node=1 --gres=gpu:1 --cpus-per-task=30 -o ${MODEL_NAME}-%J.log --jobid=229668 bash -c 'python3 $SCRIPT_PATH $CMD' -# source activate base 
-# python $CMD - -# srun --jobid=229668 --nodes=1 --gres=gpu:1 --ntasks-per-node=1 --cpus-per-task=30 -e ${ROOT_DIR}/${MODEL_NAME}-%j.err -o ${ROOT_DIR}/${MODEL_NAME}-%j.log singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH bash -c '/home/ganruyi/anaconda3/bin/python $CMD' - -# srun python $CMD -# srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH bash -c '/home/ganruyi/anaconda3/bin/python $CMD' diff --git a/spaces/fengmuxi/ChatGpt-Web/app/store/chat.ts b/spaces/fengmuxi/ChatGpt-Web/app/store/chat.ts deleted file mode 100644 index f1ef48103aea721d39ee18275d7d873ce608fdd2..0000000000000000000000000000000000000000 --- a/spaces/fengmuxi/ChatGpt-Web/app/store/chat.ts +++ /dev/null @@ -1,534 +0,0 @@ -import { create } from "zustand"; -import { persist } from "zustand/middleware"; - -import { type ChatCompletionResponseMessage } from "openai"; -import { - ControllerPool, - requestChatStream, - requestWithPrompt, -} from "../requests"; -import { trimTopic } from "../utils"; - -import Locale from "../locales"; -import { showToast } from "../components/ui-lib"; -import { ModelType, useAppConfig } from "./config"; -import { createEmptyMask, Mask } from "./mask"; -import { StoreKey } from "../constant"; - -export type Message = ChatCompletionResponseMessage & { - date: string; - streaming?: boolean; - isError?: boolean; - id?: number; - model?: ModelType; -}; - -export function createMessage(override: Partial): Message { - return { - id: Date.now(), - date: new Date().toLocaleString(), - role: "user", - content: "", - ...override, - }; -} - -export const ROLES: Message["role"][] = ["system", "user", "assistant"]; - -export interface ChatStat { - tokenCount: number; - wordCount: number; - charCount: number; -} - -export interface ChatSession { - id: number; - - topic: string; - - memoryPrompt: string; - messages: Message[]; - stat: ChatStat; - lastUpdate: number; - lastSummarizeIndex: number; - - mask: Mask; -} - -export const DEFAULT_TOPIC = Locale.Store.DefaultTopic; -export const BOT_HELLO: Message = createMessage({ - role: "assistant", - content: Locale.Store.BotHello, -}); - -function createEmptySession(): ChatSession { - return { - id: Date.now() + Math.random(), - topic: DEFAULT_TOPIC, - memoryPrompt: "", - messages: [], - stat: { - tokenCount: 0, - wordCount: 0, - charCount: 0, - }, - lastUpdate: Date.now(), - lastSummarizeIndex: 0, - mask: createEmptyMask(), - }; -} - -interface ChatStore { - sessions: ChatSession[]; - currentSessionIndex: number; - globalId: number; - clearSessions: () => void; - moveSession: (from: number, to: number) => void; - selectSession: (index: number) => void; - newSession: (mask?: Mask) => void; - deleteSession: (index: number) => void; - currentSession: () => ChatSession; - onNewMessage: (message: Message) => void; - onUserInput: (content: string) => Promise; - summarizeSession: () => void; - updateStat: (message: Message) => void; - updateCurrentSession: (updater: (session: ChatSession) => void) => void; - updateMessage: ( - sessionIndex: number, - messageIndex: number, - updater: (message?: Message) => void, - ) => void; - resetSession: () => void; - getMessagesWithMemory: () => Message[]; - getMemoryPrompt: () => Message; - - clearAllData: () => void; - clearAll: () => void; -} - -function countMessages(msgs: Message[]) { - return msgs.reduce((pre, cur) => pre + cur.content.length, 0); -} - -export const useChatStore = create()( - persist( - (set, get) => ({ - sessions: [createEmptySession()], - 
currentSessionIndex: 0, - globalId: 0, - - clearSessions() { - set(() => ({ - sessions: [createEmptySession()], - currentSessionIndex: 0, - })); - }, - - selectSession(index: number) { - set({ - currentSessionIndex: index, - }); - }, - - moveSession(from: number, to: number) { - set((state) => { - const { sessions, currentSessionIndex: oldIndex } = state; - - // move the session - const newSessions = [...sessions]; - const session = newSessions[from]; - newSessions.splice(from, 1); - newSessions.splice(to, 0, session); - - // modify current session id - let newIndex = oldIndex === from ? to : oldIndex; - if (oldIndex > from && oldIndex <= to) { - newIndex -= 1; - } else if (oldIndex < from && oldIndex >= to) { - newIndex += 1; - } - - return { - currentSessionIndex: newIndex, - sessions: newSessions, - }; - }); - }, - - newSession(mask) { - const session = createEmptySession(); - - set(() => ({ globalId: get().globalId + 1 })); - session.id = get().globalId; - - if (mask) { - session.mask = { ...mask }; - session.topic = mask.name; - } - - set((state) => ({ - currentSessionIndex: 0, - sessions: [session].concat(state.sessions), - })); - }, - - deleteSession(index) { - const deletingLastSession = get().sessions.length === 1; - const deletedSession = get().sessions.at(index); - - if (!deletedSession) return; - - const sessions = get().sessions.slice(); - sessions.splice(index, 1); - - const currentIndex = get().currentSessionIndex; - let nextIndex = Math.min( - currentIndex - Number(index < currentIndex), - sessions.length - 1, - ); - - if (deletingLastSession) { - nextIndex = 0; - sessions.push(createEmptySession()); - } - - // for undo delete action - const restoreState = { - currentSessionIndex: get().currentSessionIndex, - sessions: get().sessions.slice(), - }; - - set(() => ({ - currentSessionIndex: nextIndex, - sessions, - })); - - showToast( - Locale.Home.DeleteToast, - { - text: Locale.Home.Revert, - onClick() { - set(() => restoreState); - }, - }, - 5000, - ); - }, - - currentSession() { - let index = get().currentSessionIndex; - const sessions = get().sessions; - - if (index < 0 || index >= sessions.length) { - index = Math.min(sessions.length - 1, Math.max(0, index)); - set(() => ({ currentSessionIndex: index })); - } - - const session = sessions[index]; - - return session; - }, - - onNewMessage(message) { - get().updateCurrentSession((session) => { - session.lastUpdate = Date.now(); - }); - get().updateStat(message); - get().summarizeSession(); - }, - - async onUserInput(content) { - const session = get().currentSession(); - const modelConfig = session.mask.modelConfig; - - const userMessage: Message = createMessage({ - role: "user", - content, - }); - - const botMessage: Message = createMessage({ - role: "assistant", - streaming: true, - id: userMessage.id! + 1, - model: modelConfig.model, - }); - const systemInfo = createMessage({ - role: "system", - content: `IMPRTANT: You are a virtual assistant powered by the ${ - modelConfig.model - } model, now time is ${new Date().toLocaleString()}}`, - id: botMessage.id! 
+ 1, - }); - // get recent messages - const systemMessages = [systemInfo]; - const recentMessages = get().getMessagesWithMemory(); - const sendMessages = systemMessages.concat( - recentMessages.concat(userMessage), - ); - const sessionIndex = get().currentSessionIndex; - const messageIndex = get().currentSession().messages.length + 1; - - // save user's and bot's message - get().updateCurrentSession((session) => { - session.messages.push(userMessage); - session.messages.push(botMessage); - }); - - // make request - console.log("[User Input] ", sendMessages); - requestChatStream(sendMessages, { - onMessage(content, done) { - // stream response - if (done) { - botMessage.streaming = false; - botMessage.content = content; - get().onNewMessage(botMessage); - ControllerPool.remove( - sessionIndex, - botMessage.id ?? messageIndex, - ); - } else { - botMessage.content = content; - set(() => ({})); - } - }, - onError(error, statusCode) { - const isAborted = error.message.includes("aborted"); - if (statusCode === 401) { - botMessage.content = Locale.Error.Unauthorized; - } else if (!isAborted) { - botMessage.content += "\n\n" + Locale.Store.Error; - } - botMessage.streaming = false; - userMessage.isError = !isAborted; - botMessage.isError = !isAborted; - - set(() => ({})); - ControllerPool.remove(sessionIndex, botMessage.id ?? messageIndex); - }, - onController(controller) { - // collect controller for stop/retry - ControllerPool.addController( - sessionIndex, - botMessage.id ?? messageIndex, - controller, - ); - }, - modelConfig: { ...modelConfig }, - }); - }, - - getMemoryPrompt() { - const session = get().currentSession(); - - return { - role: "system", - content: - session.memoryPrompt.length > 0 - ? Locale.Store.Prompt.History(session.memoryPrompt) - : "", - date: "", - } as Message; - }, - - getMessagesWithMemory() { - const session = get().currentSession(); - const modelConfig = session.mask.modelConfig; - const messages = session.messages.filter((msg) => !msg.isError); - const n = messages.length; - - const context = session.mask.context.slice(); - - // long term memory - if ( - modelConfig.sendMemory && - session.memoryPrompt && - session.memoryPrompt.length > 0 - ) { - const memoryPrompt = get().getMemoryPrompt(); - context.push(memoryPrompt); - } - - // get short term and unmemoried long term memory - const shortTermMemoryMessageIndex = Math.max( - 0, - n - modelConfig.historyMessageCount, - ); - const longTermMemoryMessageIndex = session.lastSummarizeIndex; - const oldestIndex = Math.max( - shortTermMemoryMessageIndex, - longTermMemoryMessageIndex, - ); - const threshold = modelConfig.compressMessageLengthThreshold; - - // get recent messages as many as possible - const reversedRecentMessages = []; - for ( - let i = n - 1, count = 0; - i >= oldestIndex && count < threshold; - i -= 1 - ) { - const msg = messages[i]; - if (!msg || msg.isError) continue; - count += msg.content.length; - reversedRecentMessages.push(msg); - } - - // concat - const recentMessages = context.concat(reversedRecentMessages.reverse()); - - return recentMessages; - }, - - updateMessage( - sessionIndex: number, - messageIndex: number, - updater: (message?: Message) => void, - ) { - const sessions = get().sessions; - const session = sessions.at(sessionIndex); - const messages = session?.messages; - updater(messages?.at(messageIndex)); - set(() => ({ sessions })); - }, - - resetSession() { - get().updateCurrentSession((session) => { - session.messages = []; - session.memoryPrompt = ""; - }); - }, - - 
summarizeSession() { - const session = get().currentSession(); - // remove error messages if any - const cleanMessages = session.messages.filter((msg) => !msg.isError); - - // should summarize topic after chating more than 50 words - const SUMMARIZE_MIN_LEN = 50; - if ( - session.topic === DEFAULT_TOPIC && - countMessages(cleanMessages) >= SUMMARIZE_MIN_LEN - ) { - const Bot = useAppConfig.getState().bot; - // if (Bot != "OpenAI (VIP)") { - get().updateCurrentSession( - (session) => (session.topic = trimTopic(Bot)), - ); - // } else { - // requestWithPrompt(cleanMessages, Locale.Store.Prompt.Topic, { - // model: "gpt-3.5-turbo", - // }).then((res) => { - // get().updateCurrentSession( - // (session) => - // (session.topic = res ? trimTopic(res) : DEFAULT_TOPIC), - // ); - // }); - // } - } - - const modelConfig = session.mask.modelConfig; - let toBeSummarizedMsgs = cleanMessages.slice( - session.lastSummarizeIndex, - ); - - const historyMsgLength = countMessages(toBeSummarizedMsgs); - - if (historyMsgLength > modelConfig?.max_tokens ?? 4000) { - const n = toBeSummarizedMsgs.length; - toBeSummarizedMsgs = toBeSummarizedMsgs.slice( - Math.max(0, n - modelConfig.historyMessageCount), - ); - } - - // add memory prompt - toBeSummarizedMsgs.unshift(get().getMemoryPrompt()); - - const lastSummarizeIndex = session.messages.length; - - console.log( - "[Chat History] ", - toBeSummarizedMsgs, - historyMsgLength, - modelConfig.compressMessageLengthThreshold, - ); - - if ( - historyMsgLength > modelConfig.compressMessageLengthThreshold && - session.mask.modelConfig.sendMemory - ) { - requestChatStream( - toBeSummarizedMsgs.concat({ - role: "system", - content: Locale.Store.Prompt.Summarize, - date: "", - }), - { - overrideModel: "gpt-3.5-turbo", - onMessage(message, done) { - session.memoryPrompt = message; - if (done) { - console.log("[Memory] ", session.memoryPrompt); - session.lastSummarizeIndex = lastSummarizeIndex; - } - }, - onError(error) { - console.error("[Summarize] ", error); - }, - }, - ); - } - }, - - updateStat(message) { - get().updateCurrentSession((session) => { - session.stat.charCount += message.content.length; - // TODO: should update chat count and word count - }); - }, - - updateCurrentSession(updater) { - const sessions = get().sessions; - const index = get().currentSessionIndex; - updater(sessions[index]); - set(() => ({ sessions })); - }, - - clearAllData() { - localStorage.clear(); - location.reload(); - }, - - clearAll() { - // localStorage.clear(); - location.reload(); - }, - }), - { - name: StoreKey.Chat, - version: 2, - migrate(persistedState, version) { - const state = persistedState as any; - const newState = JSON.parse(JSON.stringify(state)) as ChatStore; - - if (version < 2) { - newState.globalId = 0; - newState.sessions = []; - - const oldSessions = state.sessions; - for (const oldSession of oldSessions) { - const newSession = createEmptySession(); - newSession.topic = oldSession.topic; - newSession.messages = [...oldSession.messages]; - newSession.mask.modelConfig.sendMemory = true; - newSession.mask.modelConfig.historyMessageCount = 4; - newSession.mask.modelConfig.compressMessageLengthThreshold = 1000; - newState.sessions.push(newSession); - } - } - - return newState; - }, - }, - ), -); diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Descargar Gacha Club APK - Un Juego Divertido y Creativo para los Amantes de Gacha.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Descargar Gacha Club APK - Un Juego Divertido y Creativo para los Amantes de Gacha.md 
deleted file mode 100644 index 6ec61c139a485631c0aec5f6ab9da302a48a4a04..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Descargar Gacha Club APK - Un Juego Divertido y Creativo para los Amantes de Gacha.md +++ /dev/null @@ -1,87 +0,0 @@ -
      -

      Gacha Club: How to Download and Enjoy the Anime Game

      -

      Do you like anime, dress-up, and battle games? Then you will love Gacha Club, the latest casual strategy RPG from Lunime, the company that has released a wide variety of gacha games since 2015. Gacha Club is the sequel to Gacha Life, one of Lunime's most popular games with more than 10 million downloads. In this article, we will show you how to download and install Gacha Club on your Android device, as well as some of the features and tips the game offers.

      -

      gacha club descargar apk


      DOWNLOAD: https://gohhs.com/2uPtPO



      -

      How to Download and Install Gacha Club on Android

      -

      Gacha Club is a free game that you can easily download from the Google Play Store. Just follow these steps:

      -
        -
      1. Open the Google Play Store on your Android device and search for "Gacha Club".
      2. Select the game from the list of results and tap the "Install" button.
      3. Wait for the download and installation to finish.
      4. Open the game from the app menu or from the icon on your home screen.
      5. Enjoy creating your own anime characters and playing with them in different modes.
      -

      Note: The game may run slowly or lag on older devices or on 4K screens. There may also be some errors or glitches if you do not have enough storage space on your phone. If you run into any of these problems, try restarting the game or freeing up space on your device.

      -

      Gacha Club Features

      -

      Gacha Club has plenty of content and features to keep you entertained for hours. These are some of the main features the game offers:

      -

      gacha club descargar apk gratis
      -gacha club descargar apk ultima version
      -gacha club descargar apk para android
      -gacha club descargar apk sin internet
      -gacha club descargar apk mod
      -gacha club descargar apk full
      -gacha club descargar apk mega
      -gacha club descargar apk mediafire
      -gacha club descargar apk uptodown
      -gacha club descargar apk hackeado
      -gacha club descargar apk 2023
      -gacha club descargar apk pc
      -gacha club descargar apk windows 10
      -gacha club descargar apk laptop
      -gacha club descargar apk mac
      -gacha club descargar apk chromebook
      -gacha club descargar apk bluestacks
      -gacha club descargar apk nox player
      -gacha club descargar apk online
      -gacha club descargar apk sin emulador
      -gacha club descargar apk español
      -gacha club descargar apk ingles
      -gacha club descargar apk portugues
      -gacha club descargar apk frances
      -gacha club descargar apk aleman
      -gacha club descargar apk japones
      -gacha club descargar apk chino
      -gacha club descargar apk coreano
      -gacha club descargar apk arabe
      -gacha club descargar apk ruso
      -gacha club descargar apk original
      -gacha club descargar apk oficial
      -gacha club descargar apk lunime
      -gacha club descargar apk play store
      -gacha club descargar apk google play
      -gacha club descargar apk amazon appstore
      -gacha club descargar apk samsung galaxy store
      -gacha club descargar apk huawei appgallery
      -gacha club descargar apk xiaomi getapps
      -gacha club descargar apk oppo app market
      -gacha club descargar apk vivo app store
      -gacha club descargar apk lg smartworld
      -gacha club descargar apk nokia store
      -gacha club descargar apk motorola apps store

      -

      Character Customization and Studio Mode

      -

      In Gacha Club, you can create your own anime characters and dress them up in your favorite outfits. You can customize up to 10 main characters and 90 extra characters with hundreds of options for clothing, hairstyles, accessories, weapons, and more. You can also change the colors of almost every element, choose from 600 different poses, and adjust the hair, eyes, and items so they fit your characters.

      -

      After designing your characters, you can enter Studio mode and create any scene you can imagine. You can add up to 10 characters anywhere on the screen, along with your favorite pets and objects. You can choose from a wide variety of backgrounds and foregrounds, make your characters talk to each other with custom text boxes, add a narrator to create story scenes, save and load up to 15 scenes, and use face presets to quickly change your expression.

      -

      Gacha and Battle Mode

      -

      If you want more action, you can gacha more than 180 units to use in battle. You can also gacha for 150 pets to boost your stats, collect super-rare Corrupted and DJ characters, and use materials to upgrade

      Mini-Games and Rewards

      -

      Another way to have fun and earn resources in Gacha Club is to play the mini-games the game offers. There are four mini-games available: Lemo & Yumi Dance, Mascot Whack!, Memory Match, and Usagi vs. Neko. Each one has its own mechanics and difficulty, but all of them reward you with gold and other prizes if you get a good score. Here are some tips for each mini-game:

      -
        -
      • Lemo & Yumi Dance: This is a rhythm game where you have to tap the arrows that match the ones appearing on screen. The more in sync you are with the music, the more points you earn. Try not to miss any arrows and keep a streak going to increase your multiplier.
      • Mascot Whack!: This is a reflex game where you have to whack the mascots that pop out of the holes. Each mascot is worth a different amount, so try to hit the ones worth the most points. Avoid hitting the bombs, since they take away points and time.
      • Memory Match: This is a memory game where you have to match the cards showing the same character. You have a limited time to find all the pairs, so be quick and attentive. Try to remember where the cards are and use the hint button if you get stuck.
      • Usagi vs. Neko: This is a fighting game where you control Usagi or Neko and try to defeat the other. You can move, jump, attack, and block with the on-screen buttons. Try to dodge or block your opponent's attacks and strike when they are vulnerable. Use the special button when your energy bar is full to unleash a powerful attack.
      -

      Conclusion

      -

      Gacha Club is a very complete and fun game that lets you create your own anime characters and play with them in different modes. You can customize them to the fullest, create amazing scenes, gacha units and pets, fight monsters and bosses, and play a variety of mini-games. On top of that, it is a free, offline game, which makes it ideal for passing the time without worrying about money or an internet connection.

      -

      If you like anime, dress-up, and battle games, do not hesitate to download Gacha Club on your Android device and enjoy everything it has to offer. You can also join the Gacha Club community on Facebook or on its official website to share your creations, meet other players, and keep up with the latest news about the game.

      -

      What are you waiting for? Jump into Gacha Club and start your adventure today!

      -

      Frequently Asked Questions

      -

      Below, we answer some of the most common questions about Gacha Club:

      -
        -
      1. Is Gacha Club free? Yes, Gacha Club is a completely free game that you can download and install without paying anything. It also has no in-app purchases and no annoying ads.
      2. Is Gacha Club safe for kids? Yes, Gacha Club is suitable for all audiences and contains no explicit violence or inappropriate content. However, since it has a live chat feature, parents are advised to keep an eye on who their children interact with online.
      3. Is Gacha Club available on other platforms? Yes, Gacha Club is also available for Windows and iOS in their respective app stores. However, the full version of the game is only available for Android at the moment.
      4. How can I import and export my characters in Gacha Club? You can import and export your characters using QR codes or numeric codes. To import a character, simply scan the QR code or enter the numeric code of the character you want to import in the import/export menu. To export a character, generate the QR code or numeric code of the character you want to export in the same menu. You can then share the code with other players or save it for later (see the sketch after this list for how a numeric code can be turned into a shareable QR image).
      5. How can I get more gems and bytes in Gacha Club? You can earn more gems and bytes, the game's currencies, in several ways, including: completing missions, playing mini-games, gacha-ing units and pets, selling items you do not need, and watching promotional videos.
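      Gacha Club generates and scans these codes inside the app itself, so no external tooling is needed; purely as an illustration of the idea, the sketch below turns a numeric character code into a shareable QR image using the third-party Python `qrcode` package (install with `pip install "qrcode[pil]"`). The code value is a placeholder.

```python
import qrcode  # third-party package: pip install "qrcode[pil]"

# Placeholder: paste the numeric export code Gacha Club shows for your character.
character_code = "1234567890"

img = qrcode.make(character_code)   # build a QR image that encodes the code
img.save("gacha_character_qr.png")  # share this image; scanning it returns the code
print("Saved gacha_character_qr.png")
```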

      -
      -
      \ No newline at end of file diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio.py deleted file mode 100644 index 1829d7db4ef832ad65598b471caa7d256a06d012..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Audio IO methods are defined in this module (info, read, write), -We rely on av library for faster read when possible, otherwise on torchaudio. -""" - -from dataclasses import dataclass -from pathlib import Path -import logging -import typing as tp - -import numpy as np -import soundfile -import torch -from torch.nn import functional as F -import torchaudio as ta - -import av - -from .audio_utils import f32_pcm, i16_pcm, normalize_audio - - -_av_initialized = False - - -def _init_av(): - global _av_initialized - if _av_initialized: - return - logger = logging.getLogger('libav.mp3') - logger.setLevel(logging.ERROR) - _av_initialized = True - - -@dataclass(frozen=True) -class AudioFileInfo: - sample_rate: int - duration: float - channels: int - - -def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sample_rate = stream.codec_context.sample_rate - duration = float(stream.duration * stream.time_base) - channels = stream.channels - return AudioFileInfo(sample_rate, duration, channels) - - -def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - info = soundfile.info(filepath) - return AudioFileInfo(info.samplerate, info.duration, info.channels) - - -def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - # torchaudio no longer returns useful duration informations for some formats like mp3s. - filepath = Path(filepath) - if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info - # ffmpeg has some weird issue with flac. - return _soundfile_info(filepath) - else: - return _av_info(filepath) - - -def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]: - """FFMPEG-based audio file reading using PyAV bindings. - Soundfile cannot read mp3 and av_read is more efficient than torchaudio. - - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate - """ - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sr = stream.codec_context.sample_rate - num_frames = int(sr * duration) if duration >= 0 else -1 - frame_offset = int(sr * seek_time) - # we need a small negative offset otherwise we get some edge artifact - # from the mp3 decoder. 
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream) - frames = [] - length = 0 - for frame in af.decode(streams=stream.index): - current_offset = int(frame.rate * frame.pts * frame.time_base) - strip = max(0, frame_offset - current_offset) - buf = torch.from_numpy(frame.to_ndarray()) - if buf.shape[0] != stream.channels: - buf = buf.view(-1, stream.channels).t() - buf = buf[:, strip:] - frames.append(buf) - length += buf.shape[1] - if num_frames > 0 and length >= num_frames: - break - assert frames - # If the above assert fails, it is likely because we seeked past the end of file point, - # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp. - # This will need proper debugging, in due time. - wav = torch.cat(frames, dim=1) - assert wav.shape[0] == stream.channels - if num_frames > 0: - wav = wav[:, :num_frames] - return f32_pcm(wav), sr - - -def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0., - duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]: - """Read audio by picking the most appropriate backend tool based on the audio format. - - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - pad (bool): Pad output audio if not reaching expected duration. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate. - """ - fp = Path(filepath) - if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg - # There is some bug with ffmpeg and reading flac - info = _soundfile_info(filepath) - frames = -1 if duration <= 0 else int(duration * info.sample_rate) - frame_offset = int(seek_time * info.sample_rate) - wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32) - assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}" - wav = torch.from_numpy(wav).t().contiguous() - if len(wav.shape) == 1: - wav = torch.unsqueeze(wav, 0) - elif ( - fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats() - and duration <= 0 and seek_time == 0 - ): - # Torchaudio is faster if we load an entire file at once. - wav, sr = ta.load(fp) - else: - wav, sr = _av_read(filepath, seek_time, duration) - if pad and duration > 0: - expected_frames = int(duration * sr) - wav = F.pad(wav, (0, expected_frames - wav.shape[-1])) - return wav, sr - - -def audio_write(stem_name: tp.Union[str, Path], - wav: torch.Tensor, sample_rate: int, - format: str = 'wav', mp3_rate: int = 320, normalize: bool = True, - strategy: str = 'peak', peak_clip_headroom_db: float = 1, - rms_headroom_db: float = 18, loudness_headroom_db: float = 14, - log_clipping: bool = True, make_parent_dir: bool = True, - add_suffix: bool = True) -> Path: - """Convenience function for saving audio to disk. Returns the filename the audio was written to. - - Args: - stem_name (str or Path): Filename without extension which will be added automatically. - format (str): Either "wav" or "mp3". - mp3_rate (int): kbps when using mp3s. - normalize (bool): if `True` (default), normalizes according to the prescribed - strategy (see after). If `False`, the strategy is only used in case clipping - would happen. - strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak', - i.e. audio is normalized by its largest value. 
RMS normalizes by root-mean-square - with extra headroom to avoid clipping. 'clip' just clips. - peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy. - rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger - than the `peak_clip` one to avoid further clipping. - loudness_headroom_db (float): Target loudness for loudness normalization. - log_clipping (bool): If True, basic logging on stderr when clipping still - occurs despite strategy (only for 'rms'). - make_parent_dir (bool): Make parent directory if it doesn't exist. - Returns: - Path: Path of the saved audio. - """ - assert wav.dtype.is_floating_point, "wav is not floating point" - if wav.dim() == 1: - wav = wav[None] - elif wav.dim() > 2: - raise ValueError("Input wav should be at most 2 dimension.") - assert wav.isfinite().all() - wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db, - rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping, - sample_rate=sample_rate, stem_name=str(stem_name)) - kwargs: dict = {} - if format == 'mp3': - suffix = '.mp3' - kwargs.update({"compression": mp3_rate}) - elif format == 'wav': - wav = i16_pcm(wav) - suffix = '.wav' - kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16}) - else: - raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.") - if not add_suffix: - suffix = '' - path = Path(str(stem_name) + suffix) - if make_parent_dir: - path.parent.mkdir(exist_ok=True, parents=True) - try: - ta.save(path, wav, sample_rate, **kwargs) - except Exception: - if path.exists(): - # we do not want to leave half written files around. - path.unlink() - raise - return path diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/https.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/https.d.ts deleted file mode 100644 index bda367d74c634f58d3e3898029bbc64bdbc61c0a..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/https.d.ts +++ /dev/null @@ -1,542 +0,0 @@ -/** - * HTTPS is the HTTP protocol over TLS/SSL. In Node.js this is implemented as a - * separate module. - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/https.js) - */ -declare module 'https' { - import { Duplex } from 'node:stream'; - import * as tls from 'node:tls'; - import * as http from 'node:http'; - import { URL } from 'node:url'; - type ServerOptions< - Request extends typeof http.IncomingMessage = typeof http.IncomingMessage, - Response extends typeof http.ServerResponse = typeof http.ServerResponse, - > = tls.SecureContextOptions & tls.TlsOptions & http.ServerOptions; - type RequestOptions = http.RequestOptions & - tls.SecureContextOptions & { - checkServerIdentity?: typeof tls.checkServerIdentity | undefined; - rejectUnauthorized?: boolean | undefined; // Defaults to true - servername?: string | undefined; // SNI TLS Extension - }; - interface AgentOptions extends http.AgentOptions, tls.ConnectionOptions { - rejectUnauthorized?: boolean | undefined; - maxCachedSessions?: number | undefined; - } - /** - * An `Agent` object for HTTPS similar to `http.Agent`. See {@link request} for more information. 
- * @since v0.4.5 - */ - class Agent extends http.Agent { - constructor(options?: AgentOptions); - options: AgentOptions; - } - interface Server< - Request extends typeof http.IncomingMessage = typeof http.IncomingMessage, - Response extends typeof http.ServerResponse = typeof http.ServerResponse, - > extends http.Server {} - /** - * See `http.Server` for more information. - * @since v0.3.4 - */ - class Server< - Request extends typeof http.IncomingMessage = typeof http.IncomingMessage, - Response extends typeof http.ServerResponse = typeof http.ServerResponse, - > extends tls.Server { - constructor(requestListener?: http.RequestListener); - constructor( - options: ServerOptions, - requestListener?: http.RequestListener, - ); - /** - * Closes all connections connected to this server. - * @since v18.2.0 - */ - closeAllConnections(): void; - /** - * Closes all connections connected to this server which are not sending a request or waiting for a response. - * @since v18.2.0 - */ - closeIdleConnections(): void; - addListener(event: string, listener: (...args: any[]) => void): this; - addListener(event: 'keylog', listener: (line: Buffer, tlsSocket: tls.TLSSocket) => void): this; - addListener( - event: 'newSession', - listener: (sessionId: Buffer, sessionData: Buffer, callback: (err: Error, resp: Buffer) => void) => void, - ): this; - addListener( - event: 'OCSPRequest', - listener: ( - certificate: Buffer, - issuer: Buffer, - callback: (err: Error | null, resp: Buffer) => void, - ) => void, - ): this; - addListener( - event: 'resumeSession', - listener: (sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void) => void, - ): this; - addListener(event: 'secureConnection', listener: (tlsSocket: tls.TLSSocket) => void): this; - addListener(event: 'tlsClientError', listener: (err: Error, tlsSocket: tls.TLSSocket) => void): this; - addListener(event: 'close', listener: () => void): this; - addListener(event: 'connection', listener: (socket: Duplex) => void): this; - addListener(event: 'error', listener: (err: Error) => void): this; - addListener(event: 'listening', listener: () => void): this; - addListener(event: 'checkContinue', listener: http.RequestListener): this; - addListener(event: 'checkExpectation', listener: http.RequestListener): this; - addListener(event: 'clientError', listener: (err: Error, socket: Duplex) => void): this; - addListener( - event: 'connect', - listener: (req: InstanceType, socket: Duplex, head: Buffer) => void, - ): this; - addListener(event: 'request', listener: http.RequestListener): this; - addListener( - event: 'upgrade', - listener: (req: InstanceType, socket: Duplex, head: Buffer) => void, - ): this; - emit(event: string, ...args: any[]): boolean; - emit(event: 'keylog', line: Buffer, tlsSocket: tls.TLSSocket): boolean; - emit( - event: 'newSession', - sessionId: Buffer, - sessionData: Buffer, - callback: (err: Error, resp: Buffer) => void, - ): boolean; - emit( - event: 'OCSPRequest', - certificate: Buffer, - issuer: Buffer, - callback: (err: Error | null, resp: Buffer) => void, - ): boolean; - emit(event: 'resumeSession', sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void): boolean; - emit(event: 'secureConnection', tlsSocket: tls.TLSSocket): boolean; - emit(event: 'tlsClientError', err: Error, tlsSocket: tls.TLSSocket): boolean; - emit(event: 'close'): boolean; - emit(event: 'connection', socket: Duplex): boolean; - emit(event: 'error', err: Error): boolean; - emit(event: 'listening'): boolean; - emit( - event: 
'checkContinue', - req: InstanceType, - res: InstanceType & { req: InstanceType }, - ): boolean; - emit( - event: 'checkExpectation', - req: InstanceType, - res: InstanceType & { req: InstanceType }, - ): boolean; - emit(event: 'clientError', err: Error, socket: Duplex): boolean; - emit(event: 'connect', req: InstanceType, socket: Duplex, head: Buffer): boolean; - emit( - event: 'request', - req: InstanceType, - res: InstanceType & { req: InstanceType }, - ): boolean; - emit(event: 'upgrade', req: InstanceType, socket: Duplex, head: Buffer): boolean; - on(event: string, listener: (...args: any[]) => void): this; - on(event: 'keylog', listener: (line: Buffer, tlsSocket: tls.TLSSocket) => void): this; - on( - event: 'newSession', - listener: (sessionId: Buffer, sessionData: Buffer, callback: (err: Error, resp: Buffer) => void) => void, - ): this; - on( - event: 'OCSPRequest', - listener: ( - certificate: Buffer, - issuer: Buffer, - callback: (err: Error | null, resp: Buffer) => void, - ) => void, - ): this; - on( - event: 'resumeSession', - listener: (sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void) => void, - ): this; - on(event: 'secureConnection', listener: (tlsSocket: tls.TLSSocket) => void): this; - on(event: 'tlsClientError', listener: (err: Error, tlsSocket: tls.TLSSocket) => void): this; - on(event: 'close', listener: () => void): this; - on(event: 'connection', listener: (socket: Duplex) => void): this; - on(event: 'error', listener: (err: Error) => void): this; - on(event: 'listening', listener: () => void): this; - on(event: 'checkContinue', listener: http.RequestListener): this; - on(event: 'checkExpectation', listener: http.RequestListener): this; - on(event: 'clientError', listener: (err: Error, socket: Duplex) => void): this; - on(event: 'connect', listener: (req: InstanceType, socket: Duplex, head: Buffer) => void): this; - on(event: 'request', listener: http.RequestListener): this; - on(event: 'upgrade', listener: (req: InstanceType, socket: Duplex, head: Buffer) => void): this; - once(event: string, listener: (...args: any[]) => void): this; - once(event: 'keylog', listener: (line: Buffer, tlsSocket: tls.TLSSocket) => void): this; - once( - event: 'newSession', - listener: (sessionId: Buffer, sessionData: Buffer, callback: (err: Error, resp: Buffer) => void) => void, - ): this; - once( - event: 'OCSPRequest', - listener: ( - certificate: Buffer, - issuer: Buffer, - callback: (err: Error | null, resp: Buffer) => void, - ) => void, - ): this; - once( - event: 'resumeSession', - listener: (sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void) => void, - ): this; - once(event: 'secureConnection', listener: (tlsSocket: tls.TLSSocket) => void): this; - once(event: 'tlsClientError', listener: (err: Error, tlsSocket: tls.TLSSocket) => void): this; - once(event: 'close', listener: () => void): this; - once(event: 'connection', listener: (socket: Duplex) => void): this; - once(event: 'error', listener: (err: Error) => void): this; - once(event: 'listening', listener: () => void): this; - once(event: 'checkContinue', listener: http.RequestListener): this; - once(event: 'checkExpectation', listener: http.RequestListener): this; - once(event: 'clientError', listener: (err: Error, socket: Duplex) => void): this; - once(event: 'connect', listener: (req: InstanceType, socket: Duplex, head: Buffer) => void): this; - once(event: 'request', listener: http.RequestListener): this; - once(event: 'upgrade', listener: (req: InstanceType, socket: Duplex, 
head: Buffer) => void): this; - prependListener(event: string, listener: (...args: any[]) => void): this; - prependListener(event: 'keylog', listener: (line: Buffer, tlsSocket: tls.TLSSocket) => void): this; - prependListener( - event: 'newSession', - listener: (sessionId: Buffer, sessionData: Buffer, callback: (err: Error, resp: Buffer) => void) => void, - ): this; - prependListener( - event: 'OCSPRequest', - listener: ( - certificate: Buffer, - issuer: Buffer, - callback: (err: Error | null, resp: Buffer) => void, - ) => void, - ): this; - prependListener( - event: 'resumeSession', - listener: (sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void) => void, - ): this; - prependListener(event: 'secureConnection', listener: (tlsSocket: tls.TLSSocket) => void): this; - prependListener(event: 'tlsClientError', listener: (err: Error, tlsSocket: tls.TLSSocket) => void): this; - prependListener(event: 'close', listener: () => void): this; - prependListener(event: 'connection', listener: (socket: Duplex) => void): this; - prependListener(event: 'error', listener: (err: Error) => void): this; - prependListener(event: 'listening', listener: () => void): this; - prependListener(event: 'checkContinue', listener: http.RequestListener): this; - prependListener(event: 'checkExpectation', listener: http.RequestListener): this; - prependListener(event: 'clientError', listener: (err: Error, socket: Duplex) => void): this; - prependListener( - event: 'connect', - listener: (req: InstanceType, socket: Duplex, head: Buffer) => void, - ): this; - prependListener(event: 'request', listener: http.RequestListener): this; - prependListener( - event: 'upgrade', - listener: (req: InstanceType, socket: Duplex, head: Buffer) => void, - ): this; - prependOnceListener(event: string, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'keylog', listener: (line: Buffer, tlsSocket: tls.TLSSocket) => void): this; - prependOnceListener( - event: 'newSession', - listener: (sessionId: Buffer, sessionData: Buffer, callback: (err: Error, resp: Buffer) => void) => void, - ): this; - prependOnceListener( - event: 'OCSPRequest', - listener: ( - certificate: Buffer, - issuer: Buffer, - callback: (err: Error | null, resp: Buffer) => void, - ) => void, - ): this; - prependOnceListener( - event: 'resumeSession', - listener: (sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void) => void, - ): this; - prependOnceListener(event: 'secureConnection', listener: (tlsSocket: tls.TLSSocket) => void): this; - prependOnceListener(event: 'tlsClientError', listener: (err: Error, tlsSocket: tls.TLSSocket) => void): this; - prependOnceListener(event: 'close', listener: () => void): this; - prependOnceListener(event: 'connection', listener: (socket: Duplex) => void): this; - prependOnceListener(event: 'error', listener: (err: Error) => void): this; - prependOnceListener(event: 'listening', listener: () => void): this; - prependOnceListener(event: 'checkContinue', listener: http.RequestListener): this; - prependOnceListener(event: 'checkExpectation', listener: http.RequestListener): this; - prependOnceListener(event: 'clientError', listener: (err: Error, socket: Duplex) => void): this; - prependOnceListener( - event: 'connect', - listener: (req: InstanceType, socket: Duplex, head: Buffer) => void, - ): this; - prependOnceListener(event: 'request', listener: http.RequestListener): this; - prependOnceListener( - event: 'upgrade', - listener: (req: InstanceType, socket: Duplex, head: Buffer) => void, 
- ): this; - } - /** - * ```js - * // curl -k https://localhost:8000/ - * const https = require('https'); - * const fs = require('fs'); - * - * const options = { - * key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'), - * cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem') - * }; - * - * https.createServer(options, (req, res) => { - * res.writeHead(200); - * res.end('hello world\n'); - * }).listen(8000); - * ``` - * - * Or - * - * ```js - * const https = require('https'); - * const fs = require('fs'); - * - * const options = { - * pfx: fs.readFileSync('test/fixtures/test_cert.pfx'), - * passphrase: 'sample' - * }; - * - * https.createServer(options, (req, res) => { - * res.writeHead(200); - * res.end('hello world\n'); - * }).listen(8000); - * ``` - * @since v0.3.4 - * @param options Accepts `options` from `createServer`, `createSecureContext` and `createServer`. - * @param requestListener A listener to be added to the `'request'` event. - */ - function createServer< - Request extends typeof http.IncomingMessage = typeof http.IncomingMessage, - Response extends typeof http.ServerResponse = typeof http.ServerResponse, - >(requestListener?: http.RequestListener): Server; - function createServer< - Request extends typeof http.IncomingMessage = typeof http.IncomingMessage, - Response extends typeof http.ServerResponse = typeof http.ServerResponse, - >( - options: ServerOptions, - requestListener?: http.RequestListener, - ): Server; - /** - * Makes a request to a secure web server. - * - * The following additional `options` from `tls.connect()` are also accepted:`ca`, `cert`, `ciphers`, `clientCertEngine`, `crl`, `dhparam`, `ecdhCurve`,`honorCipherOrder`, `key`, `passphrase`, - * `pfx`, `rejectUnauthorized`,`secureOptions`, `secureProtocol`, `servername`, `sessionIdContext`,`highWaterMark`. - * - * `options` can be an object, a string, or a `URL` object. If `options` is a - * string, it is automatically parsed with `new URL()`. If it is a `URL` object, it will be automatically converted to an ordinary `options` object. - * - * `https.request()` returns an instance of the `http.ClientRequest` class. The `ClientRequest` instance is a writable stream. If one needs to - * upload a file with a POST request, then write to the `ClientRequest` object. - * - * ```js - * const https = require('https'); - * - * const options = { - * hostname: 'encrypted.google.com', - * port: 443, - * path: '/', - * method: 'GET' - * }; - * - * const req = https.request(options, (res) => { - * console.log('statusCode:', res.statusCode); - * console.log('headers:', res.headers); - * - * res.on('data', (d) => { - * process.stdout.write(d); - * }); - * }); - * - * req.on('error', (e) => { - * console.error(e); - * }); - * req.end(); - * ``` - * - * Example using options from `tls.connect()`: - * - * ```js - * const options = { - * hostname: 'encrypted.google.com', - * port: 443, - * path: '/', - * method: 'GET', - * key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'), - * cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem') - * }; - * options.agent = new https.Agent(options); - * - * const req = https.request(options, (res) => { - * // ... - * }); - * ``` - * - * Alternatively, opt out of connection pooling by not using an `Agent`. 
- * - * ```js - * const options = { - * hostname: 'encrypted.google.com', - * port: 443, - * path: '/', - * method: 'GET', - * key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'), - * cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem'), - * agent: false - * }; - * - * const req = https.request(options, (res) => { - * // ... - * }); - * ``` - * - * Example using a `URL` as `options`: - * - * ```js - * const options = new URL('https://abc:xyz@example.com'); - * - * const req = https.request(options, (res) => { - * // ... - * }); - * ``` - * - * Example pinning on certificate fingerprint, or the public key (similar to`pin-sha256`): - * - * ```js - * const tls = require('tls'); - * const https = require('https'); - * const crypto = require('crypto'); - * - * function sha256(s) { - * return crypto.createHash('sha256').update(s).digest('base64'); - * } - * const options = { - * hostname: 'github.com', - * port: 443, - * path: '/', - * method: 'GET', - * checkServerIdentity: function(host, cert) { - * // Make sure the certificate is issued to the host we are connected to - * const err = tls.checkServerIdentity(host, cert); - * if (err) { - * return err; - * } - * - * // Pin the public key, similar to HPKP pin-sha25 pinning - * const pubkey256 = 'pL1+qb9HTMRZJmuC/bB/ZI9d302BYrrqiVuRyW+DGrU='; - * if (sha256(cert.pubkey) !== pubkey256) { - * const msg = 'Certificate verification error: ' + - * `The public key of '${cert.subject.CN}' ` + - * 'does not match our pinned fingerprint'; - * return new Error(msg); - * } - * - * // Pin the exact certificate, rather than the pub key - * const cert256 = '25:FE:39:32:D9:63:8C:8A:FC:A1:9A:29:87:' + - * 'D8:3E:4C:1D:98:DB:71:E4:1A:48:03:98:EA:22:6A:BD:8B:93:16'; - * if (cert.fingerprint256 !== cert256) { - * const msg = 'Certificate verification error: ' + - * `The certificate of '${cert.subject.CN}' ` + - * 'does not match our pinned fingerprint'; - * return new Error(msg); - * } - * - * // This loop is informational only. - * // Print the certificate and public key fingerprints of all certs in the - * // chain. Its common to pin the public key of the issuer on the public - * // internet, while pinning the public key of the service in sensitive - * // environments. - * do { - * console.log('Subject Common Name:', cert.subject.CN); - * console.log(' Certificate SHA256 fingerprint:', cert.fingerprint256); - * - * hash = crypto.createHash('sha256'); - * console.log(' Public key ping-sha256:', sha256(cert.pubkey)); - * - * lastprint256 = cert.fingerprint256; - * cert = cert.issuerCertificate; - * } while (cert.fingerprint256 !== lastprint256); - * - * }, - * }; - * - * options.agent = new https.Agent(options); - * const req = https.request(options, (res) => { - * console.log('All OK. 
Server matched our pinned cert or public key'); - * console.log('statusCode:', res.statusCode); - * // Print the HPKP values - * console.log('headers:', res.headers['public-key-pins']); - * - * res.on('data', (d) => {}); - * }); - * - * req.on('error', (e) => { - * console.error(e.message); - * }); - * req.end(); - * ``` - * - * Outputs for example: - * - * ```text - * Subject Common Name: github.com - * Certificate SHA256 fingerprint: 25:FE:39:32:D9:63:8C:8A:FC:A1:9A:29:87:D8:3E:4C:1D:98:DB:71:E4:1A:48:03:98:EA:22:6A:BD:8B:93:16 - * Public key ping-sha256: pL1+qb9HTMRZJmuC/bB/ZI9d302BYrrqiVuRyW+DGrU= - * Subject Common Name: DigiCert SHA2 Extended Validation Server CA - * Certificate SHA256 fingerprint: 40:3E:06:2A:26:53:05:91:13:28:5B:AF:80:A0:D4:AE:42:2C:84:8C:9F:78:FA:D0:1F:C9:4B:C5:B8:7F:EF:1A - * Public key ping-sha256: RRM1dGqnDFsCJXBTHky16vi1obOlCgFFn/yOhI/y+ho= - * Subject Common Name: DigiCert High Assurance EV Root CA - * Certificate SHA256 fingerprint: 74:31:E5:F4:C3:C1:CE:46:90:77:4F:0B:61:E0:54:40:88:3B:A9:A0:1E:D0:0B:A6:AB:D7:80:6E:D3:B1:18:CF - * Public key ping-sha256: WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18= - * All OK. Server matched our pinned cert or public key - * statusCode: 200 - * headers: max-age=0; pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18="; pin-sha256="RRM1dGqnDFsCJXBTHky16vi1obOlCgFFn/yOhI/y+ho="; - * pin-sha256="k2v657xBsOVe1PQRwOsHsw3bsGT2VzIqz5K+59sNQws="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="IQBnNBEiFuhj+8x6X8XLgh01V9Ic5/V3IRQLNFFc7v4="; - * pin-sha256="iie1VXtL7HzAMF+/PVPR9xzT80kQxdZeJ+zduCB3uj0="; pin-sha256="LvRiGEjRqfzurezaWuj8Wie2gyHMrW5Q06LspMnox7A="; includeSubDomains - * ``` - * @since v0.3.6 - * @param options Accepts all `options` from `request`, with some differences in default values: - */ - function request( - options: RequestOptions | string | URL, - callback?: (res: http.IncomingMessage) => void, - ): http.ClientRequest; - function request( - url: string | URL, - options: RequestOptions, - callback?: (res: http.IncomingMessage) => void, - ): http.ClientRequest; - /** - * Like `http.get()` but for HTTPS. - * - * `options` can be an object, a string, or a `URL` object. If `options` is a - * string, it is automatically parsed with `new URL()`. If it is a `URL` object, it will be automatically converted to an ordinary `options` object. - * - * ```js - * const https = require('https'); - * - * https.get('https://encrypted.google.com/', (res) => { - * console.log('statusCode:', res.statusCode); - * console.log('headers:', res.headers); - * - * res.on('data', (d) => { - * process.stdout.write(d); - * }); - * - * }).on('error', (e) => { - * console.error(e); - * }); - * ``` - * @since v0.3.6 - * @param options Accepts the same `options` as {@link request}, with the `method` always set to `GET`. 
- */ - function get( - options: RequestOptions | string | URL, - callback?: (res: http.IncomingMessage) => void, - ): http.ClientRequest; - function get( - url: string | URL, - options: RequestOptions, - callback?: (res: http.IncomingMessage) => void, - ): http.ClientRequest; - let globalAgent: Agent; -} -declare module 'node:https' { - export * from 'https'; -} diff --git a/spaces/fffiloni/lama-video-watermark-remover/bin/filter_sharded_dataset.py b/spaces/fffiloni/lama-video-watermark-remover/bin/filter_sharded_dataset.py deleted file mode 100644 index b3c2b490e88bb3b55c6bb717e08f97f7a396d5fa..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/bin/filter_sharded_dataset.py +++ /dev/null @@ -1,69 +0,0 @@ -#!/usr/bin/env python3 - - -import math -import os -import random - -import braceexpand -import webdataset as wds - -DEFAULT_CATS_FILE = os.path.join(os.path.dirname(__file__), '..', 'configs', 'places2-categories_157.txt') - -def is_good_key(key, cats): - return any(c in key for c in cats) - - -def main(args): - if args.categories == 'nofilter': - good_categories = None - else: - with open(args.categories, 'r') as f: - good_categories = set(line.strip().split(' ')[0] for line in f if line.strip()) - - all_input_files = list(braceexpand.braceexpand(args.infile)) - chunk_size = int(math.ceil(len(all_input_files) / args.n_read_streams)) - - input_iterators = [iter(wds.Dataset(all_input_files[start : start + chunk_size]).shuffle(args.shuffle_buffer)) - for start in range(0, len(all_input_files), chunk_size)] - output_datasets = [wds.ShardWriter(args.outpattern.format(i)) for i in range(args.n_write_streams)] - - good_readers = list(range(len(input_iterators))) - step_i = 0 - good_samples = 0 - bad_samples = 0 - while len(good_readers) > 0: - if step_i % args.print_freq == 0: - print(f'Iterations done {step_i}; readers alive {good_readers}; good samples {good_samples}; bad samples {bad_samples}') - - step_i += 1 - - ri = random.choice(good_readers) - try: - sample = next(input_iterators[ri]) - except StopIteration: - good_readers = list(set(good_readers) - {ri}) - continue - - if good_categories is not None and not is_good_key(sample['__key__'], good_categories): - bad_samples += 1 - continue - - wi = random.randint(0, args.n_write_streams - 1) - output_datasets[wi].write(sample) - good_samples += 1 - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('--categories', type=str, default=DEFAULT_CATS_FILE) - aparser.add_argument('--shuffle-buffer', type=int, default=10000) - aparser.add_argument('--n-read-streams', type=int, default=10) - aparser.add_argument('--n-write-streams', type=int, default=10) - aparser.add_argument('--print-freq', type=int, default=1000) - aparser.add_argument('infile', type=str) - aparser.add_argument('outpattern', type=str) - - main(aparser.parse_args()) diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/bodies_enum.js b/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/bodies_enum.js deleted file mode 100644 index fa09136f02c609c6ae58bf9b07521a8689f2c8ac..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/bodies_enum.js +++ /dev/null @@ -1,7 +0,0 @@ -let BodyTypesEnum = { - WALKER: 0, - SWIMMER: 1, - CLIMBER: 2, - AMPHIBIAN: 3, -}; - diff --git a/spaces/frncscp/bullerengue/musika/layers.py b/spaces/frncscp/bullerengue/musika/layers.py deleted file mode 100644 index 
1edd67a8d7d8c58be23211a4bc61a30e6cc703a4..0000000000000000000000000000000000000000 --- a/spaces/frncscp/bullerengue/musika/layers.py +++ /dev/null @@ -1,164 +0,0 @@ -import tensorflow as tf -import tensorflow.python.keras.backend as K -from tensorflow.python.eager import context -from tensorflow.python.ops import ( - gen_math_ops, - math_ops, - sparse_ops, - standard_ops, -) - - -def l2normalize(v, eps=1e-12): - return v / (tf.norm(v) + eps) - - -class ConvSN2D(tf.keras.layers.Conv2D): - def __init__(self, filters, kernel_size, power_iterations=1, datatype=tf.float32, **kwargs): - super(ConvSN2D, self).__init__(filters, kernel_size, **kwargs) - self.power_iterations = power_iterations - self.datatype = datatype - - def build(self, input_shape): - super(ConvSN2D, self).build(input_shape) - - if self.data_format == "channels_first": - channel_axis = 1 - else: - channel_axis = -1 - - self.u = self.add_weight( - self.name + "_u", - shape=tuple([1, self.kernel.shape.as_list()[-1]]), - initializer=tf.initializers.RandomNormal(0, 1), - trainable=False, - dtype=self.dtype, - ) - - def compute_spectral_norm(self, W, new_u, W_shape): - for _ in range(self.power_iterations): - - new_v = l2normalize(tf.matmul(new_u, tf.transpose(W))) - new_u = l2normalize(tf.matmul(new_v, W)) - - sigma = tf.matmul(tf.matmul(new_v, W), tf.transpose(new_u)) - W_bar = W / sigma - - with tf.control_dependencies([self.u.assign(new_u)]): - W_bar = tf.reshape(W_bar, W_shape) - - return W_bar - - def call(self, inputs): - W_shape = self.kernel.shape.as_list() - W_reshaped = tf.reshape(self.kernel, (-1, W_shape[-1])) - new_kernel = self.compute_spectral_norm(W_reshaped, self.u, W_shape) - outputs = self._convolution_op(inputs, new_kernel) - - if self.use_bias: - if self.data_format == "channels_first": - outputs = tf.nn.bias_add(outputs, self.bias, data_format="NCHW") - else: - outputs = tf.nn.bias_add(outputs, self.bias, data_format="NHWC") - if self.activation is not None: - return self.activation(outputs) - - return outputs - - -class DenseSN(tf.keras.layers.Dense): - def __init__(self, datatype=tf.float32, **kwargs): - super(DenseSN, self).__init__(**kwargs) - self.datatype = datatype - - def build(self, input_shape): - super(DenseSN, self).build(input_shape) - - self.u = self.add_weight( - self.name + "_u", - shape=tuple([1, self.kernel.shape.as_list()[-1]]), - initializer=tf.initializers.RandomNormal(0, 1), - trainable=False, - dtype=self.dtype, - ) - - def compute_spectral_norm(self, W, new_u, W_shape): - new_v = l2normalize(tf.matmul(new_u, tf.transpose(W))) - new_u = l2normalize(tf.matmul(new_v, W)) - sigma = tf.matmul(tf.matmul(new_v, W), tf.transpose(new_u)) - W_bar = W / sigma - with tf.control_dependencies([self.u.assign(new_u)]): - W_bar = tf.reshape(W_bar, W_shape) - return W_bar - - def call(self, inputs): - W_shape = self.kernel.shape.as_list() - W_reshaped = tf.reshape(self.kernel, (-1, W_shape[-1])) - new_kernel = self.compute_spectral_norm(W_reshaped, self.u, W_shape) - rank = len(inputs.shape) - if rank > 2: - outputs = standard_ops.tensordot(inputs, new_kernel, [[rank - 1], [0]]) - if not context.executing_eagerly(): - shape = inputs.shape.as_list() - output_shape = shape[:-1] + [self.units] - outputs.set_shape(output_shape) - else: - inputs = math_ops.cast(inputs, self._compute_dtype) - if K.is_sparse(inputs): - outputs = sparse_ops.sparse_tensor_dense_matmul(inputs, new_kernel) - else: - outputs = gen_math_ops.mat_mul(inputs, new_kernel) - if self.use_bias: - outputs = tf.nn.bias_add(outputs, self.bias) 
- if self.activation is not None: - return self.activation(outputs) - return outputs - - -class AddNoise(tf.keras.layers.Layer): - def __init__(self, datatype=tf.float32, **kwargs): - super(AddNoise, self).__init__(**kwargs) - self.datatype = datatype - - def build(self, input_shape): - self.b = self.add_weight( - shape=[ - 1, - ], - initializer=tf.keras.initializers.zeros(), - trainable=True, - name="noise_weight", - dtype=tf.float32, - ) - - def call(self, inputs): - rand = tf.random.normal( - [tf.shape(inputs)[0], inputs.shape[1], inputs.shape[2], 1], - mean=0.0, - stddev=1.0, - dtype=self.datatype, - ) - output = inputs + self.b * rand - return output - - -class PosEnc(tf.keras.layers.Layer): - def __init__(self, datatype=tf.float32, **kwargs): - super(PosEnc, self).__init__(**kwargs) - self.datatype = datatype - - def call(self, inputs): - pos = tf.repeat( - tf.reshape(tf.range(inputs.shape[-3], dtype=tf.int32), [1, -1, 1, 1]), - inputs.shape[-2], - -2, - ) - pos = tf.cast(tf.repeat(pos, tf.shape(inputs)[0], 0), self.dtype) / tf.cast(inputs.shape[-3], self.datatype) - return tf.concat([inputs, pos], -1) # [bs,1,hop,2] - - -def flatten_hw(x, data_format="channels_last"): - if data_format == "channels_last": - x = tf.transpose(x, perm=[0, 3, 1, 2]) # Convert to `channels_first` - - old_shape = tf.shape(x) diff --git a/spaces/gauravgarg/youtube_transcript/app.py b/spaces/gauravgarg/youtube_transcript/app.py deleted file mode 100644 index 0ae26e2c5b4f1e4bc7ef52fcbf6a27fee5e06069..0000000000000000000000000000000000000000 --- a/spaces/gauravgarg/youtube_transcript/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import streamlit as st -from youtube_transcript_api import YouTubeTranscriptApi -import urllib.parse as urlparse -from pytube import extract - -transcript = "" - - -def read_file(_id): - tx = YouTubeTranscriptApi.get_transcript(_id) - my_list = [] - for i in tx: - curr_str = i["text"].replace("\xa0", "").replace("\n", " ") - my_list.append(curr_str) - transcript = " ".join(str(x) for x in my_list) - st.markdown(transcript) - -st.title('Easily transcribe YouTube videos') - -link = st.text_input('Enter your YouTube video link', 'https://www.youtube.com/watch?v=WzBt4VJzfUI') - -if link is not None: - print(link) - _id=extract.video_id(link) - st.video(link) - - st.button('check_status', on_click=read_file(_id)) - - - - - - - - - \ No newline at end of file diff --git a/spaces/geekyrakshit/enhance-me/enhance_me/zero_dce/__init__.py b/spaces/geekyrakshit/enhance-me/enhance_me/zero_dce/__init__.py deleted file mode 100644 index c11296e6da27a4d470d3589212310dad5d272840..0000000000000000000000000000000000000000 --- a/spaces/geekyrakshit/enhance-me/enhance_me/zero_dce/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .zero_dce import ZeroDCE diff --git a/spaces/ghuron/artist/model.py b/spaces/ghuron/artist/model.py deleted file mode 100644 index 1416b326c7654b016a13fb530cc6e7258989f068..0000000000000000000000000000000000000000 --- a/spaces/ghuron/artist/model.py +++ /dev/null @@ -1,36 +0,0 @@ -from sentence_transformers import SentenceTransformer, util -import torch -import numpy as np - -from sql import get_article, get_index_articles - - -device = torch.device("cpu") - -model = SentenceTransformer('all-mpnet-base-v2').to(device) -embeddings = torch.load('embeddings/embeddings.pt', map_location=device) - - -def query(sentence, initial_year, final_year, n=5): - # embedding the query - query_embedding = model.encode(sentence, convert_to_tensor=True) - - # mask index - mask_indices = 
np.hstack(get_index_articles(initial_year, final_year)) - - # Use mask to select only relevant embeddings - masked_embeddings = embeddings[mask_indices] - - # Calculate cosine similarity with only the masked embeddings - cos_scores = util.pytorch_cos_sim(query_embedding, masked_embeddings)[0] - top_results = torch.topk(cos_scores, k=n) - - results = [] - for score, relative_idx in zip(top_results[0].cpu().numpy(), top_results[1].cpu().numpy()): - # Map the relative index back to the original embeddings' index - original_idx = mask_indices[relative_idx] - - article = get_article(original_idx) - results.append((article, score)) - - return results diff --git a/spaces/gligen/demo/gligen/ldm/modules/ema.py b/spaces/gligen/demo/gligen/ldm/modules/ema.py deleted file mode 100644 index c8c75af43565f6e140287644aaaefa97dd6e67c5..0000000000000000000000000000000000000000 --- a/spaces/gligen/demo/gligen/ldm/modules/ema.py +++ /dev/null @@ -1,76 +0,0 @@ -import torch -from torch import nn - - -class LitEma(nn.Module): - def __init__(self, model, decay=0.9999, use_num_upates=True): - super().__init__() - if decay < 0.0 or decay > 1.0: - raise ValueError('Decay must be between 0 and 1') - - self.m_name2s_name = {} - self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32)) - self.register_buffer('num_updates', torch.tensor(0,dtype=torch.int) if use_num_upates - else torch.tensor(-1,dtype=torch.int)) - - for name, p in model.named_parameters(): - if p.requires_grad: - #remove as '.'-character is not allowed in buffers - s_name = name.replace('.','') - self.m_name2s_name.update({name:s_name}) - self.register_buffer(s_name,p.clone().detach().data) - - self.collected_params = [] - - def forward(self,model): - decay = self.decay - - if self.num_updates >= 0: - self.num_updates += 1 - decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates)) - - one_minus_decay = 1.0 - decay - - with torch.no_grad(): - m_param = dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - - for key in m_param: - if m_param[key].requires_grad: - sname = self.m_name2s_name[key] - shadow_params[sname] = shadow_params[sname].type_as(m_param[key]) - shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key])) - else: - assert not key in self.m_name2s_name - - def copy_to(self, model): - m_param = dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - for key in m_param: - if m_param[key].requires_grad: - m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data) - else: - assert not key in self.m_name2s_name - - def store(self, parameters): - """ - Save the current parameters for restoring later. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - temporarily stored. - """ - self.collected_params = [param.clone() for param in parameters] - - def restore(self, parameters): - """ - Restore the parameters stored with the `store` method. - Useful to validate the model with EMA parameters without affecting the - original optimization process. Store the parameters before the - `copy_to` method. After validation (or model saving), use this to - restore the former parameters. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - updated with the stored parameters. 
- """ - for c_param, param in zip(self.collected_params, parameters): - param.data.copy_(c_param.data) diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Chicken Little (2005) [HDrip (AC3)] [Espanol] [Animacion Aventura] Descarga la pelicula que hizo reir a millones.md b/spaces/gotiQspiryo/whisper-ui/examples/Chicken Little (2005) [HDrip (AC3)] [Espanol] [Animacion Aventura] Descarga la pelicula que hizo reir a millones.md deleted file mode 100644 index c129ca656902694395c980226bf6baa4a9851be6..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Chicken Little (2005) [HDrip (AC3)] [Espanol] [Animacion Aventura] Descarga la pelicula que hizo reir a millones.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Chicken Little (2005) [HDrip (AC3)] [Espanol] [Animacion | Aventura]


      Download Filehttps://urlgoal.com/2uyNkL



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/gradio/HuBERT/fairseq/criterions/composite_loss.py b/spaces/gradio/HuBERT/fairseq/criterions/composite_loss.py deleted file mode 100644 index 98e835fa6e4c0bcad062df9c519701bf795c98be..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/criterions/composite_loss.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq import utils -from fairseq.criterions import LegacyFairseqCriterion, register_criterion -from torch import nn - - -@register_criterion("composite_loss") -class CompositeLoss(LegacyFairseqCriterion): - """This is a composite loss that, given a list of model outputs and a list of targets, - computes an average of losses for each output-target pair""" - - def __init__(self, args, task): - super().__init__(args, task) - self.underlying_criterion = args.underlying_criterion - - @staticmethod - def add_args(parser): - """Add criterion-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--underlying-criterion', type=str, metavar='VAL', required=True, - help='underlying criterion to use for the composite loss') - # fmt: on - - @staticmethod - def build_underlying_criterion(args, task): - saved_criterion = args.criterion - args.criterion = args.underlying_criterion - assert saved_criterion != args.underlying_criterion - underlying_criterion = task.build_criterion(args) - args.criterion = saved_criterion - return underlying_criterion - - @classmethod - def build_criterion(cls, args, task): - underlying_criterion = CompositeLoss.build_underlying_criterion(args, task) - - class FakeModel(nn.Module): - def __init__(self, model, net_out, target): - super().__init__() - self.model = model - self.net_out = net_out - self.target = target - - def forward(self, **unused): - return self.net_out - - def get_normalized_probs(self, net_output, log_probs, sample=None): - return self.model.get_normalized_probs( - net_output, log_probs, sample=sample - ) - - def get_targets(self, *unused): - return self.target - - @property - def decoder(self): - return self.model.decoder - - class _CompositeLoss(LegacyFairseqCriterion): - def __init__(self, args, task, underlying_criterion): - super().__init__(args, task) - self.underlying_criterion = underlying_criterion - - def forward(self, model, sample, reduce=True): - net_outputs = model(**sample["net_input"]) - targets = sample["target"] - - bsz = targets[0].size(0) - loss = net_outputs[0][0].new(1 if reduce else bsz).float().zero_() - - sample_size = 0 - logging_output = {} - for o, t in zip(net_outputs[0], targets): - m = FakeModel(model, (o, net_outputs[1]), t) - sample["target"] = t - l, ss, logging_output = self.underlying_criterion(m, sample, reduce) - loss += l - sample_size += ss - - loss.div_(len(targets)) - sample_size /= len(targets) - - logging_output["loss"] = utils.item(loss.data) if reduce else loss.data - return loss, sample_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - return underlying_criterion.__class__.aggregate_logging_outputs( - logging_outputs - ) - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - underlying_criterion.__class__.reduce_metrics(logging_outputs) - - return _CompositeLoss(args, task, underlying_criterion) diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/style_mixing.py 
b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/style_mixing.py deleted file mode 100644 index 022912df133bd977364786f90d6ae635292dc135..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/style_mixing.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. -# - - -import os -import re -from typing import List -import legacy - -import click -import dnnlib -import numpy as np -import PIL.Image -import torch - -""" -Style mixing using pretrained network pickle. - -Examples: - -\b -python style_mixing.py --network=pretrained_models/stylegan_human_v2_1024.pkl --rows=85,100,75,458,1500 \\ - --cols=55,821,1789,293 --styles=0-3 --outdir=outputs/stylemixing -""" - - -@click.command() -@click.option('--network', 'network_pkl', help='Network pickle filename', required=True) -@click.option('--rows', 'row_seeds', type=legacy.num_range, help='Random seeds to use for image rows', required=True) -@click.option('--cols', 'col_seeds', type=legacy.num_range, help='Random seeds to use for image columns', required=True) -@click.option('--styles', 'col_styles', type=legacy.num_range, help='Style layer range', default='0-6', show_default=True) -@click.option('--trunc', 'truncation_psi', type=float, help='Truncation psi', default=0.8, show_default=True) -@click.option('--noise-mode', help='Noise mode', type=click.Choice(['const', 'random', 'none']), default='const', show_default=True) -@click.option('--outdir', type=str, required=True, default='outputs/stylemixing') -def generate_style_mix( - network_pkl: str, - row_seeds: List[int], - col_seeds: List[int], - col_styles: List[int], - truncation_psi: float, - noise_mode: str, - outdir: str -): - - print('Loading networks from "%s"...' 
% network_pkl) - device = torch.device('cuda') - with dnnlib.util.open_url(network_pkl) as f: - G = legacy.load_network_pkl(f)['G_ema'].to(device) - - os.makedirs(outdir, exist_ok=True) - - print('Generating W vectors...') - all_seeds = list(set(row_seeds + col_seeds)) - all_z = np.stack([np.random.RandomState(seed).randn(G.z_dim) - for seed in all_seeds]) - all_w = G.mapping(torch.from_numpy(all_z).to(device), None) - w_avg = G.mapping.w_avg - all_w = w_avg + (all_w - w_avg) * truncation_psi - w_dict = {seed: w for seed, w in zip(all_seeds, list(all_w))} - - print('Generating images...') - all_images = G.synthesis(all_w, noise_mode=noise_mode) - all_images = (all_images.permute(0, 2, 3, 1) * 127.5 + - 128).clamp(0, 255).to(torch.uint8).cpu().numpy() - image_dict = {(seed, seed): image for seed, - image in zip(all_seeds, list(all_images))} - - print('Generating style-mixed images...') - for row_seed in row_seeds: - for col_seed in col_seeds: - w = w_dict[row_seed].clone() - w[col_styles] = w_dict[col_seed][col_styles] - image = G.synthesis(w[np.newaxis], noise_mode=noise_mode) - image = (image.permute(0, 2, 3, 1) * 127.5 + - 128).clamp(0, 255).to(torch.uint8) - image_dict[(row_seed, col_seed)] = image[0].cpu().numpy() - - os.makedirs(outdir, exist_ok=True) - # print('Saving images...') - # for (row_seed, col_seed), image in image_dict.items(): - # PIL.Image.fromarray(image, 'RGB').save(f'{outdir}/{row_seed}-{col_seed}.png') - - print('Saving image grid...') - W = G.img_resolution // 2 - H = G.img_resolution - canvas = PIL.Image.new( - 'RGB', (W * (len(col_seeds) + 1), H * (len(row_seeds) + 1)), 'black') - for row_idx, row_seed in enumerate([0] + row_seeds): - for col_idx, col_seed in enumerate([0] + col_seeds): - if row_idx == 0 and col_idx == 0: - continue - key = (row_seed, col_seed) - if row_idx == 0: - key = (col_seed, col_seed) - if col_idx == 0: - key = (row_seed, row_seed) - canvas.paste(PIL.Image.fromarray( - image_dict[key], 'RGB'), (W * col_idx, H * row_idx)) - canvas.save(f'{outdir}/grid.png') - - -# ---------------------------------------------------------------------------- - -if __name__ == "__main__": - generate_style_mix() # pylint: disable=no-value-for-parameter - -# ---------------------------------------------------------------------------- diff --git a/spaces/h2oai/wave-tour/examples/canvas.py b/spaces/h2oai/wave-tour/examples/canvas.py deleted file mode 100644 index e13b25f0a24223c07939e385e068f5750da33eb3..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/canvas.py +++ /dev/null @@ -1,19 +0,0 @@ -# Canvas -# A card that displays a freeform drawing canvas. -# A canvas card can synchronize its state with other canvas cards at the same URL. -# Open `/demo` in multiple browsers and watch them synchronize in realtime. -# #collaboration -# --- -from h2o_wave import site, data, ui - -page = site['/demo'] -page.drop() - -page.add('example', ui.canvas_card( - box='1 1 4 7', - title='Sample Canvas', - width=500, - height=500, - data=dict(), -)) -page.save() diff --git a/spaces/h2oai/wave-tour/examples/table_pagination.py b/spaces/h2oai/wave-tour/examples/table_pagination.py deleted file mode 100644 index e036809c1f4373bc4503a830e6ee7367c601284d..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/table_pagination.py +++ /dev/null @@ -1,110 +0,0 @@ -# Table / Pagination -# Use a paginated #table to display large (100k+ rows) tabular data. 
-# #form #table #pagination -# --- - -import os -from typing import Dict, List -from h2o_wave import main, app, Q, ui -from copy import deepcopy -import csv - - -# Create a dummy data blueprint. -class Issue: - def __init__(self, text: str, status: str): - self.text = text - self.status = status - - -all_rows = [Issue(text=i + 1, status=('Closed' if i % 2 == 0 else 'Open')) for i in range(100)] -rows_per_page = 10 -total_rows = len(all_rows) - - -def get_rows(base: List, sort: Dict[str, bool] = None, search: Dict = None, filters: Dict[str, List[str]] = None) -> List: - # Make a deep copy in order to not mutate the original `all_issues` which serves as our baseline. - rows = deepcopy(base) - - # Sort by multiple columns. - if sort: - for col, reverse in sort.items(): - rows.sort(key=lambda i: getattr(i, col), reverse=reverse) - # Filter out all rows that do not contain searched string. - if search: - search_val = search['value'].lower() - cols = search['cols'] - rows = [row for row in rows if any(search_val in str(getattr(row, col)).lower() for col in cols)] - # Filter out rows that do not contain filtered column value. - if filters: - for col, filters in filters.items(): - rows = [row for row in rows if not filters or any(f in getattr(row, col) for f in filters)] - - return rows - - -@app('/demo') -async def serve(q: Q): - if not q.client.initialized: - q.page['meta'] = ui.meta_card(box='') - q.page['form'] = ui.form_card(box='1 1 -1 -1', items=[ - ui.table( - name='table', - columns=[ - ui.table_column(name='text', label='Text', sortable=True, searchable=True, link=False), - ui.table_column(name='status', label='Status', filterable=True, filters=['Open', 'Closed']), - ], - rows=[ui.table_row(str(r.text), [str(r.text), r.status]) for r in get_rows(all_rows)[0:rows_per_page]], - resettable=True, - downloadable=True, - pagination=ui.table_pagination(total_rows=len(all_rows), rows_per_page=rows_per_page), - # Make sure to register the necessary events for the feature you want to support, e.g. sorting. - # All the registered events have to be handled by the developer. - # `page_change` event is required to be handled for pagination to work. - events=['sort', 'filter', 'search', 'page_change', 'download', 'reset'] - ) - ]) - q.client.initialized = True - - # Check if user triggered any table action and save it to local state for allowing multiple - # actions to be performed on the data at the same time, e.g. sort the filtered data etc. - if q.events.table: - table = q.page['form'].table - if q.events.table.sort: - q.client.sort = q.events.table.sort - q.client.page_offset = 0 - if q.events.table.filter: - q.client.filters = q.events.table.filter - q.client.page_offset = 0 - if q.events.table.search is not None: - q.client.search = q.events.table.search - q.client.page_offset = 0 - if q.events.table.page_change: - q.client.page_offset = q.events.table.page_change.get('offset', 0) - if q.events.table.reset: - q.client.search = None - q.client.sort = None - q.client.filters = None - q.client.page_offset = 0 - table.pagination = ui.table_pagination(total_rows, rows_per_page) - - rows = get_rows(all_rows, q.client.sort, q.client.search, q.client.filters) - offset = q.client.page_offset or 0 - table.rows = [ui.table_row(str(r.text), [str(r.text), r.status]) for r in rows[offset: offset + rows_per_page]] - - # Update table pagination according to the new row count. 
- if q.client.search is not None or q.client.filters: - table.pagination = ui.table_pagination(len(rows), rows_per_page) - - if q.events.table.download: - # For multi-user apps, the tmp file name should be unique for each user, not hardcoded. - with open('data_download.csv', 'w') as csvfile: - csv_writer = csv.writer(csvfile, delimiter=',') - for r in rows: - csv_writer.writerow([r.text, r.status]) - download_url, = await q.site.upload(['data_download.csv']) - # Clean up the file after upload. - os.remove('data_download.csv') - q.page['meta'].script = ui.inline_script(f'window.open("{download_url}")') - - await q.page.save() diff --git a/spaces/hahahafofo/image2text_prompt_generator/utils/html.py b/spaces/hahahafofo/image2text_prompt_generator/utils/html.py deleted file mode 100644 index b0edb1ae05b25f21b6e71756361acfc5a7c7efcb..0000000000000000000000000000000000000000 --- a/spaces/hahahafofo/image2text_prompt_generator/utils/html.py +++ /dev/null @@ -1,8 +0,0 @@ -import html - - -def plaintext_to_html(text): - text = ( - "

      " + "
      \n".join([f"{html.escape(x)}" for x in text.split("\n")]) + "

      " - ) - return text diff --git a/spaces/hanstyle/tts/evaluation/gen_videos_from_filelist.py b/spaces/hanstyle/tts/evaluation/gen_videos_from_filelist.py deleted file mode 100644 index bd666b93258d3da3143a63da742265ebeac2a8a3..0000000000000000000000000000000000000000 --- a/spaces/hanstyle/tts/evaluation/gen_videos_from_filelist.py +++ /dev/null @@ -1,238 +0,0 @@ -from os import listdir, path -import numpy as np -import scipy, cv2, os, sys, argparse -import dlib, json, subprocess -from tqdm import tqdm -from glob import glob -import torch - -sys.path.append('../') -import audio -import face_detection -from models import Wav2Lip - -parser = argparse.ArgumentParser(description='Code to generate results for test filelists') - -parser.add_argument('--filelist', type=str, - help='Filepath of filelist file to read', required=True) -parser.add_argument('--results_dir', type=str, help='Folder to save all results into', - required=True) -parser.add_argument('--data_root', type=str, required=True) -parser.add_argument('--checkpoint_path', type=str, - help='Name of saved checkpoint to load weights from', required=True) - -parser.add_argument('--pads', nargs='+', type=int, default=[0, 0, 0, 0], - help='Padding (top, bottom, left, right)') -parser.add_argument('--face_det_batch_size', type=int, - help='Single GPU batch size for face detection', default=64) -parser.add_argument('--wav2lip_batch_size', type=int, help='Batch size for Wav2Lip', default=128) - -# parser.add_argument('--resize_factor', default=1, type=int) - -args = parser.parse_args() -args.img_size = 96 - -def get_smoothened_boxes(boxes, T): - for i in range(len(boxes)): - if i + T > len(boxes): - window = boxes[len(boxes) - T:] - else: - window = boxes[i : i + T] - boxes[i] = np.mean(window, axis=0) - return boxes - -def face_detect(images): - batch_size = args.face_det_batch_size - - while 1: - predictions = [] - try: - for i in range(0, len(images), batch_size): - predictions.extend(detector.get_detections_for_batch(np.array(images[i:i + batch_size]))) - except RuntimeError: - if batch_size == 1: - raise RuntimeError('Image too big to run face detection on GPU') - batch_size //= 2 - args.face_det_batch_size = batch_size - print('Recovering from OOM error; New batch size: {}'.format(batch_size)) - continue - break - - results = [] - pady1, pady2, padx1, padx2 = args.pads - for rect, image in zip(predictions, images): - if rect is None: - raise ValueError('Face not detected!') - - y1 = max(0, rect[1] - pady1) - y2 = min(image.shape[0], rect[3] + pady2) - x1 = max(0, rect[0] - padx1) - x2 = min(image.shape[1], rect[2] + padx2) - - results.append([x1, y1, x2, y2]) - - boxes = get_smoothened_boxes(np.array(results), T=5) - results = [[image[y1: y2, x1:x2], (y1, y2, x1, x2), True] for image, (x1, y1, x2, y2) in zip(images, boxes)] - - return results - -def datagen(frames, face_det_results, mels): - img_batch, mel_batch, frame_batch, coords_batch = [], [], [], [] - - for i, m in enumerate(mels): - if i >= len(frames): raise ValueError('Equal or less lengths only') - - frame_to_save = frames[i].copy() - face, coords, valid_frame = face_det_results[i].copy() - if not valid_frame: - continue - - face = cv2.resize(face, (args.img_size, args.img_size)) - - img_batch.append(face) - mel_batch.append(m) - frame_batch.append(frame_to_save) - coords_batch.append(coords) - - if len(img_batch) >= args.wav2lip_batch_size: - img_batch, mel_batch = np.asarray(img_batch), np.asarray(mel_batch) - - img_masked = img_batch.copy() - img_masked[:, 
args.img_size//2:] = 0 - - img_batch = np.concatenate((img_masked, img_batch), axis=3) / 255. - mel_batch = np.reshape(mel_batch, [len(mel_batch), mel_batch.shape[1], mel_batch.shape[2], 1]) - - yield img_batch, mel_batch, frame_batch, coords_batch - img_batch, mel_batch, frame_batch, coords_batch = [], [], [], [] - - if len(img_batch) > 0: - img_batch, mel_batch = np.asarray(img_batch), np.asarray(mel_batch) - - img_masked = img_batch.copy() - img_masked[:, args.img_size//2:] = 0 - - img_batch = np.concatenate((img_masked, img_batch), axis=3) / 255. - mel_batch = np.reshape(mel_batch, [len(mel_batch), mel_batch.shape[1], mel_batch.shape[2], 1]) - - yield img_batch, mel_batch, frame_batch, coords_batch - -fps = 25 -mel_step_size = 16 -mel_idx_multiplier = 80./fps -device = 'cuda' if torch.cuda.is_available() else 'cpu' -print('Using {} for inference.'.format(device)) - -detector = face_detection.FaceAlignment(face_detection.LandmarksType._2D, - flip_input=False, device=device) - -def _load(checkpoint_path): - if device == 'cuda': - checkpoint = torch.load(checkpoint_path) - else: - checkpoint = torch.load(checkpoint_path, - map_location=lambda storage, loc: storage) - return checkpoint - -def load_model(path): - model = Wav2Lip() - print("Load checkpoint from: {}".format(path)) - checkpoint = _load(path) - s = checkpoint["state_dict"] - new_s = {} - for k, v in s.items(): - new_s[k.replace('module.', '')] = v - model.load_state_dict(new_s) - - model = model.to(device) - return model.eval() - -model = load_model(args.checkpoint_path) - -def main(): - assert args.data_root is not None - data_root = args.data_root - - if not os.path.isdir(args.results_dir): os.makedirs(args.results_dir) - - with open(args.filelist, 'r') as filelist: - lines = filelist.readlines() - - for idx, line in enumerate(tqdm(lines)): - audio_src, video = line.strip().split() - - audio_src = os.path.join(data_root, audio_src) + '.mp4' - video = os.path.join(data_root, video) + '.mp4' - - command = 'ffmpeg -loglevel panic -y -i {} -strict -2 {}'.format(audio_src, '../temp/temp.wav') - subprocess.call(command, shell=True) - temp_audio = '../temp/temp.wav' - - wav = audio.load_wav(temp_audio, 16000) - mel = audio.melspectrogram(wav) - if np.isnan(mel.reshape(-1)).sum() > 0: - continue - - mel_chunks = [] - i = 0 - while 1: - start_idx = int(i * mel_idx_multiplier) - if start_idx + mel_step_size > len(mel[0]): - break - mel_chunks.append(mel[:, start_idx : start_idx + mel_step_size]) - i += 1 - - video_stream = cv2.VideoCapture(video) - - full_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading or len(full_frames) > len(mel_chunks): - video_stream.release() - break - full_frames.append(frame) - - if len(full_frames) < len(mel_chunks): - continue - - full_frames = full_frames[:len(mel_chunks)] - - try: - face_det_results = face_detect(full_frames.copy()) - except ValueError as e: - continue - - batch_size = args.wav2lip_batch_size - gen = datagen(full_frames.copy(), face_det_results, mel_chunks) - - for i, (img_batch, mel_batch, frames, coords) in enumerate(gen): - if i == 0: - frame_h, frame_w = full_frames[0].shape[:-1] - out = cv2.VideoWriter('../temp/result.avi', - cv2.VideoWriter_fourcc(*'DIVX'), fps, (frame_w, frame_h)) - - img_batch = torch.FloatTensor(np.transpose(img_batch, (0, 3, 1, 2))).to(device) - mel_batch = torch.FloatTensor(np.transpose(mel_batch, (0, 3, 1, 2))).to(device) - - with torch.no_grad(): - pred = model(mel_batch, img_batch) - - - pred = 
pred.cpu().numpy().transpose(0, 2, 3, 1) * 255. - - for pl, f, c in zip(pred, frames, coords): - y1, y2, x1, x2 = c - pl = cv2.resize(pl.astype(np.uint8), (x2 - x1, y2 - y1)) - f[y1:y2, x1:x2] = pl - out.write(f) - - out.release() - - vid = os.path.join(args.results_dir, '{}.mp4'.format(idx)) - - command = 'ffmpeg -loglevel panic -y -i {} -i {} -strict -2 -q:v 1 {}'.format(temp_audio, - '../temp/result.avi', vid) - subprocess.call(command, shell=True) - -if __name__ == '__main__': - main() diff --git a/spaces/hardon-server/dalle-mini/index.html b/spaces/hardon-server/dalle-mini/index.html deleted file mode 100644 index 6882e87e6effaebbed14fb56d0f47b2a2214417d..0000000000000000000000000000000000000000 --- a/spaces/hardon-server/dalle-mini/index.html +++ /dev/null @@ -1,243 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - -
      - - - diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/data/datasets/builtin.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/data/datasets/builtin.py deleted file mode 100644 index e70f3d3e006d1801dcfb743c9c21b46ca54a3053..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/data/datasets/builtin.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from .coco import BASE_DATASETS as BASE_COCO_DATASETS -from .coco import DATASETS as COCO_DATASETS -from .coco import register_datasets as register_coco_datasets - -DEFAULT_DATASETS_ROOT = "data" - - -register_coco_datasets(COCO_DATASETS, DEFAULT_DATASETS_ROOT) -register_coco_datasets(BASE_COCO_DATASETS, DEFAULT_DATASETS_ROOT) diff --git a/spaces/hasibzunair/fifa-tryon-demo/util/image_pool.py b/spaces/hasibzunair/fifa-tryon-demo/util/image_pool.py deleted file mode 100644 index deeb7cca923330113afa0f460723b9d86e321c08..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/util/image_pool.py +++ /dev/null @@ -1,33 +0,0 @@ -import random -import torch -from torch.autograd import Variable - - -class ImagePool(): - def __init__(self, pool_size): - self.pool_size = pool_size - if self.pool_size > 0: - self.num_imgs = 0 - self.images = [] - - def query(self, images): - if self.pool_size == 0: - return images - return_images = [] - for image in images.data: - image = torch.unsqueeze(image, 0) - if self.num_imgs < self.pool_size: - self.num_imgs = self.num_imgs + 1 - self.images.append(image) - return_images.append(image) - else: - p = random.uniform(0, 1) - if p > 0.5: - random_id = random.randint(0, self.pool_size-1) - tmp = self.images[random_id].clone() - self.images[random_id] = image - return_images.append(tmp) - else: - return_images.append(image) - return_images = Variable(torch.cat(return_images, 0)) - return return_images diff --git a/spaces/hf-vision/detection_metrics/detection_metrics/pycocotools/mask.py b/spaces/hf-vision/detection_metrics/detection_metrics/pycocotools/mask.py deleted file mode 100644 index f39db4fe729e2cc9c3372edaca6c40fd21525db2..0000000000000000000000000000000000000000 --- a/spaces/hf-vision/detection_metrics/detection_metrics/pycocotools/mask.py +++ /dev/null @@ -1,103 +0,0 @@ -__author__ = 'tsungyi' - -from detection_metrics.pycocotools import _mask - -# Interface for manipulating masks stored in RLE format. -# -# RLE is a simple yet efficient format for storing binary masks. RLE -# first divides a vector (or vectorized image) into a series of piecewise -# constant regions and then for each piece simply stores the length of -# that piece. For example, given M=[0 0 1 1 1 0 1] the RLE counts would -# be [2 3 1 1], or for M=[1 1 1 1 1 1 0] the counts would be [0 6 1] -# (note that the odd counts are always the numbers of zeros). Instead of -# storing the counts directly, additional compression is achieved with a -# variable bitrate representation based on a common scheme called LEB128. -# -# Compression is greatest given large piecewise constant regions. -# Specifically, the size of the RLE is proportional to the number of -# *boundaries* in M (or for an image the number of boundaries in the y -# direction). 
Assuming fairly simple shapes, the RLE representation is -# O(sqrt(n)) where n is number of pixels in the object. Hence space usage -# is substantially lower, especially for large simple objects (large n). -# -# Many common operations on masks can be computed directly using the RLE -# (without need for decoding). This includes computations such as area, -# union, intersection, etc. All of these operations are linear in the -# size of the RLE, in other words they are O(sqrt(n)) where n is the area -# of the object. Computing these operations on the original mask is O(n). -# Thus, using the RLE can result in substantial computational savings. -# -# The following API functions are defined: -# encode - Encode binary masks using RLE. -# decode - Decode binary masks encoded via RLE. -# merge - Compute union or intersection of encoded masks. -# iou - Compute intersection over union between masks. -# area - Compute area of encoded masks. -# toBbox - Get bounding boxes surrounding encoded masks. -# frPyObjects - Convert polygon, bbox, and uncompressed RLE to encoded RLE mask. -# -# Usage: -# Rs = encode( masks ) -# masks = decode( Rs ) -# R = merge( Rs, intersect=false ) -# o = iou( dt, gt, iscrowd ) -# a = area( Rs ) -# bbs = toBbox( Rs ) -# Rs = frPyObjects( [pyObjects], h, w ) -# -# In the API the following formats are used: -# Rs - [dict] Run-length encoding of binary masks -# R - dict Run-length encoding of binary mask -# masks - [hxwxn] Binary mask(s) (must have type np.ndarray(dtype=uint8) in column-major order) -# iscrowd - [nx1] list of np.ndarray. 1 indicates corresponding gt image has crowd region to ignore -# bbs - [nx4] Bounding box(es) stored as [x y w h] -# poly - Polygon stored as [[x1 y1 x2 y2...],[x1 y1 ...],...] (2D list) -# dt,gt - May be either bounding boxes or encoded masks -# Both poly and bbs are 0-indexed (bbox=[0 0 1 1] encloses first pixel). -# -# Finally, a note about the intersection over union (iou) computation. -# The standard iou of a ground truth (gt) and detected (dt) object is -# iou(gt,dt) = area(intersect(gt,dt)) / area(union(gt,dt)) -# For "crowd" regions, we use a modified criteria. If a gt object is -# marked as "iscrowd", we allow a dt to match any subregion of the gt. -# Choosing gt' in the crowd gt that best matches the dt can be done using -# gt'=intersect(dt,gt). Since by definition union(gt',dt)=dt, computing -# iou(gt,dt,iscrowd) = iou(gt',dt) = area(intersect(gt,dt)) / area(dt) -# For crowd gt regions we use this modified criteria above for the iou. -# -# To compile run "python setup.py build_ext --inplace" -# Please do not contact us for help with compiling. -# -# Microsoft COCO Toolbox. version 2.0 -# Data, paper, and tutorials available at: http://mscoco.org/ -# Code written by Piotr Dollar and Tsung-Yi Lin, 2015. 
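As a quick illustration of the uncompressed "counts" idea described in the comment above (not the LEB128-compressed encoding that pycocotools actually stores), here is a minimal sketch; the helper name `rle_counts` is hypothetical and only reproduces the two examples already given, M=[0 0 1 1 1 0 1] -> [2 3 1 1] and M=[1 1 1 1 1 1 0] -> [0 6 1]:

```python
def rle_counts(mask_vector):
    # Run lengths of a binary vector, always starting with the length of the
    # initial run of zeros (0 if the vector starts with ones), as described above.
    counts = []
    current, run = 0, 0  # RLE starts by counting zeros
    for v in mask_vector:
        if v == current:
            run += 1
        else:
            counts.append(run)
            current, run = v, 1
    counts.append(run)
    return counts

assert rle_counts([0, 0, 1, 1, 1, 0, 1]) == [2, 3, 1, 1]
assert rle_counts([1, 1, 1, 1, 1, 1, 0]) == [0, 6, 1]
```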
-# Licensed under the Simplified BSD License [see coco/license.txt] - -iou = _mask.iou -merge = _mask.merge -frPyObjects = _mask.frPyObjects - -def encode(bimask): - if len(bimask.shape) == 3: - return _mask.encode(bimask) - elif len(bimask.shape) == 2: - h, w = bimask.shape - return _mask.encode(bimask.reshape((h, w, 1), order='F'))[0] - -def decode(rleObjs): - if type(rleObjs) == list: - return _mask.decode(rleObjs) - else: - return _mask.decode([rleObjs])[:,:,0] - -def area(rleObjs): - if type(rleObjs) == list: - return _mask.area(rleObjs) - else: - return _mask.area([rleObjs])[0] - -def toBbox(rleObjs): - if type(rleObjs) == list: - return _mask.toBbox(rleObjs) - else: - return _mask.toBbox([rleObjs])[0] \ No newline at end of file diff --git a/spaces/hiraltalsaniya/YOLOv7_face_mask/README.md b/spaces/hiraltalsaniya/YOLOv7_face_mask/README.md deleted file mode 100644 index 5886a43fad9f61a3c90ff2b47036167e57aed5be..0000000000000000000000000000000000000000 --- a/spaces/hiraltalsaniya/YOLOv7_face_mask/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: YOLOv7 Face Mask -emoji: 🐠 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hololee/dreambooth-training/app.py b/spaces/hololee/dreambooth-training/app.py deleted file mode 100644 index f7d90f7250ccac1b7d250062b6d3348124acdf4e..0000000000000000000000000000000000000000 --- a/spaces/hololee/dreambooth-training/app.py +++ /dev/null @@ -1,687 +0,0 @@ -from subprocess import getoutput -import os - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - which_gpu = "A10G" - os.system(f"pip install --no-deps xformers==0.0.16rc425") -elif("T4" in gpu_info): - which_gpu = "T4" - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") -else: - which_gpu = "CPU" - -import gradio as gr -from pathlib import Path -import argparse -import shutil -from train_dreambooth import run_training -from convertosd import convert -from PIL import Image -from slugify import slugify -import requests -import torch -import zipfile -import tarfile -import urllib.parse -import gc -from diffusers import StableDiffusionPipeline -from huggingface_hub import snapshot_download, update_repo_visibility, HfApi - -is_spaces = True if "SPACE_ID" in os.environ else False -if(is_spaces): - is_shared_ui = True if "multimodalart/dreambooth-training" in os.environ['SPACE_ID'] else False -else: - is_shared_ui = False -is_gpu_associated = torch.cuda.is_available() - -os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" - -if(is_gpu_associated): - model_v1 = snapshot_download(repo_id="multimodalart/sd-fine-tunable") - model_v2 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1", ignore_patterns=["*.ckpt", "*.safetensors"]) - model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1-base", ignore_patterns=["*.ckpt", "*.safetensors"]) - safety_checker = snapshot_download(repo_id="multimodalart/sd-sc") - model_to_load = model_v1 - -def swap_base_model(selected_model): - if(is_gpu_associated): - global model_to_load - if(selected_model == "v1-5"): - model_to_load = model_v1 - elif(selected_model == "v2-1-768"): - model_to_load = model_v2 - else: - model_to_load = model_v2_512 - - - -css = ''' - .instruction{position: absolute; top: 0;right: 0;margin-top: 0px 
!important} - .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important} - #component-4, #component-3, #component-10{min-height: 0} - .duplicate-button img{margin: 0} -''' -maximum_concepts = 3 - -def swap_text(option, base): - resize_width = 768 if base == "v2-1-768" else 512 - mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:" - if(option == "object"): - instance_prompt_example = "cttoy" - freeze_for = 30 - return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, gr.update(visible=False)] - elif(option == "person"): - instance_prompt_example = "julcto" - freeze_for = 70 - #show_prior_preservation = True if base != "v2-1-768" else False - show_prior_preservation=False - if(show_prior_preservation): - prior_preservation_box_update = gr.update(visible=show_prior_preservation) - else: - prior_preservation_box_update = gr.update(visible=show_prior_preservation, value=False) - return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, prior_preservation_box_update] - elif(option == "style"): - instance_prompt_example = "trsldamrl" - freeze_for = 10 - return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}", freeze_for, gr.update(visible=False)] - -def count_files(*inputs): - file_counter = 0 - concept_counter = 0 - for i, input in enumerate(inputs): - if(i < maximum_concepts): - files = inputs[i] - if(files): - concept_counter+=1 - file_counter+=len(files) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - selected_model = inputs[-5] - experimental_faces = inputs[-6] - if(uses_custom): - Training_Steps = int(inputs[-3]) - else: - Training_Steps = file_counter*150 - if(type_of_thing == "person" and Training_Steps > 2400): - Training_Steps = 2400 #Avoid overfitting on person faces - if(is_spaces): - if(selected_model == "v1-5"): - its = 1.1 if which_gpu == "T4" else 1.8 - if(experimental_faces): - its = 1 - elif(selected_model == "v2-1-512"): - its = 0.8 if which_gpu == "T4" else 1.5 - if(experimental_faces): - its = 0.7 - elif(selected_model == "v2-1-768"): - its = 0.48 if which_gpu == "T4" else 0.85 - - gpu_price = 0.60 if which_gpu == "T4" else 1.10 - summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. 
The training should take around {round(Training_Steps/its, 2)} seconds, or {round((Training_Steps/its)/60, 2)} minutes. - The setup, compression, and uploading of the model can take up to 20 minutes.
      As the {which_gpu}-Small GPU costs US${gpu_price} for 1h, the estimated cost for this training is below US${round((((Training_Steps/its)/3600)+0.3+0.1)*gpu_price, 2)}.

      - If you check the box below, the GPU attribution will automatically be removed after training is done and the model is uploaded. If not, don't forget to come back here and swap the hardware back to CPU.

      ''' - else: - summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.

      ''' - - return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)]) - -def update_steps(*files_list): - file_counter = 0 - for i, files in enumerate(files_list): - if(files): - file_counter+=len(files) - return(gr.update(value=file_counter*200)) - -def visualise_progress_bar(): - return gr.update(visible=True) - -def pad_image(image): - w, h = image.size - if w == h: - return image - elif w > h: - new_image = Image.new(image.mode, (w, w), (0, 0, 0)) - new_image.paste(image, (0, (w - h) // 2)) - return new_image - else: - new_image = Image.new(image.mode, (h, h), (0, 0, 0)) - new_image.paste(image, ((h - w) // 2, 0)) - return new_image - -def validate_model_upload(hf_token, model_name): - if(hf_token != ''): - api = HfApi() - try: - _ = api.whoami(hf_token) - except: - raise gr.Error("You have inserted an invalid Hugging Face token") - try: - if(is_spaces): - update_repo_visibility(repo_id=os.environ['SPACE_ID'], private=True, token=hf_token, repo_type="space") - except: - raise gr.Error("Oops, you created a Hugging Face token with read permissions only. You need one with write permissions") - else: - raise gr.Error("Please insert a Hugging Face Token (make sure to create it with write permissions)") - if(model_name == ""): - raise gr.Error("Please fill in your model's name") - -def swap_hardware(hf_token, hardware="cpu-basic"): - hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware" - headers = { "authorization" : f"Bearer {hf_token}"} - body = {'flavor': hardware} - requests.post(hardware_url, json = body, headers=headers) - -def swap_sleep_time(hf_token,sleep_time): - sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}/sleeptime" - headers = { "authorization" : f"Bearer {hf_token}"} - body = {'seconds':sleep_time} - requests.post(sleep_time_url,json=body,headers=headers) - -def get_sleep_time(hf_token): - sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}" - headers = { "authorization" : f"Bearer {hf_token}"} - response = requests.get(sleep_time_url,headers=headers) - try: - gcTimeout = response.json()['runtime']['gcTimeout'] - except: - gcTimeout = None - return gcTimeout - -def write_to_community(title, description,hf_token): - from huggingface_hub import HfApi - api = HfApi() - api.create_discussion(repo_id=os.environ['SPACE_ID'], title=title, description=description,repo_type="space", token=hf_token) - -def train(progress=gr.Progress(track_tqdm=True), *inputs): - which_model = inputs[-10] - if(which_model == ""): - raise gr.Error("You forgot to select a base model to use") - - if is_shared_ui: - raise gr.Error("This Space only works in duplicated instances") - if not is_gpu_associated: - raise gr.Error("Please associate a T4 or A10G GPU for this Space") - hf_token = inputs[-5] - model_name = inputs[-7] - if(is_spaces): - sleep_time = get_sleep_time(hf_token) - if sleep_time: - swap_sleep_time(hf_token, -1) - remove_attribution_after = inputs[-6] - else: - remove_attribution_after = False - - if(remove_attribution_after): - validate_model_upload(hf_token, model_name) - - torch.cuda.empty_cache() - if 'pipe' in globals(): - global pipe, pipe_is_set - del pipe - pipe_is_set = False - gc.collect() - - if os.path.exists("output_model"): shutil.rmtree('output_model') - if os.path.exists("instance_images"): shutil.rmtree('instance_images') - if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar") - if os.path.exists("model.ckpt"): os.remove("model.ckpt") - if 
os.path.exists("hastrained.success"): os.remove("hastrained.success") - file_counter = 0 - resolution = 512 if which_model != "v2-1-768" else 768 - for i, input in enumerate(inputs): - if(i < maximum_concepts-1): - if(input): - os.makedirs('instance_images',exist_ok=True) - files = inputs[i+(maximum_concepts*2)] - prompt = inputs[i+maximum_concepts] - if(prompt == "" or prompt == None): - raise gr.Error("You forgot to define your concept prompt") - for j, file_temp in enumerate(files): - file = Image.open(file_temp.name) - image = pad_image(file) - image = image.resize((resolution, resolution)) - extension = file_temp.name.split(".")[1] - image = image.convert('RGB') - image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100) - file_counter += 1 - - os.makedirs('output_model',exist_ok=True) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - experimental_face_improvement = inputs[-9] - - if(uses_custom): - Training_Steps = int(inputs[-3]) - Train_text_encoder_for = int(inputs[-2]) - else: - if(type_of_thing == "object"): - Train_text_encoder_for=30 - - elif(type_of_thing == "style"): - Train_text_encoder_for=15 - - elif(type_of_thing == "person"): - Train_text_encoder_for=70 - - Training_Steps = file_counter*150 - if(type_of_thing == "person" and Training_Steps > 2600): - Training_Steps = 2600 #Avoid overfitting on people's faces - stptxt = int((Training_Steps*Train_text_encoder_for)/100) - gradient_checkpointing = True if (experimental_face_improvement or which_model != "v1-5") else False - cache_latents = True if which_model != "v1-5" else False - if (type_of_thing == "object" or type_of_thing == "style" or (type_of_thing == "person" and not experimental_face_improvement)): - args_general = argparse.Namespace( - image_captions_filename = True, - train_text_encoder = True if stptxt > 0 else False, - stop_text_encoder_training = stptxt, - save_n_steps = 0, - pretrained_model_name_or_path = model_to_load, - instance_data_dir="instance_images", - class_data_dir=None, - output_dir="output_model", - instance_prompt="", - seed=42, - resolution=resolution, - mixed_precision="fp16", - train_batch_size=1, - gradient_accumulation_steps=1, - use_8bit_adam=True, - learning_rate=2e-6, - lr_scheduler="polynomial", - lr_warmup_steps = 0, - max_train_steps=Training_Steps, - gradient_checkpointing=gradient_checkpointing, - cache_latents=cache_latents, - ) - print("Starting single training...") - lock_file = open("intraining.lock", "w") - lock_file.close() - try: - run_training(args_general) - except Exception as e: - if(is_spaces): - title="There was an error on during your training" - description=f''' - Unfortunately there was an error during training your {model_name} model. - Please check it out below. 
Feel free to report this issue to [Dreambooth Training](https://huggingface.co/spaces/multimodalart/dreambooth-training): - ``` - {str(e)} - ``` - ''' - swap_hardware(hf_token, "cpu-basic") - write_to_community(title,description,hf_token) - - - gc.collect() - torch.cuda.empty_cache() - if(which_model == "v1-5"): - print("Adding Safety Checker to the model...") - shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor", dirs_exist_ok=True) - shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker", dirs_exist_ok=True) - shutil.copy(f"model_index.json", "output_model/model_index.json") - - if(not remove_attribution_after): - swap_sleep_time(hf_token, sleep_time) - print("Archiving model file...") - with tarfile.open("diffusers_model.tar", "w") as tar: - tar.add("output_model", arcname=os.path.basename("output_model")) - if os.path.exists("intraining.lock"): os.remove("intraining.lock") - trained_file = open("hastrained.success", "w") - trained_file.close() - print("Training completed!") - return [ - gr.update(visible=False), #progress_bar - gr.update(visible=True, value=["diffusers_model.tar"]), #result - gr.update(visible=True), #try_your_model - gr.update(visible=True), #push_to_hub - gr.update(visible=True), #convert_button - gr.update(visible=False), #training_ongoing - gr.update(visible=True) #completed_training - ] - else: - where_to_upload = inputs[-8] - push(model_name, where_to_upload, hf_token, which_model, True) - swap_hardware(hf_token, "cpu-basic") - -pipe_is_set = False -def generate(prompt, steps): - torch.cuda.empty_cache() - from diffusers import StableDiffusionPipeline - global pipe_is_set - if(not pipe_is_set): - global pipe - pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16) - pipe = pipe.to("cuda") - pipe_is_set = True - - image = pipe(prompt, num_inference_steps=steps).images[0] - return(image) - -def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False): - validate_model_upload(hf_token, model_name) - if(not os.path.exists("model.ckpt")): - convert("output_model", "model.ckpt") - from huggingface_hub import HfApi, HfFolder, CommitOperationAdd - from huggingface_hub import create_repo - model_name_slug = slugify(model_name) - api = HfApi() - your_username = api.whoami(token=hf_token)["name"] - if(where_to_upload == "My personal profile"): - model_id = f"{your_username}/{model_name_slug}" - else: - model_id = f"sd-dreambooth-library/{model_name_slug}" - headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"} - response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers) - - print(f"Starting to upload the model {model_id}...") - images_upload = os.listdir("instance_images") - image_string = "" - instance_prompt_list = [] - previous_instance_prompt = '' - for i, image in enumerate(images_upload): - instance_prompt = image.split("_")[0] - if(instance_prompt != previous_instance_prompt): - title_instance_prompt_string = instance_prompt - instance_prompt_list.append(instance_prompt) - else: - title_instance_prompt_string = '' - previous_instance_prompt = instance_prompt - image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""} -{image_string}![{instance_prompt} {i}](https://huggingface.co/{model_id}/resolve/main/concept_images/{urllib.parse.quote(image)})''' - 
readme_text = f'''--- -license: creativeml-openrail-m -tags: -- text-to-image -widget: -- text: {instance_prompt_list[0]} ---- -### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model - -You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! - -Sample pictures of: -{image_string} -''' - #Save the readme to a file - readme_file = open("model.README.md", "w") - readme_file.write(readme_text) - readme_file.close() - #Save the token identifier to a file - text_file = open("token_identifier.txt", "w") - text_file.write(', '.join(instance_prompt_list)) - text_file.close() - try: - create_repo(model_id,private=True, token=hf_token) - except: - import time - epoch_time = str(int(time.time())) - create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token) - operations = [ - CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"), - CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"), - CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt") - ] - api.create_commit( - repo_id=model_id, - operations=operations, - commit_message=f"Upload the model {model_name}", - token=hf_token - ) - api.upload_folder( - folder_path="output_model", - repo_id=model_id, - token=hf_token - ) - api.upload_folder( - folder_path="instance_images", - path_in_repo="concept_images", - repo_id=model_id, - token=hf_token - ) - if is_spaces: - if(not comes_from_automated): - extra_message = "Don't forget to remove the GPU attribution after you play with it." - else: - extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page" - title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!" - description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}" - write_to_community(title, description, hf_token) - #api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token) - print("Model uploaded successfully!") - return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])] - -def convert_to_ckpt(): - if 'pipe' in globals(): - global pipe, pipe_is_set - del pipe - pipe_is_set = False - gc.collect() - convert("output_model", "model.ckpt") - return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"]) - -def check_status(top_description): - if os.path.exists("hastrained.success"): - if is_spaces: - update_top_tag = gr.update(value=f''' -
      -

      Your model has finished training ✅

      -

Yay, congratulations on training your model. Scroll down to play with it and save it (either by downloading it or on the Hugging Face Hub). Once you are done (your model is safe and you don't want to train a new one), go to the settings page and downgrade your Space to a CPU Basic

      -
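The advice above to downgrade the Space to a CPU Basic maps onto the swap_hardware helper defined earlier in this file. A minimal, self-contained sketch of that call, assuming the same SPACE_ID environment variable and hardware endpoint the app already uses (the function name downgrade_to_cpu_basic is illustrative):

```python
import os
import requests

def downgrade_to_cpu_basic(hf_token: str) -> None:
    # Same endpoint and payload as swap_hardware() above, shown in isolation.
    url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware"
    headers = {"authorization": f"Bearer {hf_token}"}
    requests.post(url, json={"flavor": "cpu-basic"}, headers=headers)
```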
      - ''') - else: - update_top_tag = gr.update(value=f''' -
      -

      Your model has finished training ✅

      -

Yay, congratulations on training your model. Scroll down to play with it and save it (either by downloading it or on the Hugging Face Hub).

      -
      - ''') - show_outputs = True - elif os.path.exists("intraining.lock"): - update_top_tag = gr.update(value=''' -
      -

      Don't worry, your model is still training! ⌛

      -

You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above to check the training status. Once training is done, reload this tab to interact with your model.

      -
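The notices above are selected from marker files written by the training code; a condensed sketch of that selection, assuming the same hastrained.success and intraining.lock files that train() creates:

```python
import os

def training_status() -> str:
    # train() writes "intraining.lock" when training starts and
    # "hastrained.success" when it finishes; check_status() keys off both.
    if os.path.exists("hastrained.success"):
        return "finished"
    if os.path.exists("intraining.lock"):
        return "running"
    return "not started"

print(training_status())
```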
      - ''') - show_outputs = False - else: - update_top_tag = gr.update(value=top_description) - show_outputs = False - if os.path.exists("diffusers_model.tar"): - update_files_tag = gr.update(visible=show_outputs, value=["diffusers_model.tar"]) - else: - update_files_tag = gr.update(visible=show_outputs) - return [ - update_top_tag, #top_description - gr.update(visible=show_outputs), #try_your_model - gr.update(visible=show_outputs), #push_to_hub - update_files_tag, #result - gr.update(visible=show_outputs), #convert_button - ] - -def checkbox_swap(checkbox): - return [gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox)] - -with gr.Blocks(css=css) as demo: - with gr.Box(): - if is_shared_ui: - top_description = gr.HTML(f''' -
      -

      Attention - This Space doesn't work in this shared UI

      -

For it to work, you can either run it locally or duplicate the Space and run it on your own profile using a (paid) private T4-small or A10G-small GPU for training. A T4 costs US$0.60/h, so it should cost less than US$1 to train most models using default settings with it!  Duplicate Space

      - - -
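A rough check of the "less than US$1" claim above, using the quoted T4 rate of US$0.60/h and the 1,400 to 2,400 step range mentioned in the custom-settings notes further down; the steps-per-second figure is an illustrative assumption, not a measured benchmark:

```python
T4_USD_PER_HOUR = 0.60            # rate quoted in the notice above
ASSUMED_STEPS_PER_SECOND = 1.0    # illustrative assumption, not a benchmark

def estimated_cost_usd(training_steps: int) -> float:
    hours = training_steps / ASSUMED_STEPS_PER_SECOND / 3600
    return round(hours * T4_USD_PER_HOUR, 2)

print(estimated_cost_usd(1400), estimated_cost_usd(2400))  # default step range
```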
      - ''') - elif(is_spaces): - if(is_gpu_associated): - top_description = gr.HTML(f''' -
      -

You have successfully associated a {which_gpu} GPU with the Dreambooth Training Space 🎉

      -

You can now train your model! You will be billed by the minute from when you activated the GPU until it is turned off.

      -
      - ''') - else: - top_description = gr.HTML(f''' -
      -

      You have successfully duplicated the Dreambooth Training Space 🎉

      -

There's only one step left before you can train your model: attribute a T4-small or A10G-small GPU to it (via the Settings tab) and run the training below. You will be billed by the minute from when you activate the GPU until it is turned off.

      -
      - ''') - else: - top_description = gr.HTML(f''' -
      -

      You have successfully cloned the Dreambooth Training Space locally 🎉

      -

Run pip install -r requirements-local.txt

      -
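When "Use custom settings" is left unchecked in the UI built below, the training schedule is derived automatically. A condensed sketch of the defaults applied in train() above (150 steps per uploaded image, capped at 2,600 for people, with the text encoder trained for 30% / 15% / 70% of the steps for objects, styles and people respectively):

```python
def default_schedule(num_images: int, type_of_thing: str):
    # Condensed from train(); the numbers mirror its defaults.
    text_encoder_pct = {"object": 30, "style": 15, "person": 70}[type_of_thing]
    steps = num_images * 150
    if type_of_thing == "person" and steps > 2600:
        steps = 2600  # avoid overfitting on faces
    stop_text_encoder_at = int(steps * text_encoder_pct / 100)
    return steps, stop_text_encoder_at

print(default_schedule(10, "object"))   # (1500, 450)
print(default_schedule(30, "person"))   # (2600, 1820)
```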
      - ''') - gr.Markdown("# Dreambooth Training UI 💭") - gr.Markdown("Customize Stable Diffusion v1 or v2 (ⁿᵉʷ!) by giving it a few examples of a concept. Based on the [🧨 diffusers](https://github.com/huggingface/diffusers) implementation, additional techniques from [TheLastBen](https://github.com/TheLastBen/diffusers) and [ShivamShrirao](https://github.com/ShivamShrirao/diffusers)") - - with gr.Row() as what_are_you_training: - type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True) - with gr.Column(): - base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-5", "v2-1-512", "v2-1-768"], value="v1-5", interactive=True) - - #Very hacky approach to emulate dynamically created Gradio components - with gr.Row() as upload_your_concept: - with gr.Column(): - thing_description = gr.Markdown("You are going to train an `object`, please upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, example") - thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - can take longer training but can improve faces", visible=False, value=False) - thing_image_example = gr.HTML('''''') - things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.") - - with gr.Column(): - file_collection = [] - concept_collection = [] - buttons_collection = [] - delete_collection = [] - is_visible = [] - - row = [None] * maximum_concepts - for x in range(maximum_concepts): - ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4]) - if(x == 0): - visible = True - is_visible.append(gr.State(value=True)) - else: - visible = False - is_visible.append(gr.State(value=False)) - - file_collection.append(gr.File(file_types=["image"], label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible)) - with gr.Column(visible=visible) as row[x]: - concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept prompt - use a unique, made up word to avoid collisions''')) - with gr.Row(): - if(x < maximum_concepts-1): - buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible)) - if(x > 0): - delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept")) - - counter_add = 1 - for button in buttons_collection: - if(counter_add < len(buttons_collection)): - button.click(lambda: - [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None], - None, - [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False) - else: - button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False) - counter_add += 1 - - counter_delete = 1 - for delete_button in delete_collection: - if(counter_delete < len(delete_collection)+1): - delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, 
[file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False) - counter_delete += 1 - - with gr.Accordion("Custom Settings", open=False): - swap_auto_calculated = gr.Checkbox(label="Use custom settings") - gr.Markdown("If not checked, the % of frozen encoder will be tuned automatically to whether you are training an `object`, `person` or `style`. The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and 75% trained for persons. The number of steps varies between 1400 and 2400 depending on how many images uploaded. If you see too many artifacts in your output, it means it may have overfit and you need less steps. If your results aren't really what you wanted, it may be underfitting and you need more steps.") - steps = gr.Number(label="How many steps", value=2400) - perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30) - - with gr.Box(visible=False) as training_summary: - training_summary_text = gr.HTML("", visible=True, label="Training Summary") - is_advanced_visible = True if is_spaces else False - training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=True, visible=is_advanced_visible) - training_summary_model_name = gr.Textbox(label="Name of your model", visible=True) - training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], value="My personal profile", label="Upload to", visible=True) - training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=True) - training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=True) - - train_btn = gr.Button("Start Training") - progress_bar = gr.Textbox(visible=False) - if(is_shared_ui): - training_ongoing = gr.Markdown("## This Space only works in duplicated instances. Please duplicate it and try again!", visible=False) - elif(not is_gpu_associated): - training_ongoing = gr.Markdown("## Oops, you haven't associated your T4 or A10G GPU to this Space. Visit the Settings tab, associate and try again.", visible=False) - else: - training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU After training`, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done. ", visible=False) - - - #Post-training UI - completed_training = gr.Markdown('''# ✅ Training completed. 
- ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False) - - with gr.Row(): - with gr.Box(visible=False) as try_your_model: - gr.Markdown("## Try your model") - prompt = gr.Textbox(label="Type your prompt") - result_image = gr.Image() - inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1) - generate_button = gr.Button("Generate Image") - - with gr.Box(visible=False) as push_to_hub: - gr.Markdown("## Push to Hugging Face Hub") - model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style") - where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to") - gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.") - hf_token = gr.Textbox(label="Hugging Face Write Token", type="password") - - push_button = gr.Button("Push to the Hub") - - result = gr.File(label="Download the uploaded models in the diffusers format", visible=True) - success_message_upload = gr.Markdown(visible=False) - convert_button = gr.Button("Convert to CKPT", visible=False) - - #Swap the examples and the % of text encoder trained depending if it is an object, person or style - type_of_thing.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False) - - #Swap the base model - - base_model_to_use.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False) - #base_model_to_use.change(fn=visualise_progress_bar, inputs=[], outputs=progress_bar) - base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[]) - #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not - for file in file_collection: - #file.change(fn=update_steps,inputs=file_collection, outputs=steps) - file.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - - thing_experimental.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - base_model_to_use.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - steps.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - perc_txt_encoder.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - - #Give more options if the user wants to finish everything after training - if(is_spaces): - training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, 
outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False) - #Add a message for while it is in training - - #train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing) - - #The main train function - train_btn.click(lambda:gr.update(visible=True), inputs=[], outputs=progress_bar) - train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[progress_bar, result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False) - - #Button to generate an image from your trained model after training - generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False) - #Button to push the model to the Hugging Face Hub - push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False) - #Button to convert the model to ckpt format - convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False) - - #Checks if the training is running - demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False) - -demo.queue(default_enabled=False).launch(debug=True) \ No newline at end of file diff --git a/spaces/hylee/apdrawing/APDrawingGAN2/util/image_pool.py b/spaces/hylee/apdrawing/APDrawingGAN2/util/image_pool.py deleted file mode 100644 index 52413e0f8a45a8c8511bf103d3aabd537fac97b9..0000000000000000000000000000000000000000 --- a/spaces/hylee/apdrawing/APDrawingGAN2/util/image_pool.py +++ /dev/null @@ -1,32 +0,0 @@ -import random -import torch - - -class ImagePool(): - def __init__(self, pool_size): - self.pool_size = pool_size - if self.pool_size > 0: - self.num_imgs = 0 - self.images = [] - - def query(self, images): - if self.pool_size == 0: - return images - return_images = [] - for image in images: - image = torch.unsqueeze(image.data, 0) - if self.num_imgs < self.pool_size: - self.num_imgs = self.num_imgs + 1 - self.images.append(image) - return_images.append(image) - else: - p = random.uniform(0, 1) - if p > 0.5: - random_id = random.randint(0, self.pool_size - 1) # randint is inclusive - tmp = self.images[random_id].clone() - self.images[random_id] = image - return_images.append(tmp) - else: - return_images.append(image) - return_images = torch.cat(return_images, 0) - return return_images diff --git a/spaces/hysts/DDNM-HQ/style.css b/spaces/hysts/DDNM-HQ/style.css deleted file mode 100644 index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000 --- a/spaces/hysts/DDNM-HQ/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} diff --git a/spaces/hysts/Kandinsky-2-1/app.py b/spaces/hysts/Kandinsky-2-1/app.py deleted file mode 100644 index 7dd489cfc466c261d54a58265befffe3747de3f1..0000000000000000000000000000000000000000 --- a/spaces/hysts/Kandinsky-2-1/app.py +++ /dev/null @@ -1,202 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os -import random - -import gradio as gr -import numpy as np -import PIL.Image -import spaces -import torch -from diffusers import DDPMScheduler, 
DiffusionPipeline - -DESCRIPTION = "# Kandinsky 2.1" -if not torch.cuda.is_available(): - DESCRIPTION += "\n

      Running on CPU 🥶 This demo does not work on CPU.

      " - -MAX_SEED = np.iinfo(np.int32).max -CACHE_EXAMPLES = torch.cuda.is_available() and os.getenv("CACHE_EXAMPLES") == "1" -MAX_IMAGE_SIZE = int(os.getenv("MAX_IMAGE_SIZE", "768")) -USE_TORCH_COMPILE = os.getenv("USE_TORCH_COMPILE") == "1" - -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") -if torch.cuda.is_available(): - pipe_prior = DiffusionPipeline.from_pretrained( - "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 - ) - pipe_prior.to(device) - - scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler") - pipe = DiffusionPipeline.from_pretrained( - "kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16 - ) - pipe.to(device) - if USE_TORCH_COMPILE: - pipe.unet.to(memory_format=torch.channels_last) - pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) -else: - pipe_prior = None - pipe = None - - -def randomize_seed_fn(seed: int, randomize_seed: bool) -> int: - if randomize_seed: - seed = random.randint(0, MAX_SEED) - return seed - - -@spaces.GPU -def generate( - prompt: str, - negative_prompt: str = "low quality, bad quality", - seed: int = 0, - width: int = 768, - height: int = 768, - guidance_scale_prior: float = 1.0, - guidance_scale: float = 4.0, - num_inference_steps_prior: int = 50, - num_inference_steps: int = 100, -) -> PIL.Image.Image: - generator = torch.Generator().manual_seed(seed) - image_embeds, negative_image_embeds = pipe_prior( - prompt, - negative_prompt, - generator=generator, - guidance_scale=guidance_scale_prior, - num_inference_steps=num_inference_steps_prior, - ).to_tuple() - image = pipe( - prompt=prompt, - image_embeds=image_embeds, - negative_image_embeds=negative_image_embeds, - height=height, - width=width, - generator=generator, - guidance_scale=guidance_scale, - num_inference_steps=num_inference_steps, - ).images[0] - return image - - -examples = [ - "An astronaut riding a horse", - "portrait of a young woman, blue eyes, cinematic", - "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting", - "bird eye view shot of a full body woman with cyan light orange magenta makeup, digital art, long braided hair her face separated by makeup in the style of yin Yang surrealism, symmetrical face, real image, contrasting tone, pastel gradient background", - "A car exploding into colorful dust", - "editorial photography of an organic, almost liquid smoke style armchair", - "birds eye view of a quilted paper style alien planet landscape, vibrant colours, Cinematic lighting", -] - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1", - ) - with gr.Box(): - with gr.Row(): - prompt = gr.Text( - label="Prompt", - show_label=False, - max_lines=1, - placeholder="Enter your prompt", - container=False, - ) - run_button = gr.Button("Run", scale=0) - result = gr.Image(label="Result", show_label=False) - with gr.Accordion("Advanced options", open=False): - negative_prompt = gr.Text( - label="Negative prompt", - value="low quality, bad quality", - max_lines=1, - placeholder="Enter a negative prompt", - ) - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=MAX_SEED, - step=1, - value=0, - ) - randomize_seed = gr.Checkbox(label="Randomize seed", value=True) - width = gr.Slider( - label="Width", - minimum=256, - maximum=MAX_IMAGE_SIZE, - 
step=32, - value=768, - ) - height = gr.Slider( - label="Height", - minimum=256, - maximum=MAX_IMAGE_SIZE, - step=32, - value=768, - ) - guidance_scale_prior = gr.Slider( - label="Guidance scale for prior", - minimum=1, - maximum=20, - step=0.1, - value=4.0, - ) - guidance_scale = gr.Slider( - label="Guidance scale", - minimum=1, - maximum=20, - step=0.1, - value=4.0, - ) - num_inference_steps_prior = gr.Slider( - label="Number of inference steps for prior", - minimum=10, - maximum=100, - step=1, - value=50, - ) - num_inference_steps = gr.Slider( - label="Number of inference steps", - minimum=10, - maximum=150, - step=1, - value=100, - ) - - gr.Examples( - examples=examples, - inputs=prompt, - outputs=result, - fn=generate, - cache_examples=CACHE_EXAMPLES, - ) - - gr.on( - triggers=[prompt.submit, negative_prompt.submit, run_button.click], - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=generate, - inputs=[ - prompt, - negative_prompt, - seed, - width, - height, - guidance_scale_prior, - guidance_scale, - num_inference_steps_prior, - num_inference_steps, - ], - outputs=result, - api_name="run", - ) - -if __name__ == "__main__": - demo.queue(max_size=20).launch() diff --git a/spaces/hysts/atksh-onnx-facial-lmk-detector/README.md b/spaces/hysts/atksh-onnx-facial-lmk-detector/README.md deleted file mode 100644 index 7d4a05a86006b0b10812b8c04bcaef61071a8b08..0000000000000000000000000000000000000000 --- a/spaces/hysts/atksh-onnx-facial-lmk-detector/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Atksh Onnx Facial Lmk Detector -emoji: 🏢 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/iamstolas/STOLAS/src/components/chat-image.tsx b/spaces/iamstolas/STOLAS/src/components/chat-image.tsx deleted file mode 100644 index 05ecc9771eada27a0f2d160bb01cba170d37bb09..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/src/components/chat-image.tsx +++ /dev/null @@ -1,170 +0,0 @@ -import { - useEffect, - useState, - useCallback, - ChangeEvent, - ClipboardEvent, - MouseEventHandler, - FormEvent, - useRef -} from "react" -import Image from 'next/image' -import PasteIcon from '@/assets/images/paste.svg' -import UploadIcon from '@/assets/images/upload.svg' -import CameraIcon from '@/assets/images/camera.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { cn } from '@/lib/utils' - -interface ChatImageProps extends Pick, 'uploadImage'> {} - -const preventDefault: MouseEventHandler = (event) => { - event.nativeEvent.stopImmediatePropagation() -} - -const toBase64 = (file: File): Promise => new Promise((resolve, reject) => { - const reader = new FileReader() - reader.readAsDataURL(file) - reader.onload = () => resolve(reader.result as string) - reader.onerror = reject -}) - -export function ChatImage({ children, uploadImage }: React.PropsWithChildren) { - const videoRef = useRef(null) - const canvasRef = useRef(null) - const mediaStream = useRef() - const [panel, setPanel] = useState('none') - - const upload = useCallback((url: string) => { - if (url) { - uploadImage(url) - } - setPanel('none') - }, [panel]) - - const onUpload = useCallback(async (event: ChangeEvent) => { - const file = event.target.files?.[0] - if (file) { - const fileDataUrl = await toBase64(file) - if (fileDataUrl) { - upload(fileDataUrl) - } - } - }, []) - - const 
onPaste = useCallback((event: ClipboardEvent) => { - const pasteUrl = event.clipboardData.getData('text') ?? '' - upload(pasteUrl) - }, []) - - const onEnter = useCallback((event: FormEvent) => { - event.preventDefault() - event.stopPropagation() - // @ts-ignore - const inputUrl = event.target.elements.image.value - if (inputUrl) { - upload(inputUrl) - } - }, []) - - const openVideo: MouseEventHandler = async (event) => { - event.stopPropagation() - setPanel('camera-mode') - } - - const onCapture = () => { - if (canvasRef.current && videoRef.current) { - const canvas = canvasRef.current - canvas.width = videoRef.current!.videoWidth - canvas.height = videoRef.current!.videoHeight - canvas.getContext('2d')?.drawImage(videoRef.current, 0, 0, canvas.width, canvas.height) - const cameraUrl = canvas.toDataURL('image/jpeg') - upload(cameraUrl) - } - } - - useEffect(() => { - const handleBlur = () => { - if (panel !== 'none') { - setPanel('none') - } - } - document.addEventListener('click', handleBlur) - return () => { - document.removeEventListener('click', handleBlur) - } - }, [panel]) - - useEffect(() => { - if (panel === 'camera-mode') { - navigator.mediaDevices.getUserMedia({ video: true, audio: false }) - .then(videoStream => { - mediaStream.current = videoStream - if (videoRef.current) { - videoRef.current.srcObject = videoStream - } - }) - } else { - if (mediaStream.current) { - mediaStream.current.getTracks().forEach(function(track) { - track.stop() - }) - mediaStream.current = undefined - } - } - }, [panel]) - - return ( -
      -
      panel === 'none' ? setPanel('normal') : setPanel('none')}>{children}
      -
      -
      -
      -

      添加图像

      -
      -
      - paste -
      - e.stopPropagation()} - /> -
      -
      -
      - - -
      -
      - {panel === 'camera-mode' &&
      -
      -
      -
      -
      -
      -
      -
      } -
      -
      - ) -} diff --git a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/processors/frame/modules/face_swapper.py b/spaces/imseldrith/DeepFakeAI/DeepFakeAI/processors/frame/modules/face_swapper.py deleted file mode 100644 index 4a3d024fbe66148cafdb2dadb61a3fd3ee0696cb..0000000000000000000000000000000000000000 --- a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/processors/frame/modules/face_swapper.py +++ /dev/null @@ -1,105 +0,0 @@ -from typing import Any, List, Callable -import cv2 -import insightface -import threading - -import DeepFakeAI.globals -import DeepFakeAI.processors.frame.core as frame_processors -from DeepFakeAI import wording -from DeepFakeAI.core import update_status -from DeepFakeAI.face_analyser import get_one_face, get_many_faces, find_similar_faces -from DeepFakeAI.face_reference import get_face_reference, set_face_reference -from DeepFakeAI.typing import Face, Frame -from DeepFakeAI.utilities import conditional_download, resolve_relative_path, is_image, is_video - -FRAME_PROCESSOR = None -THREAD_LOCK = threading.Lock() -NAME = 'FACEFUSION.FRAME_PROCESSOR.FACE_SWAPPER' - - -def get_frame_processor() -> Any: - global FRAME_PROCESSOR - - with THREAD_LOCK: - if FRAME_PROCESSOR is None: - model_path = resolve_relative_path('../.assets/models/inswapper_128.onnx') - FRAME_PROCESSOR = insightface.model_zoo.get_model(model_path, providers = DeepFakeAI.globals.execution_providers) - return FRAME_PROCESSOR - - -def clear_frame_processor() -> None: - global FRAME_PROCESSOR - - FRAME_PROCESSOR = None - - -def pre_check() -> bool: - download_directory_path = resolve_relative_path('../.assets/models') - conditional_download(download_directory_path, ['https://github.com/DeepFakeAI/DeepFakeAI-assets/releases/download/models/inswapper_128.onnx']) - return True - - -def pre_process() -> bool: - if not is_image(DeepFakeAI.globals.source_path): - update_status(wording.get('select_image_source') + wording.get('exclamation_mark'), NAME) - return False - elif not get_one_face(cv2.imread(DeepFakeAI.globals.source_path)): - update_status(wording.get('no_source_face_detected') + wording.get('exclamation_mark'), NAME) - return False - if not is_image(DeepFakeAI.globals.target_path) and not is_video(DeepFakeAI.globals.target_path): - update_status(wording.get('select_image_or_video_target') + wording.get('exclamation_mark'), NAME) - return False - return True - - -def post_process() -> None: - clear_frame_processor() - - -def swap_face(source_face : Face, target_face : Face, temp_frame : Frame) -> Frame: - return get_frame_processor().get(temp_frame, target_face, source_face, paste_back = True) - - -def process_frame(source_face : Face, reference_face : Face, temp_frame : Frame) -> Frame: - if 'reference' in DeepFakeAI.globals.face_recognition: - similar_faces = find_similar_faces(temp_frame, reference_face, DeepFakeAI.globals.reference_face_distance) - if similar_faces: - for similar_face in similar_faces: - temp_frame = swap_face(source_face, similar_face, temp_frame) - if 'many' in DeepFakeAI.globals.face_recognition: - many_faces = get_many_faces(temp_frame) - if many_faces: - for target_face in many_faces: - temp_frame = swap_face(source_face, target_face, temp_frame) - return temp_frame - - -def process_frames(source_path : str, temp_frame_paths : List[str], update: Callable[[], None]) -> None: - source_face = get_one_face(cv2.imread(source_path)) - reference_face = get_face_reference() if 'reference' in DeepFakeAI.globals.face_recognition else None - for temp_frame_path in temp_frame_paths: - temp_frame 
= cv2.imread(temp_frame_path) - result_frame = process_frame(source_face, reference_face, temp_frame) - cv2.imwrite(temp_frame_path, result_frame) - if update: - update() - - -def process_image(source_path : str, target_path : str, output_path : str) -> None: - source_face = get_one_face(cv2.imread(source_path)) - target_frame = cv2.imread(target_path) - reference_face = get_one_face(target_frame, DeepFakeAI.globals.reference_face_position) if 'reference' in DeepFakeAI.globals.face_recognition else None - result_frame = process_frame(source_face, reference_face, target_frame) - cv2.imwrite(output_path, result_frame) - - -def process_video(source_path : str, temp_frame_paths : List[str]) -> None: - conditional_set_face_reference(temp_frame_paths) - frame_processors.process_video(source_path, temp_frame_paths, process_frames) - - -def conditional_set_face_reference(temp_frame_paths : List[str]) -> None: - if 'reference' in DeepFakeAI.globals.face_recognition and not get_face_reference(): - reference_frame = cv2.imread(temp_frame_paths[DeepFakeAI.globals.reference_frame_number]) - reference_face = get_one_face(reference_frame, DeepFakeAI.globals.reference_face_position) - set_face_reference(reference_face) diff --git a/spaces/innnky/soft-vits-vc/models.py b/spaces/innnky/soft-vits-vc/models.py deleted file mode 100644 index 6468f8f6ac51efc4eb10e0c833ce30b04edb09f9..0000000000000000000000000000000000000000 --- a/spaces/innnky/soft-vits-vc/models.py +++ /dev/null @@ -1,536 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F -import numpy as np -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - # self.emb = nn.Embedding(n_vocab, hidden_channels) - # nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - # x = x.transpose(1,2) - # x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - # print(x.shape) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class 
Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, 
padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative 
cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (cinderella 2 Cartoon Movie In Hindi ).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (cinderella 2 Cartoon Movie In Hindi ).md deleted file mode 100644 index aaaccac4e4f1dbd4a44f868189102ce5e6d2141c..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (cinderella 2 Cartoon Movie In Hindi ).md +++ /dev/null @@ -1,6 +0,0 @@ -

      HD Online Player (cinderella 2 cartoon movie in hindi )


      Download Filehttps://urlin.us/2uEyxP



      - -Watch Cinderella (Hindi) All Episodes Online - Catch Quality Streaming of all Cinderella (Hindi) Playflix Videos and Clips for Free on MX Player. 1fdad05405
      -
      -
      -

      diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Krfv 008 Rapidshare Full Version.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Krfv 008 Rapidshare Full Version.md deleted file mode 100644 index 04b1259e0c56313a33911135f7df53e1dff84836..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Krfv 008 Rapidshare Full Version.md +++ /dev/null @@ -1,9 +0,0 @@ -

      Krfv 008 Rapidshare Full Version


      Download Zip · https://urlin.us/2uExFe



      - -Krfv 008 Rapidshare Full version xforce keygen AutoCAD Mechanical 2013 Portable Letasoft Sound Booster Activation key Download via torrent. AutoCAD Mechanical 2013 x86 and x64 Autodesk Design Suite 2013. -Autodesk AutoCAD Mechanical 2013 x86 and x64 Released: 2013 Version: 2013 x86 and x64 Developer: Autodesk Inc. Platform: Windows® X86/X64 Compatibility with Seven Interface language: Russian Tablet: Present Autodesk AutoCAD Mechanical 2013 x86 and x64 Autodesk Design Suite 2013 Release Year/Date: 2013 Version: 2013. -Interface: Russian, English License: FPP. -Autodesk AutoCAD Mechanical 2013 x86 and x64 Year/Date. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/inreVtussa/clothingai/Examples/Aptron Portaria 2009.md b/spaces/inreVtussa/clothingai/Examples/Aptron Portaria 2009.md deleted file mode 100644 index 21c8c3c45c1cbd20e89797e30f7ae203fbe30e6e..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Aptron Portaria 2009.md +++ /dev/null @@ -1,18 +0,0 @@ -

      Aptron portaria 2009


Download File: https://tiurll.com/2uCizI



      - -. 7383628160 . coub.com/stories/3146267-new-autodesk-robot-structural-analysis-2009-crack. html -1xbet mirror working for free right now right now. -Download free song blue frost lay on the wires. -Series Chernobyl 2019 watch online all series in a row for free. -In deo, you don't need it in the ass. -Volcano million registration. -Gdz Russian language 5th grade Shmelev part 1. -How to make money with photoshop online. -Play slot machines for money -Download video of the terrorist attack in Volgodonsk on September 16. -Mathematics 1st grade gdz. -Gdz mathematics grade 1. -Porn stories about mature women. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/inreVtussa/clothingai/Examples/Auto Keyboard Murgee Full Crack Kid.md b/spaces/inreVtussa/clothingai/Examples/Auto Keyboard Murgee Full Crack Kid.md deleted file mode 100644 index 04248370b9e42a61a476bc9f26cf7858044291f5..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Auto Keyboard Murgee Full Crack Kid.md +++ /dev/null @@ -1,6 +0,0 @@ -

      auto keyboard murgee full crack kid


      Download ✏ ✏ ✏ https://tiurll.com/2uCj9n



      -
-Murgee Auto Clicker Crack is used to click the Left Mouse Button automatically via a Keyboard Shortcut. Users can ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/inreVtussa/clothingai/Examples/Bangalore Days 1080p Movie Download.md b/spaces/inreVtussa/clothingai/Examples/Bangalore Days 1080p Movie Download.md deleted file mode 100644 index cee16f2d5368f7f1b8e3fa93315e9d249aa97888..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Bangalore Days 1080p Movie Download.md +++ /dev/null @@ -1,10 +0,0 @@ -

      bangalore days 1080p movie download


      Download ✶✶✶ https://tiurll.com/2uCkAp



      -
      -Bangalore Days (2021) HDRip Hindi Dubbed Full Movie Watch Online HD Print Free Download - TodayPk Movies, TodayPkBangalore Days Dubbed in Hindi, . 🔥Watch Bangalore Days (2012) full movie in good HD 1080 720p quality. -Bhandarkar (Rajamurthy) is a doctor who does not want to work in the hospital where he was forced to work in the past. -But after his wife dies from an unknown illness, he is forced to work in a hospital to help the sick. . -Watch Bangalore Days in Russian. -Watch online movie "Bhandarkar (Rajamurthy) - a doctor who does not want to work in the hospital where he was forced to work in the past. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/jackli888/stable-diffusion-webui/extensions-builtin/SwinIR/swinir_model_arch_v2.py b/spaces/jackli888/stable-diffusion-webui/extensions-builtin/SwinIR/swinir_model_arch_v2.py deleted file mode 100644 index a1255881a13b480d1b7564d7474e8bbb5fd7ee76..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/extensions-builtin/SwinIR/swinir_model_arch_v2.py +++ /dev/null @@ -1,1017 +0,0 @@ -# ----------------------------------------------------------------------------------- -# Swin2SR: Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration, https://arxiv.org/abs/ -# Written by Conde and Choi et al. -# ----------------------------------------------------------------------------------- - -import math -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - -class WindowAttention(nn.Module): - r""" Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. Default: 0.0 - pretrained_window_size (tuple[int]): The height and width of the window in pre-training. 
- """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, attn_drop=0., proj_drop=0., - pretrained_window_size=[0, 0]): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.pretrained_window_size = pretrained_window_size - self.num_heads = num_heads - - self.logit_scale = nn.Parameter(torch.log(10 * torch.ones((num_heads, 1, 1))), requires_grad=True) - - # mlp to generate continuous relative position bias - self.cpb_mlp = nn.Sequential(nn.Linear(2, 512, bias=True), - nn.ReLU(inplace=True), - nn.Linear(512, num_heads, bias=False)) - - # get relative_coords_table - relative_coords_h = torch.arange(-(self.window_size[0] - 1), self.window_size[0], dtype=torch.float32) - relative_coords_w = torch.arange(-(self.window_size[1] - 1), self.window_size[1], dtype=torch.float32) - relative_coords_table = torch.stack( - torch.meshgrid([relative_coords_h, - relative_coords_w])).permute(1, 2, 0).contiguous().unsqueeze(0) # 1, 2*Wh-1, 2*Ww-1, 2 - if pretrained_window_size[0] > 0: - relative_coords_table[:, :, :, 0] /= (pretrained_window_size[0] - 1) - relative_coords_table[:, :, :, 1] /= (pretrained_window_size[1] - 1) - else: - relative_coords_table[:, :, :, 0] /= (self.window_size[0] - 1) - relative_coords_table[:, :, :, 1] /= (self.window_size[1] - 1) - relative_coords_table *= 8 # normalize to -8, 8 - relative_coords_table = torch.sign(relative_coords_table) * torch.log2( - torch.abs(relative_coords_table) + 1.0) / np.log2(8) - - self.register_buffer("relative_coords_table", relative_coords_table) - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=False) - if qkv_bias: - self.q_bias = nn.Parameter(torch.zeros(dim)) - self.v_bias = nn.Parameter(torch.zeros(dim)) - else: - self.q_bias = None - self.v_bias = None - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv_bias = None - if self.q_bias is not None: - qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias)) - qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias) - qkv = qkv.reshape(B_, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - # cosine attention - attn = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)) - logit_scale = torch.clamp(self.logit_scale, max=torch.log(torch.tensor(1. 
/ 0.01)).to(self.logit_scale.device)).exp() - attn = attn * logit_scale - - relative_position_bias_table = self.cpb_mlp(self.relative_coords_table).view(-1, self.num_heads) - relative_position_bias = relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - relative_position_bias = 16 * torch.sigmoid(relative_position_bias) - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - def extra_repr(self) -> str: - return f'dim={self.dim}, window_size={self.window_size}, ' \ - f'pretrained_window_size={self.pretrained_window_size}, num_heads={self.num_heads}' - - def flops(self, N): - # calculate flops for 1 window with token length of N - flops = 0 - # qkv = self.qkv(x) - flops += N * self.dim * 3 * self.dim - # attn = (q @ k.transpose(-2, -1)) - flops += self.num_heads * N * (self.dim // self.num_heads) * N - # x = (attn @ v) - flops += self.num_heads * N * N * (self.dim // self.num_heads) - # x = self.proj(x) - flops += N * self.dim * self.dim - return flops - -class SwinTransformerBlock(nn.Module): - r""" Swin Transformer Block. - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resulotion. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - pretrained_window_size (int): Window size in pre-training. - """ - - def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm, pretrained_window_size=0): - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - if min(self.input_resolution) <= self.window_size: - # if window size is larger than input resolution, we don't partition windows - self.shift_size = 0 - self.window_size = min(self.input_resolution) - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=drop, - pretrained_window_size=to_2tuple(pretrained_window_size)) - - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - if self.shift_size > 0: - attn_mask = self.calculate_mask(self.input_resolution) - else: - attn_mask = None - - self.register_buffer("attn_mask", attn_mask) - - def calculate_mask(self, x_size): - # calculate attention mask for SW-MSA - H, W = x_size - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - return attn_mask - - def forward(self, x, x_size): - H, W = x_size - B, L, C = x.shape - #assert L == H * W, "input feature has wrong size" - - shortcut = x - x = x.view(B, H, W, C) - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - else: - shifted_x = x - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA (to be compatible for testing on images whose shapes are the multiple of window size - if self.input_resolution == x_size: - attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C - else: - attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device)) - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - x = x.view(B, H * W, C) - x = shortcut + self.drop_path(self.norm1(x)) - - # FFN - x = x + self.drop_path(self.norm2(self.mlp(x))) - - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \ - f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}" - - def flops(self): - flops = 0 - H, W = self.input_resolution - # norm1 - flops += self.dim * H * W - # W-MSA/SW-MSA - nW = H * W / self.window_size / self.window_size - flops += nW * self.attn.flops(self.window_size * self.window_size) - # mlp - flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio - # norm2 - flops += self.dim * H * W - return flops - -class PatchMerging(nn.Module): - r""" Patch Merging Layer. - Args: - input_resolution (tuple[int]): Resolution of input feature. - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. 
Default: nn.LayerNorm - """ - - def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.input_resolution = input_resolution - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(2 * dim) - - def forward(self, x): - """ - x: B, H*W, C - """ - H, W = self.input_resolution - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even." - - x = x.view(B, H, W, C) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.reduction(x) - x = self.norm(x) - - return x - - def extra_repr(self) -> str: - return f"input_resolution={self.input_resolution}, dim={self.dim}" - - def flops(self): - H, W = self.input_resolution - flops = (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim - flops += H * W * self.dim // 2 - return flops - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - pretrained_window_size (int): Local window size in pre-training. 
- """ - - def __init__(self, dim, input_resolution, depth, num_heads, window_size, - mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., - drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False, - pretrained_window_size=0): - - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock(dim=dim, input_resolution=input_resolution, - num_heads=num_heads, window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - drop=drop, attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer, - pretrained_window_size=pretrained_window_size) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, x_size): - for blk in self.blocks: - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, x_size) - else: - x = blk(x, x_size) - if self.downsample is not None: - x = self.downsample(x) - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}" - - def flops(self): - flops = 0 - for blk in self.blocks: - flops += blk.flops() - if self.downsample is not None: - flops += self.downsample.flops() - return flops - - def _init_respostnorm(self): - for blk in self.blocks: - nn.init.constant_(blk.norm1.bias, 0) - nn.init.constant_(blk.norm1.weight, 0) - nn.init.constant_(blk.norm2.bias, 0) - nn.init.constant_(blk.norm2.weight, 0) - -class PatchEmbed(nn.Module): - r""" Image to Patch Embedding - Args: - img_size (int): Image size. Default: 224. - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] - self.img_size = img_size - self.patch_size = patch_size - self.patches_resolution = patches_resolution - self.num_patches = patches_resolution[0] * patches_resolution[1] - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - B, C, H, W = x.shape - # FIXME look at relaxing size constraints - # assert H == self.img_size[0] and W == self.img_size[1], - # f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})." - x = self.proj(x).flatten(2).transpose(1, 2) # B Ph*Pw C - if self.norm is not None: - x = self.norm(x) - return x - - def flops(self): - Ho, Wo = self.patches_resolution - flops = Ho * Wo * self.embed_dim * self.in_chans * (self.patch_size[0] * self.patch_size[1]) - if self.norm is not None: - flops += Ho * Wo * self.embed_dim - return flops - -class RSTB(nn.Module): - """Residual Swin Transformer Block (RSTB). - - Args: - dim (int): Number of input channels. 
- input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - img_size: Input image size. - patch_size: Patch size. - resi_connection: The convolutional block before residual connection. - """ - - def __init__(self, dim, input_resolution, depth, num_heads, window_size, - mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., - drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False, - img_size=224, patch_size=4, resi_connection='1conv'): - super(RSTB, self).__init__() - - self.dim = dim - self.input_resolution = input_resolution - - self.residual_group = BasicLayer(dim=dim, - input_resolution=input_resolution, - depth=depth, - num_heads=num_heads, - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - drop=drop, attn_drop=attn_drop, - drop_path=drop_path, - norm_layer=norm_layer, - downsample=downsample, - use_checkpoint=use_checkpoint) - - if resi_connection == '1conv': - self.conv = nn.Conv2d(dim, dim, 3, 1, 1) - elif resi_connection == '3conv': - # to save parameters and memory - self.conv = nn.Sequential(nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(dim // 4, dim // 4, 1, 1, 0), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(dim // 4, dim, 3, 1, 1)) - - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=dim, embed_dim=dim, - norm_layer=None) - - self.patch_unembed = PatchUnEmbed( - img_size=img_size, patch_size=patch_size, in_chans=dim, embed_dim=dim, - norm_layer=None) - - def forward(self, x, x_size): - return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x - - def flops(self): - flops = 0 - flops += self.residual_group.flops() - H, W = self.input_resolution - flops += H * W * self.dim * self.dim * 9 - flops += self.patch_embed.flops() - flops += self.patch_unembed.flops() - - return flops - -class PatchUnEmbed(nn.Module): - r""" Image to Patch Unembedding - - Args: - img_size (int): Image size. Default: 224. - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. 
Default: None - """ - - def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] - self.img_size = img_size - self.patch_size = patch_size - self.patches_resolution = patches_resolution - self.num_patches = patches_resolution[0] * patches_resolution[1] - - self.in_chans = in_chans - self.embed_dim = embed_dim - - def forward(self, x, x_size): - B, HW, C = x.shape - x = x.transpose(1, 2).view(B, self.embed_dim, x_size[0], x_size[1]) # B Ph*Pw C - return x - - def flops(self): - flops = 0 - return flops - - -class Upsample(nn.Sequential): - """Upsample module. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - """ - - def __init__(self, scale, num_feat): - m = [] - if (scale & (scale - 1)) == 0: # scale = 2^n - for _ in range(int(math.log(scale, 2))): - m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(2)) - elif scale == 3: - m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(3)) - else: - raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.') - super(Upsample, self).__init__(*m) - -class Upsample_hf(nn.Sequential): - """Upsample module. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - """ - - def __init__(self, scale, num_feat): - m = [] - if (scale & (scale - 1)) == 0: # scale = 2^n - for _ in range(int(math.log(scale, 2))): - m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(2)) - elif scale == 3: - m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(3)) - else: - raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.') - super(Upsample_hf, self).__init__(*m) - - -class UpsampleOneStep(nn.Sequential): - """UpsampleOneStep module (the difference with Upsample is that it always only has 1conv + 1pixelshuffle) - Used in lightweight SR to save parameters. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - - """ - - def __init__(self, scale, num_feat, num_out_ch, input_resolution=None): - self.num_feat = num_feat - self.input_resolution = input_resolution - m = [] - m.append(nn.Conv2d(num_feat, (scale ** 2) * num_out_ch, 3, 1, 1)) - m.append(nn.PixelShuffle(scale)) - super(UpsampleOneStep, self).__init__(*m) - - def flops(self): - H, W = self.input_resolution - flops = H * W * self.num_feat * 3 * 9 - return flops - - - -class Swin2SR(nn.Module): - r""" Swin2SR - A PyTorch impl of : `Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration`. - - Args: - img_size (int | tuple(int)): Input image size. Default 64 - patch_size (int | tuple(int)): Patch size. Default: 1 - in_chans (int): Number of input image channels. Default: 3 - embed_dim (int): Patch embedding dimension. Default: 96 - depths (tuple(int)): Depth of each Swin Transformer layer. - num_heads (tuple(int)): Number of attention heads in different layers. - window_size (int): Window size. Default: 7 - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4 - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - drop_rate (float): Dropout rate. 
Default: 0 - attn_drop_rate (float): Attention dropout rate. Default: 0 - drop_path_rate (float): Stochastic depth rate. Default: 0.1 - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False - patch_norm (bool): If True, add normalization after patch embedding. Default: True - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False - upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compress artifact reduction - img_range: Image range. 1. or 255. - upsampler: The reconstruction reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None - resi_connection: The convolutional block before residual connection. '1conv'/'3conv' - """ - - def __init__(self, img_size=64, patch_size=1, in_chans=3, - embed_dim=96, depths=[6, 6, 6, 6], num_heads=[6, 6, 6, 6], - window_size=7, mlp_ratio=4., qkv_bias=True, - drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1, - norm_layer=nn.LayerNorm, ape=False, patch_norm=True, - use_checkpoint=False, upscale=2, img_range=1., upsampler='', resi_connection='1conv', - **kwargs): - super(Swin2SR, self).__init__() - num_in_ch = in_chans - num_out_ch = in_chans - num_feat = 64 - self.img_range = img_range - if in_chans == 3: - rgb_mean = (0.4488, 0.4371, 0.4040) - self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1) - else: - self.mean = torch.zeros(1, 1, 1, 1) - self.upscale = upscale - self.upsampler = upsampler - self.window_size = window_size - - ##################################################################################################### - ################################### 1, shallow feature extraction ################################### - self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1) - - ##################################################################################################### - ################################### 2, deep feature extraction ###################################### - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.num_features = embed_dim - self.mlp_ratio = mlp_ratio - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - num_patches = self.patch_embed.num_patches - patches_resolution = self.patch_embed.patches_resolution - self.patches_resolution = patches_resolution - - # merge non-overlapping patches into image - self.patch_unembed = PatchUnEmbed( - img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - - # absolute position embedding - if self.ape: - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim)) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build Residual Swin Transformer blocks (RSTB) - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = RSTB(dim=embed_dim, - input_resolution=(patches_resolution[0], - patches_resolution[1]), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=self.mlp_ratio, - qkv_bias=qkv_bias, - drop=drop_rate, attn_drop=attn_drop_rate, 
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results - norm_layer=norm_layer, - downsample=None, - use_checkpoint=use_checkpoint, - img_size=img_size, - patch_size=patch_size, - resi_connection=resi_connection - - ) - self.layers.append(layer) - - if self.upsampler == 'pixelshuffle_hf': - self.layers_hf = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = RSTB(dim=embed_dim, - input_resolution=(patches_resolution[0], - patches_resolution[1]), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=self.mlp_ratio, - qkv_bias=qkv_bias, - drop=drop_rate, attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results - norm_layer=norm_layer, - downsample=None, - use_checkpoint=use_checkpoint, - img_size=img_size, - patch_size=patch_size, - resi_connection=resi_connection - - ) - self.layers_hf.append(layer) - - self.norm = norm_layer(self.num_features) - - # build the last conv layer in deep feature extraction - if resi_connection == '1conv': - self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1) - elif resi_connection == '3conv': - # to save parameters and memory - self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1)) - - ##################################################################################################### - ################################ 3, high quality image reconstruction ################################ - if self.upsampler == 'pixelshuffle': - # for classical SR - self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), - nn.LeakyReLU(inplace=True)) - self.upsample = Upsample(upscale, num_feat) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - elif self.upsampler == 'pixelshuffle_aux': - self.conv_bicubic = nn.Conv2d(num_in_ch, num_feat, 3, 1, 1) - self.conv_before_upsample = nn.Sequential( - nn.Conv2d(embed_dim, num_feat, 3, 1, 1), - nn.LeakyReLU(inplace=True)) - self.conv_aux = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - self.conv_after_aux = nn.Sequential( - nn.Conv2d(3, num_feat, 3, 1, 1), - nn.LeakyReLU(inplace=True)) - self.upsample = Upsample(upscale, num_feat) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - - elif self.upsampler == 'pixelshuffle_hf': - self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), - nn.LeakyReLU(inplace=True)) - self.upsample = Upsample(upscale, num_feat) - self.upsample_hf = Upsample_hf(upscale, num_feat) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - self.conv_first_hf = nn.Sequential(nn.Conv2d(num_feat, embed_dim, 3, 1, 1), - nn.LeakyReLU(inplace=True)) - self.conv_after_body_hf = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1) - self.conv_before_upsample_hf = nn.Sequential( - nn.Conv2d(embed_dim, num_feat, 3, 1, 1), - nn.LeakyReLU(inplace=True)) - self.conv_last_hf = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - - elif self.upsampler == 'pixelshuffledirect': - # for lightweight SR (to save parameters) - self.upsample = UpsampleOneStep(upscale, embed_dim, num_out_ch, - (patches_resolution[0], patches_resolution[1])) - elif self.upsampler == 'nearest+conv': - # for real-world SR (less artifacts) - assert self.upscale == 4, 'only support x4 now.' 
- self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1), - nn.LeakyReLU(inplace=True)) - self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - else: - # for image denoising and JPEG compression artifact reduction - self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1) - - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'absolute_pos_embed'} - - @torch.jit.ignore - def no_weight_decay_keywords(self): - return {'relative_position_bias_table'} - - def check_image_size(self, x): - _, _, h, w = x.size() - mod_pad_h = (self.window_size - h % self.window_size) % self.window_size - mod_pad_w = (self.window_size - w % self.window_size) % self.window_size - x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect') - return x - - def forward_features(self, x): - x_size = (x.shape[2], x.shape[3]) - x = self.patch_embed(x) - if self.ape: - x = x + self.absolute_pos_embed - x = self.pos_drop(x) - - for layer in self.layers: - x = layer(x, x_size) - - x = self.norm(x) # B L C - x = self.patch_unembed(x, x_size) - - return x - - def forward_features_hf(self, x): - x_size = (x.shape[2], x.shape[3]) - x = self.patch_embed(x) - if self.ape: - x = x + self.absolute_pos_embed - x = self.pos_drop(x) - - for layer in self.layers_hf: - x = layer(x, x_size) - - x = self.norm(x) # B L C - x = self.patch_unembed(x, x_size) - - return x - - def forward(self, x): - H, W = x.shape[2:] - x = self.check_image_size(x) - - self.mean = self.mean.type_as(x) - x = (x - self.mean) * self.img_range - - if self.upsampler == 'pixelshuffle': - # for classical SR - x = self.conv_first(x) - x = self.conv_after_body(self.forward_features(x)) + x - x = self.conv_before_upsample(x) - x = self.conv_last(self.upsample(x)) - elif self.upsampler == 'pixelshuffle_aux': - bicubic = F.interpolate(x, size=(H * self.upscale, W * self.upscale), mode='bicubic', align_corners=False) - bicubic = self.conv_bicubic(bicubic) - x = self.conv_first(x) - x = self.conv_after_body(self.forward_features(x)) + x - x = self.conv_before_upsample(x) - aux = self.conv_aux(x) # b, 3, LR_H, LR_W - x = self.conv_after_aux(aux) - x = self.upsample(x)[:, :, :H * self.upscale, :W * self.upscale] + bicubic[:, :, :H * self.upscale, :W * self.upscale] - x = self.conv_last(x) - aux = aux / self.img_range + self.mean - elif self.upsampler == 'pixelshuffle_hf': - # for classical SR with HF - x = self.conv_first(x) - x = self.conv_after_body(self.forward_features(x)) + x - x_before = self.conv_before_upsample(x) - x_out = self.conv_last(self.upsample(x_before)) - - x_hf = self.conv_first_hf(x_before) - x_hf = self.conv_after_body_hf(self.forward_features_hf(x_hf)) + x_hf - x_hf = self.conv_before_upsample_hf(x_hf) - x_hf = self.conv_last_hf(self.upsample_hf(x_hf)) - x = x_out + x_hf - x_hf = x_hf / self.img_range + self.mean - - elif self.upsampler == 'pixelshuffledirect': - # for lightweight SR - x = self.conv_first(x) - x = self.conv_after_body(self.forward_features(x)) + x - x = 
self.upsample(x) - elif self.upsampler == 'nearest+conv': - # for real-world SR - x = self.conv_first(x) - x = self.conv_after_body(self.forward_features(x)) + x - x = self.conv_before_upsample(x) - x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest'))) - x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest'))) - x = self.conv_last(self.lrelu(self.conv_hr(x))) - else: - # for image denoising and JPEG compression artifact reduction - x_first = self.conv_first(x) - res = self.conv_after_body(self.forward_features(x_first)) + x_first - x = x + self.conv_last(res) - - x = x / self.img_range + self.mean - if self.upsampler == "pixelshuffle_aux": - return x[:, :, :H*self.upscale, :W*self.upscale], aux - - elif self.upsampler == "pixelshuffle_hf": - x_out = x_out / self.img_range + self.mean - return x_out[:, :, :H*self.upscale, :W*self.upscale], x[:, :, :H*self.upscale, :W*self.upscale], x_hf[:, :, :H*self.upscale, :W*self.upscale] - - else: - return x[:, :, :H*self.upscale, :W*self.upscale] - - def flops(self): - flops = 0 - H, W = self.patches_resolution - flops += H * W * 3 * self.embed_dim * 9 - flops += self.patch_embed.flops() - for i, layer in enumerate(self.layers): - flops += layer.flops() - flops += H * W * 3 * self.embed_dim * self.embed_dim - flops += self.upsample.flops() - return flops - - -if __name__ == '__main__': - upscale = 4 - window_size = 8 - height = (1024 // upscale // window_size + 1) * window_size - width = (720 // upscale // window_size + 1) * window_size - model = Swin2SR(upscale=2, img_size=(height, width), - window_size=window_size, img_range=1., depths=[6, 6, 6, 6], - embed_dim=60, num_heads=[6, 6, 6, 6], mlp_ratio=2, upsampler='pixelshuffledirect') - print(model) - print(height, width, model.flops() / 1e9) - - x = torch.randn((1, 3, height, width)) - x = model(x) - print(x.shape) \ No newline at end of file diff --git a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/rich.py b/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/rich.py deleted file mode 100644 index 745d8c8bc41116fe1ead73e18569c075a03450e1..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/rich.py +++ /dev/null @@ -1,2 +0,0 @@ -from rich.console import Console -console = Console() \ No newline at end of file diff --git a/spaces/jackli888/stable-diffusion-webui/modules/extras.py b/spaces/jackli888/stable-diffusion-webui/modules/extras.py deleted file mode 100644 index 6a9af2d8e641fdf1ebd29045078d29b5aeae3d6f..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/extras.py +++ /dev/null @@ -1,258 +0,0 @@ -import os -import re -import shutil - - -import torch -import tqdm - -from modules import shared, images, sd_models, sd_vae, sd_models_config -from modules.ui_common import plaintext_to_html -import gradio as gr -import safetensors.torch - - -def run_pnginfo(image): - if image is None: - return '', '', '' - - geninfo, items = images.read_info_from_image(image) - items = {**{'parameters': geninfo}, **items} - - info = '' - for key, text in items.items(): - info += f""" -
<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
      -""".strip()+"\n" - - if len(info) == 0: - message = "Nothing found in the image." - info = f"

<div><p>{message}</p></div>

      " - - return '', geninfo, info - - -def create_config(ckpt_result, config_source, a, b, c): - def config(x): - res = sd_models_config.find_checkpoint_config_near_filename(x) if x else None - return res if res != shared.sd_default_config else None - - if config_source == 0: - cfg = config(a) or config(b) or config(c) - elif config_source == 1: - cfg = config(b) - elif config_source == 2: - cfg = config(c) - else: - cfg = None - - if cfg is None: - return - - filename, _ = os.path.splitext(ckpt_result) - checkpoint_filename = filename + ".yaml" - - print("Copying config:") - print(" from:", cfg) - print(" to:", checkpoint_filename) - shutil.copyfile(cfg, checkpoint_filename) - - -checkpoint_dict_skip_on_merge = ["cond_stage_model.transformer.text_model.embeddings.position_ids"] - - -def to_half(tensor, enable): - if enable and tensor.dtype == torch.float: - return tensor.half() - - return tensor - - -def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_model_name, interp_method, multiplier, save_as_half, custom_name, checkpoint_format, config_source, bake_in_vae, discard_weights): - shared.state.begin() - shared.state.job = 'model-merge' - - def fail(message): - shared.state.textinfo = message - shared.state.end() - return [*[gr.update() for _ in range(4)], message] - - def weighted_sum(theta0, theta1, alpha): - return ((1 - alpha) * theta0) + (alpha * theta1) - - def get_difference(theta1, theta2): - return theta1 - theta2 - - def add_difference(theta0, theta1_2_diff, alpha): - return theta0 + (alpha * theta1_2_diff) - - def filename_weighted_sum(): - a = primary_model_info.model_name - b = secondary_model_info.model_name - Ma = round(1 - multiplier, 2) - Mb = round(multiplier, 2) - - return f"{Ma}({a}) + {Mb}({b})" - - def filename_add_difference(): - a = primary_model_info.model_name - b = secondary_model_info.model_name - c = tertiary_model_info.model_name - M = round(multiplier, 2) - - return f"{a} + {M}({b} - {c})" - - def filename_nothing(): - return primary_model_info.model_name - - theta_funcs = { - "Weighted sum": (filename_weighted_sum, None, weighted_sum), - "Add difference": (filename_add_difference, get_difference, add_difference), - "No interpolation": (filename_nothing, None, None), - } - filename_generator, theta_func1, theta_func2 = theta_funcs[interp_method] - shared.state.job_count = (1 if theta_func1 else 0) + (1 if theta_func2 else 0) - - if not primary_model_name: - return fail("Failed: Merging requires a primary model.") - - primary_model_info = sd_models.checkpoints_list[primary_model_name] - - if theta_func2 and not secondary_model_name: - return fail("Failed: Merging requires a secondary model.") - - secondary_model_info = sd_models.checkpoints_list[secondary_model_name] if theta_func2 else None - - if theta_func1 and not tertiary_model_name: - return fail(f"Failed: Interpolation method ({interp_method}) requires a tertiary model.") - - tertiary_model_info = sd_models.checkpoints_list[tertiary_model_name] if theta_func1 else None - - result_is_inpainting_model = False - result_is_instruct_pix2pix_model = False - - if theta_func2: - shared.state.textinfo = f"Loading B" - print(f"Loading {secondary_model_info.filename}...") - theta_1 = sd_models.read_state_dict(secondary_model_info.filename, map_location='cpu') - else: - theta_1 = None - - if theta_func1: - shared.state.textinfo = f"Loading C" - print(f"Loading {tertiary_model_info.filename}...") - theta_2 = sd_models.read_state_dict(tertiary_model_info.filename, map_location='cpu') - 
- shared.state.textinfo = 'Merging B and C' - shared.state.sampling_steps = len(theta_1.keys()) - for key in tqdm.tqdm(theta_1.keys()): - if key in checkpoint_dict_skip_on_merge: - continue - - if 'model' in key: - if key in theta_2: - t2 = theta_2.get(key, torch.zeros_like(theta_1[key])) - theta_1[key] = theta_func1(theta_1[key], t2) - else: - theta_1[key] = torch.zeros_like(theta_1[key]) - - shared.state.sampling_step += 1 - del theta_2 - - shared.state.nextjob() - - shared.state.textinfo = f"Loading {primary_model_info.filename}..." - print(f"Loading {primary_model_info.filename}...") - theta_0 = sd_models.read_state_dict(primary_model_info.filename, map_location='cpu') - - print("Merging...") - shared.state.textinfo = 'Merging A and B' - shared.state.sampling_steps = len(theta_0.keys()) - for key in tqdm.tqdm(theta_0.keys()): - if theta_1 and 'model' in key and key in theta_1: - - if key in checkpoint_dict_skip_on_merge: - continue - - a = theta_0[key] - b = theta_1[key] - - # this enables merging an inpainting model (A) with another one (B); - # where normal model would have 4 channels, for latenst space, inpainting model would - # have another 4 channels for unmasked picture's latent space, plus one channel for mask, for a total of 9 - if a.shape != b.shape and a.shape[0:1] + a.shape[2:] == b.shape[0:1] + b.shape[2:]: - if a.shape[1] == 4 and b.shape[1] == 9: - raise RuntimeError("When merging inpainting model with a normal one, A must be the inpainting model.") - if a.shape[1] == 4 and b.shape[1] == 8: - raise RuntimeError("When merging instruct-pix2pix model with a normal one, A must be the instruct-pix2pix model.") - - if a.shape[1] == 8 and b.shape[1] == 4:#If we have an Instruct-Pix2Pix model... - theta_0[key][:, 0:4, :, :] = theta_func2(a[:, 0:4, :, :], b, multiplier)#Merge only the vectors the models have in common. Otherwise we get an error due to dimension mismatch. - result_is_instruct_pix2pix_model = True - else: - assert a.shape[1] == 9 and b.shape[1] == 4, f"Bad dimensions for merged layer {key}: A={a.shape}, B={b.shape}" - theta_0[key][:, 0:4, :, :] = theta_func2(a[:, 0:4, :, :], b, multiplier) - result_is_inpainting_model = True - else: - theta_0[key] = theta_func2(a, b, multiplier) - - theta_0[key] = to_half(theta_0[key], save_as_half) - - shared.state.sampling_step += 1 - - del theta_1 - - bake_in_vae_filename = sd_vae.vae_dict.get(bake_in_vae, None) - if bake_in_vae_filename is not None: - print(f"Baking in VAE from {bake_in_vae_filename}") - shared.state.textinfo = 'Baking in VAE' - vae_dict = sd_vae.load_vae_dict(bake_in_vae_filename, map_location='cpu') - - for key in vae_dict.keys(): - theta_0_key = 'first_stage_model.' + key - if theta_0_key in theta_0: - theta_0[theta_0_key] = to_half(vae_dict[key], save_as_half) - - del vae_dict - - if save_as_half and not theta_func2: - for key in theta_0.keys(): - theta_0[key] = to_half(theta_0[key], save_as_half) - - if discard_weights: - regex = re.compile(discard_weights) - for key in list(theta_0): - if re.search(regex, key): - theta_0.pop(key, None) - - ckpt_dir = shared.cmd_opts.ckpt_dir or sd_models.model_path - - filename = filename_generator() if custom_name == '' else custom_name - filename += ".inpainting" if result_is_inpainting_model else "" - filename += ".instruct-pix2pix" if result_is_instruct_pix2pix_model else "" - filename += "." 
+ checkpoint_format - - output_modelname = os.path.join(ckpt_dir, filename) - - shared.state.nextjob() - shared.state.textinfo = "Saving" - print(f"Saving to {output_modelname}...") - - _, extension = os.path.splitext(output_modelname) - if extension.lower() == ".safetensors": - safetensors.torch.save_file(theta_0, output_modelname, metadata={"format": "pt"}) - else: - torch.save(theta_0, output_modelname) - - sd_models.list_models() - - create_config(output_modelname, config_source, primary_model_info, secondary_model_info, tertiary_model_info) - - print(f"Checkpoint saved to {output_modelname}.") - shared.state.textinfo = "Checkpoint saved" - shared.state.end() - - return [*[gr.Dropdown.update(choices=sd_models.checkpoint_tiles()) for _ in range(4)], "Checkpoint saved to " + output_modelname] diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/lib/useCountdown.ts b/spaces/jbilcke-hf/ai-clip-factory/src/lib/useCountdown.ts deleted file mode 100644 index 0aa553427abc29da26366ac0982b7e5032ee93d2..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/lib/useCountdown.ts +++ /dev/null @@ -1,60 +0,0 @@ -"use client" - -import { useEffect, useRef, useState } from "react" - -export function useCountdown({ - isActive, - timerId, - durationInSec, - onEnd = () => {}, -}: { - isActive: boolean - timerId: string | number - durationInSec: number - onEnd: () => void -}) { - const intervalRef = useRef() - const startedAt = useRef() - const [elapsedTimeInMs, setElapsedTimeInMs] = useState(0) - - const durationInMs = durationInSec * 1000 - - useEffect(() => { - clearInterval(intervalRef.current) - setElapsedTimeInMs(0) - startedAt.current = new Date() - - if (isActive) { - intervalRef.current = setInterval(() => { - const now = new Date() - const newElapsedInMs = Math.min(durationInMs, now.getTime() - startedAt.current!.getTime()) - setElapsedTimeInMs(newElapsedInMs) - if (elapsedTimeInMs > durationInMs) { - // console.log("end of timer") - clearInterval(intervalRef.current) - onEnd() - } - }, 100) - } - - return () => { - // console.log("destruction of timer") - clearInterval(intervalRef.current) - } - }, [isActive, timerId, durationInMs]) - - const remainingTimeInMs = Math.max(0, durationInMs - elapsedTimeInMs) - - const progressRatio = elapsedTimeInMs / durationInMs - const progressPercent = progressRatio * 100 - - return { - elapsedTimeInMs, - remainingTimeInMs, - elapsedTimeInSec: Math.round(elapsedTimeInMs / 1000), - remainingTimeInSec: Math.round(remainingTimeInMs / 1000), - progressRatio, - progressPercent, - timeIsUp: remainingTimeInMs <= 0, - } -} \ No newline at end of file diff --git a/spaces/jdhuka/HTML5Interactivity/README.md b/spaces/jdhuka/HTML5Interactivity/README.md deleted file mode 100644 index 3528eb6faee83c1c08a5bf59c91b551e8eb413b7..0000000000000000000000000000000000000000 --- a/spaces/jdhuka/HTML5Interactivity/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: HTML5Interactivity -emoji: 👁 -colorFrom: gray -colorTo: gray -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jiejiejie0420/bingo/src/components/chat-scroll-anchor.tsx b/spaces/jiejiejie0420/bingo/src/components/chat-scroll-anchor.tsx deleted file mode 100644 index ac809f4486a48e134cb69314c3d0dae5e68d614e..0000000000000000000000000000000000000000 --- a/spaces/jiejiejie0420/bingo/src/components/chat-scroll-anchor.tsx +++ /dev/null @@ -1,29 +0,0 @@ -'use client' - -import * as React from 
'react' -import { useInView } from 'react-intersection-observer' - -import { useAtBottom } from '@/lib/hooks/use-at-bottom' - -interface ChatScrollAnchorProps { - trackVisibility?: boolean -} - -export function ChatScrollAnchor({ trackVisibility }: ChatScrollAnchorProps) { - const isAtBottom = useAtBottom() - const { ref, entry, inView } = useInView({ - trackVisibility, - delay: 100, - rootMargin: '0px 0px -150px 0px' - }) - - React.useEffect(() => { - if (isAtBottom && trackVisibility && !inView) { - entry?.target.scrollIntoView({ - block: 'start' - }) - } - }, [inView, entry, isAtBottom, trackVisibility]) - - return
      -} diff --git a/spaces/jiejiejie0420/bingo/src/lib/hooks/use-bing.ts b/spaces/jiejiejie0420/bingo/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/jiejiejie0420/bingo/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - setChatState((draft) => { - const message = draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? 
`https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, - chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/_distutils_hack/override.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/_distutils_hack/override.py deleted file mode 100644 index 
2cc433a4a55e3b41fa31089918fb62096092f89f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/_distutils_hack/override.py +++ /dev/null @@ -1 +0,0 @@ -__import__('_distutils_hack').do_override() diff --git a/spaces/jordonpeter01/ai-comic-factory/src/lib/base64ToFile.ts b/spaces/jordonpeter01/ai-comic-factory/src/lib/base64ToFile.ts deleted file mode 100644 index 8286631c6899135e74c02be4dd8395e8864714c8..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/ai-comic-factory/src/lib/base64ToFile.ts +++ /dev/null @@ -1,11 +0,0 @@ -export function base64ToFile(dataurl: string, filename: string) { - var arr = dataurl.split(','), - mime = arr[0].match(/:(.*?);/)?.[1], - bstr = atob(arr[arr.length - 1]), - n = bstr.length, - u8arr = new Uint8Array(n); - while(n--){ - u8arr[n] = bstr.charCodeAt(n); - } - return new File([u8arr], filename, {type:mime}); -} \ No newline at end of file diff --git a/spaces/jorge-henao/ask2democracy/document_quieries.py b/spaces/jorge-henao/ask2democracy/document_quieries.py deleted file mode 100644 index ca9b488f4f7b7149844db6474a401bdf78b06b90..0000000000000000000000000000000000000000 --- a/spaces/jorge-henao/ask2democracy/document_quieries.py +++ /dev/null @@ -1,36 +0,0 @@ -from abc import ABC, abstractmethod -from haystack.nodes import BM25Retriever, FARMReader -from haystack.document_stores import ElasticsearchDocumentStore -from haystack.pipelines import ExtractiveQAPipeline -import certifi - -ca_certs=certifi.where() - -class DocumentQueries(ABC): - - @abstractmethod - def search_by_query(self, query : str, retriever_top_k: int, reader_top_k: int, es_index: str): - pass - -class ExtractiveProposalQueries(DocumentQueries): - - def __init__(self, es_host: str, es_index: str, es_user, es_password, reader_name_or_path: str, use_gpu = True) -> None: - reader = FARMReader(model_name_or_path = reader_name_or_path, use_gpu = use_gpu, num_processes=1) - self._initialize_pipeline(es_host, es_index, es_user, es_password, reader = reader) - - def _initialize_pipeline(self, es_host, es_index, es_user, es_password, reader = None): - if reader is not None: - self.reader = reader - self.es_host = es_host - self.es_user = es_user - self.es_password = es_password - self.document_store = ElasticsearchDocumentStore(host = es_host, username=es_user, password=es_password, index = es_index, port = 443, scheme='https', verify_certs=True, ca_certs=ca_certs) - self.retriever = BM25Retriever(document_store = self.document_store) - self.pipe = ExtractiveQAPipeline(self.reader, self.retriever) - - def search_by_query(self, query : str, retriever_top_k: int, reader_top_k: int, es_index: str = None) : - if es_index is not None: - self._initialize_pipeline(self.es_host, es_index, self.es_user, self.es_password) - params = {"Retriever": {"top_k": retriever_top_k}, "Reader": {"top_k": reader_top_k}} - prediction = self.pipe.run( query = query, params = params) - return prediction["answers"] diff --git a/spaces/jracca/04-learning-space/README.md b/spaces/jracca/04-learning-space/README.md deleted file mode 100644 index 7e1f9e56b0155fb88627200bb5853b500d5f0b7a..0000000000000000000000000000000000000000 --- a/spaces/jracca/04-learning-space/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 04 Learning Space -emoji: 🚀 -colorFrom: pink -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/julien-c/sveltekit-demo/src/app.css b/spaces/julien-c/sveltekit-demo/src/app.css deleted file mode 100644 index 77bf6af764d05b4adcf9c5eaca58d01c97164f06..0000000000000000000000000000000000000000 --- a/spaces/julien-c/sveltekit-demo/src/app.css +++ /dev/null @@ -1,107 +0,0 @@ -@import '@fontsource/fira-mono'; - -:root { - font-family: Arial, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, - Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif; - --font-mono: 'Fira Mono', monospace; - --pure-white: #ffffff; - --primary-color: #b9c6d2; - --secondary-color: #d0dde9; - --tertiary-color: #edf0f8; - --accent-color: #ff3e00; - --heading-color: rgba(0, 0, 0, 0.7); - --text-color: #444444; - --background-without-opacity: rgba(255, 255, 255, 0.7); - --column-width: 42rem; - --column-margin-top: 4rem; -} - -body { - min-height: 100vh; - margin: 0; - background-color: var(--primary-color); - background: linear-gradient( - 180deg, - var(--primary-color) 0%, - var(--secondary-color) 10.45%, - var(--tertiary-color) 41.35% - ); -} - -body::before { - content: ''; - width: 80vw; - height: 100vh; - position: absolute; - top: 0; - left: 10vw; - z-index: -1; - background: radial-gradient( - 50% 50% at 50% 50%, - var(--pure-white) 0%, - rgba(255, 255, 255, 0) 100% - ); - opacity: 0.05; -} - -#svelte { - min-height: 100vh; - display: flex; - flex-direction: column; -} - -h1, -h2, -p { - font-weight: 400; - color: var(--heading-color); -} - -p { - line-height: 1.5; -} - -a { - color: var(--accent-color); - text-decoration: none; -} - -a:hover { - text-decoration: underline; -} - -h1 { - font-size: 2rem; - text-align: center; -} - -h2 { - font-size: 1rem; -} - -pre { - font-size: 16px; - font-family: var(--font-mono); - background-color: rgba(255, 255, 255, 0.45); - border-radius: 3px; - box-shadow: 2px 2px 6px rgb(255 255 255 / 25%); - padding: 0.5em; - overflow-x: auto; - color: var(--text-color); -} - -input, -button { - font-size: inherit; - font-family: inherit; -} - -button:focus:not(:focus-visible) { - outline: none; -} - -@media (min-width: 720px) { - h1 { - font-size: 2.4rem; - } -} diff --git a/spaces/justest/gpt4free/g4f/Provider/Providers/helpers/you.py b/spaces/justest/gpt4free/g4f/Provider/Providers/helpers/you.py deleted file mode 100644 index 02985ed14d4848c2de20a99b4771d208286a2558..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/g4f/Provider/Providers/helpers/you.py +++ /dev/null @@ -1,79 +0,0 @@ -import sys -import json -import urllib.parse - -from curl_cffi import requests - -config = json.loads(sys.argv[1]) -messages = config['messages'] -prompt = '' - - -def transform(messages: list) -> list: - result = [] - i = 0 - - while i < len(messages): - if messages[i]['role'] == 'user': - question = messages[i]['content'] - i += 1 - - if i < len(messages) and messages[i]['role'] == 'assistant': - answer = messages[i]['content'] - i += 1 - else: - answer = '' - - result.append({'question': question, 'answer': answer}) - - elif messages[i]['role'] == 'assistant': - result.append({'question': '', 'answer': messages[i]['content']}) - i += 1 - - elif messages[i]['role'] == 'system': - result.append({'question': messages[i]['content'], 'answer': ''}) - i += 1 - - return result - -headers = { - 'Content-Type': 'application/x-www-form-urlencoded', - 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', - 'Sec-Fetch-Site': 'same-origin', - 'Accept-Language': 
'en-GB,en;q=0.9', - 'Sec-Fetch-Mode': 'navigate', - 'Host': 'you.com', - 'Origin': 'https://you.com', - 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.4 Safari/605.1.15', - 'Referer': 'https://you.com/api/streamingSearch?q=nice&safeSearch=Moderate&onShoppingPage=false&mkt=&responseFilter=WebPages,Translations,TimeZone,Computation,RelatedSearches&domain=youchat&queryTraceId=7a6671f8-5881-404d-8ea3-c3f8301f85ba&chat=%5B%7B%22question%22%3A%22hi%22%2C%22answer%22%3A%22Hello!%20How%20can%20I%20assist%20you%20today%3F%22%7D%5D&chatId=7a6671f8-5881-404d-8ea3-c3f8301f85ba&__cf_chl_tk=ex2bw6vn5vbLsUm8J5rDYUC0Bjzc1XZqka6vUl6765A-1684108495-0-gaNycGzNDtA', - 'Connection': 'keep-alive', - 'Sec-Fetch-Dest': 'document', - 'Priority': 'u=0, i', -} - -if messages[-1]['role'] == 'user': - prompt = messages[-1]['content'] - messages = messages[:-1] - -params = urllib.parse.urlencode({ - 'q': prompt, - 'domain': 'youchat', - 'chat': transform(messages) -}) - -def output(chunk): - if b'"youChatToken"' in chunk: - chunk_json = json.loads(chunk.decode().split('data: ')[1]) - - print(chunk_json['youChatToken'], flush=True, end = '') - -while True: - try: - response = requests.get(f'https://you.com/api/streamingSearch?{params}', - headers=headers, content_callback=output, impersonate='safari15_5') - - exit(0) - - except Exception as e: - print('an error occured, retrying... |', e, flush=True) - continue \ No newline at end of file diff --git a/spaces/karay/diar_speech/README.md b/spaces/karay/diar_speech/README.md deleted file mode 100644 index 3cd05122968221b7f79ced6bbaf2ff04ea4ef007..0000000000000000000000000000000000000000 --- a/spaces/karay/diar_speech/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Speech recognition and diarization -emoji: 🌖 -colorFrom: gray -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -python_version: 3.10.0 ---- - - diff --git a/spaces/kcagle/AutoGPT/autogpt/json_utils/json_fix_llm.py b/spaces/kcagle/AutoGPT/autogpt/json_utils/json_fix_llm.py deleted file mode 100644 index 869aed125cfb8cd7a69ed02eeb389cc72a3e296b..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/autogpt/json_utils/json_fix_llm.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance -of the ChatGPT API or LLM models.""" -from __future__ import annotations - -import contextlib -import json -from typing import Any, Dict - -from colorama import Fore -from regex import regex - -from autogpt.config import Config -from autogpt.json_utils.json_fix_general import correct_json -from autogpt.llm_utils import call_ai_function -from autogpt.logs import logger -from autogpt.speech import say_text - -JSON_SCHEMA = """ -{ - "command": { - "name": "command name", - "args": { - "arg name": "value" - } - }, - "thoughts": - { - "text": "thought", - "reasoning": "reasoning", - "plan": "- short bulleted\n- list that conveys\n- long-term plan", - "criticism": "constructive self-criticism", - "speak": "thoughts summary to say to user" - } -} -""" - -CFG = Config() - - -def auto_fix_json(json_string: str, schema: str) -> str: - """Fix the given JSON string to make it parseable and fully compliant with - the provided schema using GPT-3. - - Args: - json_string (str): The JSON string to fix. - schema (str): The schema to use to fix the JSON. - Returns: - str: The fixed JSON string. 
- """ - # Try to fix the JSON using GPT: - function_string = "def fix_json(json_string: str, schema:str=None) -> str:" - args = [f"'''{json_string}'''", f"'''{schema}'''"] - description_string = ( - "This function takes a JSON string and ensures that it" - " is parseable and fully compliant with the provided schema. If an object" - " or field specified in the schema isn't contained within the correct JSON," - " it is omitted. The function also escapes any double quotes within JSON" - " string values to ensure that they are valid. If the JSON string contains" - " any None or NaN values, they are replaced with null before being parsed." - ) - - # If it doesn't already start with a "`", add one: - if not json_string.startswith("`"): - json_string = "```json\n" + json_string + "\n```" - result_string = call_ai_function( - function_string, args, description_string, model=CFG.fast_llm_model - ) - logger.debug("------------ JSON FIX ATTEMPT ---------------") - logger.debug(f"Original JSON: {json_string}") - logger.debug("-----------") - logger.debug(f"Fixed JSON: {result_string}") - logger.debug("----------- END OF FIX ATTEMPT ----------------") - - try: - json.loads(result_string) # just check the validity - return result_string - except json.JSONDecodeError: # noqa: E722 - # Get the call stack: - # import traceback - # call_stack = traceback.format_exc() - # print(f"Failed to fix JSON: '{json_string}' "+call_stack) - return "failed" - - -def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]: - """Fix the given JSON string to make it parseable and fully compliant with two techniques. - - Args: - json_string (str): The JSON string to fix. - - Returns: - str: The fixed JSON string. - """ - - # Parse and print Assistant response - assistant_reply_json = fix_and_parse_json(assistant_reply) - if assistant_reply_json == {}: - assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply - ) - - if assistant_reply_json != {}: - return assistant_reply_json - - logger.error( - "Error: The following AI output couldn't be converted to a JSON:\n", - assistant_reply, - ) - if CFG.speak_mode: - say_text("I have received an invalid JSON response from the OpenAI API.") - - return {} - - -def fix_and_parse_json( - json_to_load: str, try_to_fix_with_gpt: bool = True -) -> Dict[Any, Any]: - """Fix and parse JSON string - - Args: - json_to_load (str): The JSON string. - try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT. - Defaults to True. - - Returns: - str or dict[Any, Any]: The parsed JSON. - """ - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = json_to_load.replace("\t", "") - return json.loads(json_to_load) - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = correct_json(json_to_load) - return json.loads(json_to_load) - # Let's do something manually: - # sometimes GPT responds with something BEFORE the braces: - # "I'm sorry, I don't understand. Please try again." - # {"text": "I'm sorry, I don't understand. 
Please try again.", - # "confidence": 0.0} - # So let's try to find the first brace and then parse the rest - # of the string - try: - brace_index = json_to_load.index("{") - maybe_fixed_json = json_to_load[brace_index:] - last_brace_index = maybe_fixed_json.rindex("}") - maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1] - return json.loads(maybe_fixed_json) - except (json.JSONDecodeError, ValueError) as e: - return try_ai_fix(try_to_fix_with_gpt, e, json_to_load) - - -def try_ai_fix( - try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str -) -> Dict[Any, Any]: - """Try to fix the JSON with the AI - - Args: - try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI. - exception (Exception): The exception that was raised. - json_to_load (str): The JSON string to load. - - Raises: - exception: If try_to_fix_with_gpt is False. - - Returns: - str or dict[Any, Any]: The JSON string or dictionary. - """ - if not try_to_fix_with_gpt: - raise exception - if CFG.debug_mode: - logger.warn( - "Warning: Failed to parse AI output, attempting to fix." - "\n If you see this warning frequently, it's likely that" - " your prompt is confusing the AI. Try changing it up" - " slightly." - ) - # Now try to fix this up using the ai_functions - ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA) - - if ai_fixed_json != "failed": - return json.loads(ai_fixed_json) - # This allows the AI to react to the error message, - # which usually results in it correcting its ways. - # logger.error("Failed to fix AI output, telling the AI.") - return {} - - -def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str): - if CFG.speak_mode and CFG.debug_mode: - say_text( - "I have received an invalid JSON response from the OpenAI API. " - "Trying to fix it now." - ) - logger.error("Attempting to fix JSON by finding outermost brackets\n") - - try: - json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}") - json_match = json_pattern.search(json_string) - - if json_match: - # Extract the valid JSON object from the string - json_string = json_match.group(0) - logger.typewriter_log( - title="Apparently json was fixed.", title_color=Fore.GREEN - ) - if CFG.speak_mode and CFG.debug_mode: - say_text("Apparently json was fixed.") - else: - return {} - - except (json.JSONDecodeError, ValueError): - if CFG.debug_mode: - logger.error(f"Error: Invalid JSON: {json_string}\n") - if CFG.speak_mode: - say_text("Didn't work. 
I will have to ignore this response then.") - logger.error("Error: Invalid JSON, setting it to empty JSON now.\n") - json_string = {} - - return fix_and_parse_json(json_string) diff --git a/spaces/keras-io/structured-data-classification-grn-vsn/utils/preprocess.py b/spaces/keras-io/structured-data-classification-grn-vsn/utils/preprocess.py deleted file mode 100644 index 38d0f892684299ec6dff740e72fee1c7db68bd2a..0000000000000000000000000000000000000000 --- a/spaces/keras-io/structured-data-classification-grn-vsn/utils/preprocess.py +++ /dev/null @@ -1,84 +0,0 @@ -import tensorflow as tf -import pandas as pd -from .constants import CSV_HEADER, TARGET_FEATURE_NAME, WEIGHT_COLUMN_NAME, NUMERIC_FEATURE_NAMES - - -##Helper functions for preprocessing of data: - -def load_test_data(): - - test_data_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/census-income-mld/census-income.test.gz" - test_data = pd.read_csv(test_data_url, header=None, names=CSV_HEADER) - - return test_data - -test_data = load_test_data() - -CATEGORICAL_FEATURES_WITH_VOCABULARY = { - feature_name: sorted([str(value) for value in list(test_data[feature_name].unique())]) - for feature_name in CSV_HEADER - if feature_name - not in list(NUMERIC_FEATURE_NAMES + [WEIGHT_COLUMN_NAME, TARGET_FEATURE_NAME]) -} -# All features names. -FEATURE_NAMES = NUMERIC_FEATURE_NAMES + list( - CATEGORICAL_FEATURES_WITH_VOCABULARY.keys() -) -# Feature default values. -COLUMN_DEFAULTS = [ - [0.0] - if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME, WEIGHT_COLUMN_NAME] - else ["NA"] - for feature_name in CSV_HEADER -] - -def process(features, target): - for feature_name in features: - if feature_name in CATEGORICAL_FEATURES_WITH_VOCABULARY: - # Cast categorical feature values to string. - features[feature_name] = tf.cast(features[feature_name], tf.dtypes.string) - # Get the instance weight. - weight = features.pop(WEIGHT_COLUMN_NAME) - return features, target, weight - - -def get_dataset_from_csv(csv_file_path, shuffle=False, batch_size=128): - - dataset = tf.data.experimental.make_csv_dataset( - csv_file_path, - batch_size=batch_size, - column_names=CSV_HEADER, - column_defaults=COLUMN_DEFAULTS, - label_name=TARGET_FEATURE_NAME, - num_epochs=1, - header=False, - shuffle=shuffle, - ).map(process) - - return dataset - -def create_max_values_map(): - max_values_map = {} - for col in NUMERIC_FEATURE_NAMES: - max_val = max(test_data[col]) - max_values_map["max_"+col] = max_val - return max_values_map - -def create_dropdown_default_values_map(): - dropdown_default_values_map = {} - for col in CATEGORICAL_FEATURES_WITH_VOCABULARY.keys(): - max_val = test_data[col].max() - dropdown_default_values_map["max_"+col] = max_val - return dropdown_default_values_map - -def create_sample_test_data(): - - test_data["income_level"] = test_data["income_level"].apply( - lambda x: 0 if x == " - 50000." 
else 1) - - sample_df = test_data.loc[:20,:] - sample_df_values = sample_df.values.tolist() - - return sample_df_values - - \ No newline at end of file diff --git a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/configs/base.py b/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/configs/base.py deleted file mode 100644 index 78e4b36a9142b649ec39a8c59331bb2557f2ad57..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/configs/base.py +++ /dev/null @@ -1,56 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "arcface" -config.network = "r50" -config.resume = False -config.output = "ms1mv3_arcface_r50" - -config.dataset = "ms1m-retinaface-t1" -config.embedding_size = 512 -config.sample_rate = 1 -config.fp16 = False -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -if config.dataset == "emore": - config.rec = "/train_tmp/faces_emore" - config.num_classes = 85742 - config.num_image = 5822653 - config.num_epoch = 16 - config.warmup_epoch = -1 - config.decay_epoch = [8, 14, ] - config.val_targets = ["lfw", ] - -elif config.dataset == "ms1m-retinaface-t1": - config.rec = "/train_tmp/ms1m-retinaface-t1" - config.num_classes = 93431 - config.num_image = 5179510 - config.num_epoch = 25 - config.warmup_epoch = -1 - config.decay_epoch = [11, 17, 22] - config.val_targets = ["lfw", "cfp_fp", "agedb_30"] - -elif config.dataset == "glint360k": - config.rec = "/train_tmp/glint360k" - config.num_classes = 360232 - config.num_image = 17091657 - config.num_epoch = 20 - config.warmup_epoch = -1 - config.decay_epoch = [8, 12, 15, 18] - config.val_targets = ["lfw", "cfp_fp", "agedb_30"] - -elif config.dataset == "webface": - config.rec = "/train_tmp/faces_webface_112x112" - config.num_classes = 10572 - config.num_image = "forget" - config.num_epoch = 34 - config.warmup_epoch = -1 - config.decay_epoch = [20, 28, 32] - config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/kevinwang676/VITS2-Mandarin/monotonic_align/__init__.py b/spaces/kevinwang676/VITS2-Mandarin/monotonic_align/__init__.py deleted file mode 100644 index e97eecc595dd3bd97d0104ec62799e2e5efea57c..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VITS2-Mandarin/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/kukuhtw/AutoGPT/autogpt/speech/__init__.py b/spaces/kukuhtw/AutoGPT/autogpt/speech/__init__.py deleted file mode 100644 index 2ff0d2bf48dc356bf810cb5a2063d6774e5fec6e..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/autogpt/speech/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -"""This module contains the speech recognition and speech synthesis functions.""" -from autogpt.speech.say import say_text - -__all__ = ["say_text"] diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/raft/train_mixed.sh b/spaces/kukuhtw/VToonify/vtoonify/model/raft/train_mixed.sh deleted file mode 100644 index d9b979f143902a17a0ba7b0a8f960598b7096e0b..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/VToonify/vtoonify/model/raft/train_mixed.sh +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/bash -mkdir -p checkpoints -python -u train.py --name raft-chairs --stage chairs --validation chairs --gpus 0 --num_steps 120000 --batch_size 8 --lr 0.00025 --image_size 368 496 --wdecay 0.0001 --mixed_precision -python -u train.py --name raft-things --stage things --validation sintel --restore_ckpt checkpoints/raft-chairs.pth --gpus 0 --num_steps 120000 --batch_size 5 --lr 0.0001 --image_size 400 720 --wdecay 0.0001 --mixed_precision -python -u train.py --name raft-sintel --stage sintel --validation sintel --restore_ckpt checkpoints/raft-things.pth --gpus 0 --num_steps 120000 --batch_size 5 --lr 0.0001 --image_size 368 768 --wdecay 0.00001 --gamma=0.85 --mixed_precision -python -u train.py --name raft-kitti --stage kitti --validation kitti --restore_ckpt checkpoints/raft-sintel.pth --gpus 0 --num_steps 50000 --batch_size 5 --lr 0.0001 --image_size 288 960 --wdecay 0.00001 --gamma=0.85 --mixed_precision diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/client.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/client.py deleted file mode 100644 index 0d0f4c16c0cfa3751343e2ee60104e3e1a3db04c..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/client.py +++ /dev/null @@ -1,1305 +0,0 @@ -"""HTTP Client for asyncio.""" - -import asyncio -import base64 -import hashlib -import json -import os -import sys -import traceback -import warnings -from contextlib import suppress -from types import SimpleNamespace, TracebackType -from typing import ( - Any, - Awaitable, - Callable, - Coroutine, - FrozenSet, - Generator, - Generic, - Iterable, - List, - Mapping, - Optional, - Set, - Tuple, - Type, - TypeVar, - Union, -) - -import attr -from multidict import CIMultiDict, MultiDict, MultiDictProxy, istr -from yarl import URL - -from . 
import hdrs, http, payload -from .abc import AbstractCookieJar -from .client_exceptions import ( - ClientConnectionError as ClientConnectionError, - ClientConnectorCertificateError as ClientConnectorCertificateError, - ClientConnectorError as ClientConnectorError, - ClientConnectorSSLError as ClientConnectorSSLError, - ClientError as ClientError, - ClientHttpProxyError as ClientHttpProxyError, - ClientOSError as ClientOSError, - ClientPayloadError as ClientPayloadError, - ClientProxyConnectionError as ClientProxyConnectionError, - ClientResponseError as ClientResponseError, - ClientSSLError as ClientSSLError, - ContentTypeError as ContentTypeError, - InvalidURL as InvalidURL, - ServerConnectionError as ServerConnectionError, - ServerDisconnectedError as ServerDisconnectedError, - ServerFingerprintMismatch as ServerFingerprintMismatch, - ServerTimeoutError as ServerTimeoutError, - TooManyRedirects as TooManyRedirects, - WSServerHandshakeError as WSServerHandshakeError, -) -from .client_reqrep import ( - ClientRequest as ClientRequest, - ClientResponse as ClientResponse, - Fingerprint as Fingerprint, - RequestInfo as RequestInfo, - _merge_ssl_params, -) -from .client_ws import ClientWebSocketResponse as ClientWebSocketResponse -from .connector import ( - BaseConnector as BaseConnector, - NamedPipeConnector as NamedPipeConnector, - TCPConnector as TCPConnector, - UnixConnector as UnixConnector, -) -from .cookiejar import CookieJar -from .helpers import ( - DEBUG, - PY_36, - BasicAuth, - TimeoutHandle, - ceil_timeout, - get_env_proxy_for_url, - get_running_loop, - sentinel, - strip_auth_from_url, -) -from .http import WS_KEY, HttpVersion, WebSocketReader, WebSocketWriter -from .http_websocket import WSHandshakeError, WSMessage, ws_ext_gen, ws_ext_parse -from .streams import FlowControlDataQueue -from .tracing import Trace, TraceConfig -from .typedefs import Final, JSONEncoder, LooseCookies, LooseHeaders, StrOrURL - -__all__ = ( - # client_exceptions - "ClientConnectionError", - "ClientConnectorCertificateError", - "ClientConnectorError", - "ClientConnectorSSLError", - "ClientError", - "ClientHttpProxyError", - "ClientOSError", - "ClientPayloadError", - "ClientProxyConnectionError", - "ClientResponseError", - "ClientSSLError", - "ContentTypeError", - "InvalidURL", - "ServerConnectionError", - "ServerDisconnectedError", - "ServerFingerprintMismatch", - "ServerTimeoutError", - "TooManyRedirects", - "WSServerHandshakeError", - # client_reqrep - "ClientRequest", - "ClientResponse", - "Fingerprint", - "RequestInfo", - # connector - "BaseConnector", - "TCPConnector", - "UnixConnector", - "NamedPipeConnector", - # client_ws - "ClientWebSocketResponse", - # client - "ClientSession", - "ClientTimeout", - "request", -) - - -try: - from ssl import SSLContext -except ImportError: # pragma: no cover - SSLContext = object # type: ignore[misc,assignment] - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class ClientTimeout: - total: Optional[float] = None - connect: Optional[float] = None - sock_read: Optional[float] = None - sock_connect: Optional[float] = None - - # pool_queue_timeout: Optional[float] = None - # dns_resolution_timeout: Optional[float] = None - # socket_connect_timeout: Optional[float] = None - # connection_acquiring_timeout: Optional[float] = None - # new_connection_timeout: Optional[float] = None - # http_header_timeout: Optional[float] = None - # response_body_timeout: Optional[float] = None - - # to create a timeout specific for a single request, either - # - create a completely 
new one to overwrite the default - # - or use http://www.attrs.org/en/stable/api.html#attr.evolve - # to overwrite the defaults - - -# 5 Minute default read timeout -DEFAULT_TIMEOUT: Final[ClientTimeout] = ClientTimeout(total=5 * 60) - -_RetType = TypeVar("_RetType") - - -class ClientSession: - """First-class interface for making HTTP requests.""" - - ATTRS = frozenset( - [ - "_base_url", - "_source_traceback", - "_connector", - "requote_redirect_url", - "_loop", - "_cookie_jar", - "_connector_owner", - "_default_auth", - "_version", - "_json_serialize", - "_requote_redirect_url", - "_timeout", - "_raise_for_status", - "_auto_decompress", - "_trust_env", - "_default_headers", - "_skip_auto_headers", - "_request_class", - "_response_class", - "_ws_response_class", - "_trace_configs", - "_read_bufsize", - ] - ) - - _source_traceback = None # type: Optional[traceback.StackSummary] - _connector = None # type: Optional[BaseConnector] - - def __init__( - self, - base_url: Optional[StrOrURL] = None, - *, - connector: Optional[BaseConnector] = None, - loop: Optional[asyncio.AbstractEventLoop] = None, - cookies: Optional[LooseCookies] = None, - headers: Optional[LooseHeaders] = None, - skip_auto_headers: Optional[Iterable[str]] = None, - auth: Optional[BasicAuth] = None, - json_serialize: JSONEncoder = json.dumps, - request_class: Type[ClientRequest] = ClientRequest, - response_class: Type[ClientResponse] = ClientResponse, - ws_response_class: Type[ClientWebSocketResponse] = ClientWebSocketResponse, - version: HttpVersion = http.HttpVersion11, - cookie_jar: Optional[AbstractCookieJar] = None, - connector_owner: bool = True, - raise_for_status: bool = False, - read_timeout: Union[float, object] = sentinel, - conn_timeout: Optional[float] = None, - timeout: Union[object, ClientTimeout] = sentinel, - auto_decompress: bool = True, - trust_env: bool = False, - requote_redirect_url: bool = True, - trace_configs: Optional[List[TraceConfig]] = None, - read_bufsize: int = 2**16, - ) -> None: - if loop is None: - if connector is not None: - loop = connector._loop - - loop = get_running_loop(loop) - - if base_url is None or isinstance(base_url, URL): - self._base_url: Optional[URL] = base_url - else: - self._base_url = URL(base_url) - assert ( - self._base_url.origin() == self._base_url - ), "Only absolute URLs without path part are supported" - - if connector is None: - connector = TCPConnector(loop=loop) - - if connector._loop is not loop: - raise RuntimeError("Session and connector has to use same event loop") - - self._loop = loop - - if loop.get_debug(): - self._source_traceback = traceback.extract_stack(sys._getframe(1)) - - if cookie_jar is None: - cookie_jar = CookieJar(loop=loop) - self._cookie_jar = cookie_jar - - if cookies is not None: - self._cookie_jar.update_cookies(cookies) - - self._connector = connector - self._connector_owner = connector_owner - self._default_auth = auth - self._version = version - self._json_serialize = json_serialize - if timeout is sentinel: - self._timeout = DEFAULT_TIMEOUT - if read_timeout is not sentinel: - warnings.warn( - "read_timeout is deprecated, " "use timeout argument instead", - DeprecationWarning, - stacklevel=2, - ) - self._timeout = attr.evolve(self._timeout, total=read_timeout) - if conn_timeout is not None: - self._timeout = attr.evolve(self._timeout, connect=conn_timeout) - warnings.warn( - "conn_timeout is deprecated, " "use timeout argument instead", - DeprecationWarning, - stacklevel=2, - ) - else: - self._timeout = timeout # type: 
ignore[assignment] - if read_timeout is not sentinel: - raise ValueError( - "read_timeout and timeout parameters " - "conflict, please setup " - "timeout.read" - ) - if conn_timeout is not None: - raise ValueError( - "conn_timeout and timeout parameters " - "conflict, please setup " - "timeout.connect" - ) - self._raise_for_status = raise_for_status - self._auto_decompress = auto_decompress - self._trust_env = trust_env - self._requote_redirect_url = requote_redirect_url - self._read_bufsize = read_bufsize - - # Convert to list of tuples - if headers: - real_headers: CIMultiDict[str] = CIMultiDict(headers) - else: - real_headers = CIMultiDict() - self._default_headers: CIMultiDict[str] = real_headers - if skip_auto_headers is not None: - self._skip_auto_headers = frozenset(istr(i) for i in skip_auto_headers) - else: - self._skip_auto_headers = frozenset() - - self._request_class = request_class - self._response_class = response_class - self._ws_response_class = ws_response_class - - self._trace_configs = trace_configs or [] - for trace_config in self._trace_configs: - trace_config.freeze() - - def __init_subclass__(cls: Type["ClientSession"]) -> None: - warnings.warn( - "Inheritance class {} from ClientSession " - "is discouraged".format(cls.__name__), - DeprecationWarning, - stacklevel=2, - ) - - if DEBUG: - - def __setattr__(self, name: str, val: Any) -> None: - if name not in self.ATTRS: - warnings.warn( - "Setting custom ClientSession.{} attribute " - "is discouraged".format(name), - DeprecationWarning, - stacklevel=2, - ) - super().__setattr__(name, val) - - def __del__(self, _warnings: Any = warnings) -> None: - if not self.closed: - if PY_36: - kwargs = {"source": self} - else: - kwargs = {} - _warnings.warn( - f"Unclosed client session {self!r}", ResourceWarning, **kwargs - ) - context = {"client_session": self, "message": "Unclosed client session"} - if self._source_traceback is not None: - context["source_traceback"] = self._source_traceback - self._loop.call_exception_handler(context) - - def request( - self, method: str, url: StrOrURL, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP request.""" - return _RequestContextManager(self._request(method, url, **kwargs)) - - def _build_url(self, str_or_url: StrOrURL) -> URL: - url = URL(str_or_url) - if self._base_url is None: - return url - else: - assert not url.is_absolute() and url.path.startswith("/") - return self._base_url.join(url) - - async def _request( - self, - method: str, - str_or_url: StrOrURL, - *, - params: Optional[Mapping[str, str]] = None, - data: Any = None, - json: Any = None, - cookies: Optional[LooseCookies] = None, - headers: Optional[LooseHeaders] = None, - skip_auto_headers: Optional[Iterable[str]] = None, - auth: Optional[BasicAuth] = None, - allow_redirects: bool = True, - max_redirects: int = 10, - compress: Optional[str] = None, - chunked: Optional[bool] = None, - expect100: bool = False, - raise_for_status: Optional[bool] = None, - read_until_eof: bool = True, - proxy: Optional[StrOrURL] = None, - proxy_auth: Optional[BasicAuth] = None, - timeout: Union[ClientTimeout, object] = sentinel, - verify_ssl: Optional[bool] = None, - fingerprint: Optional[bytes] = None, - ssl_context: Optional[SSLContext] = None, - ssl: Optional[Union[SSLContext, bool, Fingerprint]] = None, - proxy_headers: Optional[LooseHeaders] = None, - trace_request_ctx: Optional[SimpleNamespace] = None, - read_bufsize: Optional[int] = None, - ) -> ClientResponse: - - # NOTE: timeout clamps existing connect and read 
timeouts. We cannot - # set the default to None because we need to detect if the user wants - # to use the existing timeouts by setting timeout to None. - - if self.closed: - raise RuntimeError("Session is closed") - - ssl = _merge_ssl_params(ssl, verify_ssl, ssl_context, fingerprint) - - if data is not None and json is not None: - raise ValueError( - "data and json parameters can not be used at the same time" - ) - elif json is not None: - data = payload.JsonPayload(json, dumps=self._json_serialize) - - if not isinstance(chunked, bool) and chunked is not None: - warnings.warn("Chunk size is deprecated #1615", DeprecationWarning) - - redirects = 0 - history = [] - version = self._version - - # Merge with default headers and transform to CIMultiDict - headers = self._prepare_headers(headers) - proxy_headers = self._prepare_headers(proxy_headers) - - try: - url = self._build_url(str_or_url) - except ValueError as e: - raise InvalidURL(str_or_url) from e - - skip_headers = set(self._skip_auto_headers) - if skip_auto_headers is not None: - for i in skip_auto_headers: - skip_headers.add(istr(i)) - - if proxy is not None: - try: - proxy = URL(proxy) - except ValueError as e: - raise InvalidURL(proxy) from e - - if timeout is sentinel: - real_timeout: ClientTimeout = self._timeout - else: - if not isinstance(timeout, ClientTimeout): - real_timeout = ClientTimeout(total=timeout) # type: ignore[arg-type] - else: - real_timeout = timeout - # timeout is cumulative for all request operations - # (request, redirects, responses, data consuming) - tm = TimeoutHandle(self._loop, real_timeout.total) - handle = tm.start() - - if read_bufsize is None: - read_bufsize = self._read_bufsize - - traces = [ - Trace( - self, - trace_config, - trace_config.trace_config_ctx(trace_request_ctx=trace_request_ctx), - ) - for trace_config in self._trace_configs - ] - - for trace in traces: - await trace.send_request_start(method, url.update_query(params), headers) - - timer = tm.timer() - try: - with timer: - while True: - url, auth_from_url = strip_auth_from_url(url) - if auth and auth_from_url: - raise ValueError( - "Cannot combine AUTH argument with " - "credentials encoded in URL" - ) - - if auth is None: - auth = auth_from_url - if auth is None: - auth = self._default_auth - # It would be confusing if we support explicit - # Authorization header with auth argument - if ( - headers is not None - and auth is not None - and hdrs.AUTHORIZATION in headers - ): - raise ValueError( - "Cannot combine AUTHORIZATION header " - "with AUTH argument or credentials " - "encoded in URL" - ) - - all_cookies = self._cookie_jar.filter_cookies(url) - - if cookies is not None: - tmp_cookie_jar = CookieJar() - tmp_cookie_jar.update_cookies(cookies) - req_cookies = tmp_cookie_jar.filter_cookies(url) - if req_cookies: - all_cookies.load(req_cookies) - - if proxy is not None: - proxy = URL(proxy) - elif self._trust_env: - with suppress(LookupError): - proxy, proxy_auth = get_env_proxy_for_url(url) - - req = self._request_class( - method, - url, - params=params, - headers=headers, - skip_auto_headers=skip_headers, - data=data, - cookies=all_cookies, - auth=auth, - version=version, - compress=compress, - chunked=chunked, - expect100=expect100, - loop=self._loop, - response_class=self._response_class, - proxy=proxy, - proxy_auth=proxy_auth, - timer=timer, - session=self, - ssl=ssl, - proxy_headers=proxy_headers, - traces=traces, - ) - - # connection timeout - try: - async with ceil_timeout(real_timeout.connect): - assert self._connector is not 
None - conn = await self._connector.connect( - req, traces=traces, timeout=real_timeout - ) - except asyncio.TimeoutError as exc: - raise ServerTimeoutError( - "Connection timeout " "to host {}".format(url) - ) from exc - - assert conn.transport is not None - - assert conn.protocol is not None - conn.protocol.set_response_params( - timer=timer, - skip_payload=method.upper() == "HEAD", - read_until_eof=read_until_eof, - auto_decompress=self._auto_decompress, - read_timeout=real_timeout.sock_read, - read_bufsize=read_bufsize, - ) - - try: - try: - resp = await req.send(conn) - try: - await resp.start(conn) - except BaseException: - resp.close() - raise - except BaseException: - conn.close() - raise - except ClientError: - raise - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - raise ClientOSError(*exc.args) from exc - - self._cookie_jar.update_cookies(resp.cookies, resp.url) - - # redirects - if resp.status in (301, 302, 303, 307, 308) and allow_redirects: - - for trace in traces: - await trace.send_request_redirect( - method, url.update_query(params), headers, resp - ) - - redirects += 1 - history.append(resp) - if max_redirects and redirects >= max_redirects: - resp.close() - raise TooManyRedirects( - history[0].request_info, tuple(history) - ) - - # For 301 and 302, mimic IE, now changed in RFC - # https://github.com/kennethreitz/requests/pull/269 - if (resp.status == 303 and resp.method != hdrs.METH_HEAD) or ( - resp.status in (301, 302) and resp.method == hdrs.METH_POST - ): - method = hdrs.METH_GET - data = None - if headers.get(hdrs.CONTENT_LENGTH): - headers.pop(hdrs.CONTENT_LENGTH) - - r_url = resp.headers.get(hdrs.LOCATION) or resp.headers.get( - hdrs.URI - ) - if r_url is None: - # see github.com/aio-libs/aiohttp/issues/2022 - break - else: - # reading from correct redirection - # response is forbidden - resp.release() - - try: - parsed_url = URL( - r_url, encoded=not self._requote_redirect_url - ) - - except ValueError as e: - raise InvalidURL(r_url) from e - - scheme = parsed_url.scheme - if scheme not in ("http", "https", ""): - resp.close() - raise ValueError("Can redirect only to http or https") - elif not scheme: - parsed_url = url.join(parsed_url) - - if url.origin() != parsed_url.origin(): - auth = None - headers.pop(hdrs.AUTHORIZATION, None) - - url = parsed_url - params = None - resp.release() - continue - - break - - # check response status - if raise_for_status is None: - raise_for_status = self._raise_for_status - if raise_for_status: - resp.raise_for_status() - - # register connection - if handle is not None: - if resp.connection is not None: - resp.connection.add_callback(handle.cancel) - else: - handle.cancel() - - resp._history = tuple(history) - - for trace in traces: - await trace.send_request_end( - method, url.update_query(params), headers, resp - ) - return resp - - except BaseException as e: - # cleanup timer - tm.close() - if handle: - handle.cancel() - handle = None - - for trace in traces: - await trace.send_request_exception( - method, url.update_query(params), headers, e - ) - raise - - def ws_connect( - self, - url: StrOrURL, - *, - method: str = hdrs.METH_GET, - protocols: Iterable[str] = (), - timeout: float = 10.0, - receive_timeout: Optional[float] = None, - autoclose: bool = True, - autoping: bool = True, - heartbeat: Optional[float] = None, - auth: Optional[BasicAuth] = None, - origin: Optional[str] = None, - params: Optional[Mapping[str, str]] = None, - headers: Optional[LooseHeaders] = None, - 
proxy: Optional[StrOrURL] = None, - proxy_auth: Optional[BasicAuth] = None, - ssl: Union[SSLContext, bool, None, Fingerprint] = None, - verify_ssl: Optional[bool] = None, - fingerprint: Optional[bytes] = None, - ssl_context: Optional[SSLContext] = None, - proxy_headers: Optional[LooseHeaders] = None, - compress: int = 0, - max_msg_size: int = 4 * 1024 * 1024, - ) -> "_WSRequestContextManager": - """Initiate websocket connection.""" - return _WSRequestContextManager( - self._ws_connect( - url, - method=method, - protocols=protocols, - timeout=timeout, - receive_timeout=receive_timeout, - autoclose=autoclose, - autoping=autoping, - heartbeat=heartbeat, - auth=auth, - origin=origin, - params=params, - headers=headers, - proxy=proxy, - proxy_auth=proxy_auth, - ssl=ssl, - verify_ssl=verify_ssl, - fingerprint=fingerprint, - ssl_context=ssl_context, - proxy_headers=proxy_headers, - compress=compress, - max_msg_size=max_msg_size, - ) - ) - - async def _ws_connect( - self, - url: StrOrURL, - *, - method: str = hdrs.METH_GET, - protocols: Iterable[str] = (), - timeout: float = 10.0, - receive_timeout: Optional[float] = None, - autoclose: bool = True, - autoping: bool = True, - heartbeat: Optional[float] = None, - auth: Optional[BasicAuth] = None, - origin: Optional[str] = None, - params: Optional[Mapping[str, str]] = None, - headers: Optional[LooseHeaders] = None, - proxy: Optional[StrOrURL] = None, - proxy_auth: Optional[BasicAuth] = None, - ssl: Union[SSLContext, bool, None, Fingerprint] = None, - verify_ssl: Optional[bool] = None, - fingerprint: Optional[bytes] = None, - ssl_context: Optional[SSLContext] = None, - proxy_headers: Optional[LooseHeaders] = None, - compress: int = 0, - max_msg_size: int = 4 * 1024 * 1024, - ) -> ClientWebSocketResponse: - - if headers is None: - real_headers: CIMultiDict[str] = CIMultiDict() - else: - real_headers = CIMultiDict(headers) - - default_headers = { - hdrs.UPGRADE: "websocket", - hdrs.CONNECTION: "upgrade", - hdrs.SEC_WEBSOCKET_VERSION: "13", - } - - for key, value in default_headers.items(): - real_headers.setdefault(key, value) - - sec_key = base64.b64encode(os.urandom(16)) - real_headers[hdrs.SEC_WEBSOCKET_KEY] = sec_key.decode() - - if protocols: - real_headers[hdrs.SEC_WEBSOCKET_PROTOCOL] = ",".join(protocols) - if origin is not None: - real_headers[hdrs.ORIGIN] = origin - if compress: - extstr = ws_ext_gen(compress=compress) - real_headers[hdrs.SEC_WEBSOCKET_EXTENSIONS] = extstr - - ssl = _merge_ssl_params(ssl, verify_ssl, ssl_context, fingerprint) - - # send request - resp = await self.request( - method, - url, - params=params, - headers=real_headers, - read_until_eof=False, - auth=auth, - proxy=proxy, - proxy_auth=proxy_auth, - ssl=ssl, - proxy_headers=proxy_headers, - ) - - try: - # check handshake - if resp.status != 101: - raise WSServerHandshakeError( - resp.request_info, - resp.history, - message="Invalid response status", - status=resp.status, - headers=resp.headers, - ) - - if resp.headers.get(hdrs.UPGRADE, "").lower() != "websocket": - raise WSServerHandshakeError( - resp.request_info, - resp.history, - message="Invalid upgrade header", - status=resp.status, - headers=resp.headers, - ) - - if resp.headers.get(hdrs.CONNECTION, "").lower() != "upgrade": - raise WSServerHandshakeError( - resp.request_info, - resp.history, - message="Invalid connection header", - status=resp.status, - headers=resp.headers, - ) - - # key calculation - r_key = resp.headers.get(hdrs.SEC_WEBSOCKET_ACCEPT, "") - match = base64.b64encode(hashlib.sha1(sec_key + 
WS_KEY).digest()).decode() - if r_key != match: - raise WSServerHandshakeError( - resp.request_info, - resp.history, - message="Invalid challenge response", - status=resp.status, - headers=resp.headers, - ) - - # websocket protocol - protocol = None - if protocols and hdrs.SEC_WEBSOCKET_PROTOCOL in resp.headers: - resp_protocols = [ - proto.strip() - for proto in resp.headers[hdrs.SEC_WEBSOCKET_PROTOCOL].split(",") - ] - - for proto in resp_protocols: - if proto in protocols: - protocol = proto - break - - # websocket compress - notakeover = False - if compress: - compress_hdrs = resp.headers.get(hdrs.SEC_WEBSOCKET_EXTENSIONS) - if compress_hdrs: - try: - compress, notakeover = ws_ext_parse(compress_hdrs) - except WSHandshakeError as exc: - raise WSServerHandshakeError( - resp.request_info, - resp.history, - message=exc.args[0], - status=resp.status, - headers=resp.headers, - ) from exc - else: - compress = 0 - notakeover = False - - conn = resp.connection - assert conn is not None - conn_proto = conn.protocol - assert conn_proto is not None - transport = conn.transport - assert transport is not None - reader: FlowControlDataQueue[WSMessage] = FlowControlDataQueue( - conn_proto, 2**16, loop=self._loop - ) - conn_proto.set_parser(WebSocketReader(reader, max_msg_size), reader) - writer = WebSocketWriter( - conn_proto, - transport, - use_mask=True, - compress=compress, - notakeover=notakeover, - ) - except BaseException: - resp.close() - raise - else: - return self._ws_response_class( - reader, - writer, - protocol, - resp, - timeout, - autoclose, - autoping, - self._loop, - receive_timeout=receive_timeout, - heartbeat=heartbeat, - compress=compress, - client_notakeover=notakeover, - ) - - def _prepare_headers(self, headers: Optional[LooseHeaders]) -> "CIMultiDict[str]": - """Add default headers and transform it to CIMultiDict""" - # Convert headers to MultiDict - result = CIMultiDict(self._default_headers) - if headers: - if not isinstance(headers, (MultiDictProxy, MultiDict)): - headers = CIMultiDict(headers) - added_names: Set[str] = set() - for key, value in headers.items(): - if key in added_names: - result.add(key, value) - else: - result[key] = value - added_names.add(key) - return result - - def get( - self, url: StrOrURL, *, allow_redirects: bool = True, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP GET request.""" - return _RequestContextManager( - self._request(hdrs.METH_GET, url, allow_redirects=allow_redirects, **kwargs) - ) - - def options( - self, url: StrOrURL, *, allow_redirects: bool = True, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP OPTIONS request.""" - return _RequestContextManager( - self._request( - hdrs.METH_OPTIONS, url, allow_redirects=allow_redirects, **kwargs - ) - ) - - def head( - self, url: StrOrURL, *, allow_redirects: bool = False, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP HEAD request.""" - return _RequestContextManager( - self._request( - hdrs.METH_HEAD, url, allow_redirects=allow_redirects, **kwargs - ) - ) - - def post( - self, url: StrOrURL, *, data: Any = None, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP POST request.""" - return _RequestContextManager( - self._request(hdrs.METH_POST, url, data=data, **kwargs) - ) - - def put( - self, url: StrOrURL, *, data: Any = None, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP PUT request.""" - return _RequestContextManager( - self._request(hdrs.METH_PUT, url, data=data, **kwargs) - ) - - def patch( - self, url: 
StrOrURL, *, data: Any = None, **kwargs: Any - ) -> "_RequestContextManager": - """Perform HTTP PATCH request.""" - return _RequestContextManager( - self._request(hdrs.METH_PATCH, url, data=data, **kwargs) - ) - - def delete(self, url: StrOrURL, **kwargs: Any) -> "_RequestContextManager": - """Perform HTTP DELETE request.""" - return _RequestContextManager(self._request(hdrs.METH_DELETE, url, **kwargs)) - - async def close(self) -> None: - """Close underlying connector. - - Release all acquired resources. - """ - if not self.closed: - if self._connector is not None and self._connector_owner: - await self._connector.close() - self._connector = None - - @property - def closed(self) -> bool: - """Is client session closed. - - A readonly property. - """ - return self._connector is None or self._connector.closed - - @property - def connector(self) -> Optional[BaseConnector]: - """Connector instance used for the session.""" - return self._connector - - @property - def cookie_jar(self) -> AbstractCookieJar: - """The session cookies.""" - return self._cookie_jar - - @property - def version(self) -> Tuple[int, int]: - """The session HTTP protocol version.""" - return self._version - - @property - def requote_redirect_url(self) -> bool: - """Do URL requoting on redirection handling.""" - return self._requote_redirect_url - - @requote_redirect_url.setter - def requote_redirect_url(self, val: bool) -> None: - """Do URL requoting on redirection handling.""" - warnings.warn( - "session.requote_redirect_url modification " "is deprecated #2778", - DeprecationWarning, - stacklevel=2, - ) - self._requote_redirect_url = val - - @property - def loop(self) -> asyncio.AbstractEventLoop: - """Session's loop.""" - warnings.warn( - "client.loop property is deprecated", DeprecationWarning, stacklevel=2 - ) - return self._loop - - @property - def timeout(self) -> ClientTimeout: - """Timeout for the session.""" - return self._timeout - - @property - def headers(self) -> "CIMultiDict[str]": - """The default headers of the client session.""" - return self._default_headers - - @property - def skip_auto_headers(self) -> FrozenSet[istr]: - """Headers for which autogeneration should be skipped""" - return self._skip_auto_headers - - @property - def auth(self) -> Optional[BasicAuth]: - """An object that represents HTTP Basic Authorization""" - return self._default_auth - - @property - def json_serialize(self) -> JSONEncoder: - """Json serializer callable""" - return self._json_serialize - - @property - def connector_owner(self) -> bool: - """Should connector be closed on session closing""" - return self._connector_owner - - @property - def raise_for_status( - self, - ) -> Union[bool, Callable[[ClientResponse], Awaitable[None]]]: - """Should `ClientResponse.raise_for_status()` be called for each response.""" - return self._raise_for_status - - @property - def auto_decompress(self) -> bool: - """Should the body response be automatically decompressed.""" - return self._auto_decompress - - @property - def trust_env(self) -> bool: - """ - Should proxies information from environment or netrc be trusted. - - Information is from HTTP_PROXY / HTTPS_PROXY environment variables - or ~/.netrc file if present. - """ - return self._trust_env - - @property - def trace_configs(self) -> List[TraceConfig]: - """A list of TraceConfig instances used for client tracing""" - return self._trace_configs - - def detach(self) -> None: - """Detach connector from session without closing the former. - - Session is switched to closed state anyway. 
- """ - self._connector = None - - def __enter__(self) -> None: - raise TypeError("Use async with instead") - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - # __exit__ should exist in pair with __enter__ but never executed - pass # pragma: no cover - - async def __aenter__(self) -> "ClientSession": - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - await self.close() - - -class _BaseRequestContextManager(Coroutine[Any, Any, _RetType], Generic[_RetType]): - - __slots__ = ("_coro", "_resp") - - def __init__(self, coro: Coroutine["asyncio.Future[Any]", None, _RetType]) -> None: - self._coro = coro - - def send(self, arg: None) -> "asyncio.Future[Any]": - return self._coro.send(arg) - - def throw(self, arg: BaseException) -> None: # type: ignore[arg-type,override] - self._coro.throw(arg) - - def close(self) -> None: - return self._coro.close() - - def __await__(self) -> Generator[Any, None, _RetType]: - ret = self._coro.__await__() - return ret - - def __iter__(self) -> Generator[Any, None, _RetType]: - return self.__await__() - - async def __aenter__(self) -> _RetType: - self._resp = await self._coro - return self._resp - - -class _RequestContextManager(_BaseRequestContextManager[ClientResponse]): - __slots__ = () - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - # We're basing behavior on the exception as it can be caused by - # user code unrelated to the status of the connection. If you - # would like to close a connection you must do that - # explicitly. Otherwise connection error handling should kick in - # and close/recycle the connection as required. 
- self._resp.release() - - -class _WSRequestContextManager(_BaseRequestContextManager[ClientWebSocketResponse]): - __slots__ = () - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - await self._resp.close() - - -class _SessionRequestContextManager: - - __slots__ = ("_coro", "_resp", "_session") - - def __init__( - self, - coro: Coroutine["asyncio.Future[Any]", None, ClientResponse], - session: ClientSession, - ) -> None: - self._coro = coro - self._resp: Optional[ClientResponse] = None - self._session = session - - async def __aenter__(self) -> ClientResponse: - try: - self._resp = await self._coro - except BaseException: - await self._session.close() - raise - else: - return self._resp - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - assert self._resp is not None - self._resp.close() - await self._session.close() - - -def request( - method: str, - url: StrOrURL, - *, - params: Optional[Mapping[str, str]] = None, - data: Any = None, - json: Any = None, - headers: Optional[LooseHeaders] = None, - skip_auto_headers: Optional[Iterable[str]] = None, - auth: Optional[BasicAuth] = None, - allow_redirects: bool = True, - max_redirects: int = 10, - compress: Optional[str] = None, - chunked: Optional[bool] = None, - expect100: bool = False, - raise_for_status: Optional[bool] = None, - read_until_eof: bool = True, - proxy: Optional[StrOrURL] = None, - proxy_auth: Optional[BasicAuth] = None, - timeout: Union[ClientTimeout, object] = sentinel, - cookies: Optional[LooseCookies] = None, - version: HttpVersion = http.HttpVersion11, - connector: Optional[BaseConnector] = None, - read_bufsize: Optional[int] = None, - loop: Optional[asyncio.AbstractEventLoop] = None, -) -> _SessionRequestContextManager: - """Constructs and sends a request. - - Returns response object. - method - HTTP method - url - request url - params - (optional) Dictionary or bytes to be sent in the query - string of the new request - data - (optional) Dictionary, bytes, or file-like object to - send in the body of the request - json - (optional) Any json compatible python object - headers - (optional) Dictionary of HTTP Headers to send with - the request - cookies - (optional) Dict object to send with the request - auth - (optional) BasicAuth named tuple represent HTTP Basic Auth - auth - aiohttp.helpers.BasicAuth - allow_redirects - (optional) If set to False, do not follow - redirects - version - Request HTTP version. - compress - Set to True if request has to be compressed - with deflate encoding. - chunked - Set to chunk size for chunked transfer encoding. - expect100 - Expect 100-continue response from server. - connector - BaseConnector sub-class instance to support - connection pooling. - read_until_eof - Read response until eof if response - does not have Content-Length header. - loop - Optional event loop. - timeout - Optional ClientTimeout settings structure, 5min - total timeout by default. 
- Usage:: - >>> import aiohttp - >>> resp = await aiohttp.request('GET', 'http://python.org/') - >>> resp - - >>> data = await resp.read() - """ - connector_owner = False - if connector is None: - connector_owner = True - connector = TCPConnector(loop=loop, force_close=True) - - session = ClientSession( - loop=loop, - cookies=cookies, - version=version, - timeout=timeout, - connector=connector, - connector_owner=connector_owner, - ) - - return _SessionRequestContextManager( - session._request( - method, - url, - params=params, - data=data, - json=json, - headers=headers, - skip_auto_headers=skip_auto_headers, - auth=auth, - allow_redirects=allow_redirects, - max_redirects=max_redirects, - compress=compress, - chunked=chunked, - expect100=expect100, - raise_for_status=raise_for_status, - read_until_eof=read_until_eof, - proxy=proxy, - proxy_auth=proxy_auth, - read_bufsize=read_bufsize, - ), - session, - ) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/runtime.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/runtime.py deleted file mode 100644 index 985842b284270bcd52855029f13d3da19d718349..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/runtime.py +++ /dev/null @@ -1,1053 +0,0 @@ -"""The runtime functions and state used by compiled templates.""" -import functools -import sys -import typing as t -from collections import abc -from itertools import chain - -from markupsafe import escape # noqa: F401 -from markupsafe import Markup -from markupsafe import soft_str - -from .async_utils import auto_aiter -from .async_utils import auto_await # noqa: F401 -from .exceptions import TemplateNotFound # noqa: F401 -from .exceptions import TemplateRuntimeError # noqa: F401 -from .exceptions import UndefinedError -from .nodes import EvalContext -from .utils import _PassArg -from .utils import concat -from .utils import internalcode -from .utils import missing -from .utils import Namespace # noqa: F401 -from .utils import object_type_repr -from .utils import pass_eval_context - -V = t.TypeVar("V") -F = t.TypeVar("F", bound=t.Callable[..., t.Any]) - -if t.TYPE_CHECKING: - import logging - import typing_extensions as te - from .environment import Environment - - class LoopRenderFunc(te.Protocol): - def __call__( - self, - reciter: t.Iterable[V], - loop_render_func: "LoopRenderFunc", - depth: int = 0, - ) -> str: - ... - - -# these variables are exported to the template runtime -exported = [ - "LoopContext", - "TemplateReference", - "Macro", - "Markup", - "TemplateRuntimeError", - "missing", - "escape", - "markup_join", - "str_join", - "identity", - "TemplateNotFound", - "Namespace", - "Undefined", - "internalcode", -] -async_exported = [ - "AsyncLoopContext", - "auto_aiter", - "auto_await", -] - - -def identity(x: V) -> V: - """Returns its argument. Useful for certain things in the - environment. 
- """ - return x - - -def markup_join(seq: t.Iterable[t.Any]) -> str: - """Concatenation that escapes if necessary and converts to string.""" - buf = [] - iterator = map(soft_str, seq) - for arg in iterator: - buf.append(arg) - if hasattr(arg, "__html__"): - return Markup("").join(chain(buf, iterator)) - return concat(buf) - - -def str_join(seq: t.Iterable[t.Any]) -> str: - """Simple args to string conversion and concatenation.""" - return concat(map(str, seq)) - - -def new_context( - environment: "Environment", - template_name: t.Optional[str], - blocks: t.Dict[str, t.Callable[["Context"], t.Iterator[str]]], - vars: t.Optional[t.Dict[str, t.Any]] = None, - shared: bool = False, - globals: t.Optional[t.MutableMapping[str, t.Any]] = None, - locals: t.Optional[t.Mapping[str, t.Any]] = None, -) -> "Context": - """Internal helper for context creation.""" - if vars is None: - vars = {} - if shared: - parent = vars - else: - parent = dict(globals or (), **vars) - if locals: - # if the parent is shared a copy should be created because - # we don't want to modify the dict passed - if shared: - parent = dict(parent) - for key, value in locals.items(): - if value is not missing: - parent[key] = value - return environment.context_class( - environment, parent, template_name, blocks, globals=globals - ) - - -class TemplateReference: - """The `self` in templates.""" - - def __init__(self, context: "Context") -> None: - self.__context = context - - def __getitem__(self, name: str) -> t.Any: - blocks = self.__context.blocks[name] - return BlockReference(name, self.__context, blocks, 0) - - def __repr__(self) -> str: - return f"<{type(self).__name__} {self.__context.name!r}>" - - -def _dict_method_all(dict_method: F) -> F: - @functools.wraps(dict_method) - def f_all(self: "Context") -> t.Any: - return dict_method(self.get_all()) - - return t.cast(F, f_all) - - -@abc.Mapping.register -class Context: - """The template context holds the variables of a template. It stores the - values passed to the template and also the names the template exports. - Creating instances is neither supported nor useful as it's created - automatically at various stages of the template evaluation and should not - be created by hand. - - The context is immutable. Modifications on :attr:`parent` **must not** - happen and modifications on :attr:`vars` are allowed from generated - template code only. Template filters and global functions marked as - :func:`pass_context` get the active context passed as first argument - and are allowed to access the context read-only. - - The template context supports read only dict operations (`get`, - `keys`, `values`, `items`, `iterkeys`, `itervalues`, `iteritems`, - `__getitem__`, `__contains__`). Additionally there is a :meth:`resolve` - method that doesn't fail with a `KeyError` but returns an - :class:`Undefined` object for missing variables. - """ - - def __init__( - self, - environment: "Environment", - parent: t.Dict[str, t.Any], - name: t.Optional[str], - blocks: t.Dict[str, t.Callable[["Context"], t.Iterator[str]]], - globals: t.Optional[t.MutableMapping[str, t.Any]] = None, - ): - self.parent = parent - self.vars: t.Dict[str, t.Any] = {} - self.environment: "Environment" = environment - self.eval_ctx = EvalContext(self.environment, name) - self.exported_vars: t.Set[str] = set() - self.name = name - self.globals_keys = set() if globals is None else set(globals) - - # create the initial mapping of blocks. 
Whenever template inheritance - # takes place the runtime will update this mapping with the new blocks - # from the template. - self.blocks = {k: [v] for k, v in blocks.items()} - - def super( - self, name: str, current: t.Callable[["Context"], t.Iterator[str]] - ) -> t.Union["BlockReference", "Undefined"]: - """Render a parent block.""" - try: - blocks = self.blocks[name] - index = blocks.index(current) + 1 - blocks[index] - except LookupError: - return self.environment.undefined( - f"there is no parent block called {name!r}.", name="super" - ) - return BlockReference(name, self, blocks, index) - - def get(self, key: str, default: t.Any = None) -> t.Any: - """Look up a variable by name, or return a default if the key is - not found. - - :param key: The variable name to look up. - :param default: The value to return if the key is not found. - """ - try: - return self[key] - except KeyError: - return default - - def resolve(self, key: str) -> t.Union[t.Any, "Undefined"]: - """Look up a variable by name, or return an :class:`Undefined` - object if the key is not found. - - If you need to add custom behavior, override - :meth:`resolve_or_missing`, not this method. The various lookup - functions use that method, not this one. - - :param key: The variable name to look up. - """ - rv = self.resolve_or_missing(key) - - if rv is missing: - return self.environment.undefined(name=key) - - return rv - - def resolve_or_missing(self, key: str) -> t.Any: - """Look up a variable by name, or return a ``missing`` sentinel - if the key is not found. - - Override this method to add custom lookup behavior. - :meth:`resolve`, :meth:`get`, and :meth:`__getitem__` use this - method. Don't call this method directly. - - :param key: The variable name to look up. - """ - if key in self.vars: - return self.vars[key] - - if key in self.parent: - return self.parent[key] - - return missing - - def get_exported(self) -> t.Dict[str, t.Any]: - """Get a new dict with the exported variables.""" - return {k: self.vars[k] for k in self.exported_vars} - - def get_all(self) -> t.Dict[str, t.Any]: - """Return the complete context as dict including the exported - variables. For optimizations reasons this might not return an - actual copy so be careful with using it. - """ - if not self.vars: - return self.parent - if not self.parent: - return self.vars - return dict(self.parent, **self.vars) - - @internalcode - def call( - __self, __obj: t.Callable, *args: t.Any, **kwargs: t.Any # noqa: B902 - ) -> t.Union[t.Any, "Undefined"]: - """Call the callable with the arguments and keyword arguments - provided but inject the active context or environment as first - argument if the callable has :func:`pass_context` or - :func:`pass_environment`. 
- """ - if __debug__: - __traceback_hide__ = True # noqa - - # Allow callable classes to take a context - if ( - hasattr(__obj, "__call__") # noqa: B004 - and _PassArg.from_obj(__obj.__call__) is not None # type: ignore - ): - __obj = __obj.__call__ # type: ignore - - pass_arg = _PassArg.from_obj(__obj) - - if pass_arg is _PassArg.context: - # the active context should have access to variables set in - # loops and blocks without mutating the context itself - if kwargs.get("_loop_vars"): - __self = __self.derived(kwargs["_loop_vars"]) - if kwargs.get("_block_vars"): - __self = __self.derived(kwargs["_block_vars"]) - args = (__self,) + args - elif pass_arg is _PassArg.eval_context: - args = (__self.eval_ctx,) + args - elif pass_arg is _PassArg.environment: - args = (__self.environment,) + args - - kwargs.pop("_block_vars", None) - kwargs.pop("_loop_vars", None) - - try: - return __obj(*args, **kwargs) - except StopIteration: - return __self.environment.undefined( - "value was undefined because a callable raised a" - " StopIteration exception" - ) - - def derived(self, locals: t.Optional[t.Dict[str, t.Any]] = None) -> "Context": - """Internal helper function to create a derived context. This is - used in situations where the system needs a new context in the same - template that is independent. - """ - context = new_context( - self.environment, self.name, {}, self.get_all(), True, None, locals - ) - context.eval_ctx = self.eval_ctx - context.blocks.update((k, list(v)) for k, v in self.blocks.items()) - return context - - keys = _dict_method_all(dict.keys) - values = _dict_method_all(dict.values) - items = _dict_method_all(dict.items) - - def __contains__(self, name: str) -> bool: - return name in self.vars or name in self.parent - - def __getitem__(self, key: str) -> t.Any: - """Look up a variable by name with ``[]`` syntax, or raise a - ``KeyError`` if the key is not found. - """ - item = self.resolve_or_missing(key) - - if item is missing: - raise KeyError(key) - - return item - - def __repr__(self) -> str: - return f"<{type(self).__name__} {self.get_all()!r} of {self.name!r}>" - - -class BlockReference: - """One block on a template reference.""" - - def __init__( - self, - name: str, - context: "Context", - stack: t.List[t.Callable[["Context"], t.Iterator[str]]], - depth: int, - ) -> None: - self.name = name - self._context = context - self._stack = stack - self._depth = depth - - @property - def super(self) -> t.Union["BlockReference", "Undefined"]: - """Super the block.""" - if self._depth + 1 >= len(self._stack): - return self._context.environment.undefined( - f"there is no parent block called {self.name!r}.", name="super" - ) - return BlockReference(self.name, self._context, self._stack, self._depth + 1) - - @internalcode - async def _async_call(self) -> str: - rv = concat( - [x async for x in self._stack[self._depth](self._context)] # type: ignore - ) - - if self._context.eval_ctx.autoescape: - return Markup(rv) - - return rv - - @internalcode - def __call__(self) -> str: - if self._context.environment.is_async: - return self._async_call() # type: ignore - - rv = concat(self._stack[self._depth](self._context)) - - if self._context.eval_ctx.autoescape: - return Markup(rv) - - return rv - - -class LoopContext: - """A wrapper iterable for dynamic ``for`` loops, with information - about the loop and iteration. - """ - - #: Current iteration of the loop, starting at 0. 
- index0 = -1 - - _length: t.Optional[int] = None - _after: t.Any = missing - _current: t.Any = missing - _before: t.Any = missing - _last_changed_value: t.Any = missing - - def __init__( - self, - iterable: t.Iterable[V], - undefined: t.Type["Undefined"], - recurse: t.Optional["LoopRenderFunc"] = None, - depth0: int = 0, - ) -> None: - """ - :param iterable: Iterable to wrap. - :param undefined: :class:`Undefined` class to use for next and - previous items. - :param recurse: The function to render the loop body when the - loop is marked recursive. - :param depth0: Incremented when looping recursively. - """ - self._iterable = iterable - self._iterator = self._to_iterator(iterable) - self._undefined = undefined - self._recurse = recurse - #: How many levels deep a recursive loop currently is, starting at 0. - self.depth0 = depth0 - - @staticmethod - def _to_iterator(iterable: t.Iterable[V]) -> t.Iterator[V]: - return iter(iterable) - - @property - def length(self) -> int: - """Length of the iterable. - - If the iterable is a generator or otherwise does not have a - size, it is eagerly evaluated to get a size. - """ - if self._length is not None: - return self._length - - try: - self._length = len(self._iterable) # type: ignore - except TypeError: - iterable = list(self._iterator) - self._iterator = self._to_iterator(iterable) - self._length = len(iterable) + self.index + (self._after is not missing) - - return self._length - - def __len__(self) -> int: - return self.length - - @property - def depth(self) -> int: - """How many levels deep a recursive loop currently is, starting at 1.""" - return self.depth0 + 1 - - @property - def index(self) -> int: - """Current iteration of the loop, starting at 1.""" - return self.index0 + 1 - - @property - def revindex0(self) -> int: - """Number of iterations from the end of the loop, ending at 0. - - Requires calculating :attr:`length`. - """ - return self.length - self.index - - @property - def revindex(self) -> int: - """Number of iterations from the end of the loop, ending at 1. - - Requires calculating :attr:`length`. - """ - return self.length - self.index0 - - @property - def first(self) -> bool: - """Whether this is the first iteration of the loop.""" - return self.index0 == 0 - - def _peek_next(self) -> t.Any: - """Return the next element in the iterable, or :data:`missing` - if the iterable is exhausted. Only peeks one item ahead, caching - the result in :attr:`_last` for use in subsequent checks. The - cache is reset when :meth:`__next__` is called. - """ - if self._after is not missing: - return self._after - - self._after = next(self._iterator, missing) - return self._after - - @property - def last(self) -> bool: - """Whether this is the last iteration of the loop. - - Causes the iterable to advance early. See - :func:`itertools.groupby` for issues this can cause. - The :func:`groupby` filter avoids that issue. - """ - return self._peek_next() is missing - - @property - def previtem(self) -> t.Union[t.Any, "Undefined"]: - """The item in the previous iteration. Undefined during the - first iteration. - """ - if self.first: - return self._undefined("there is no previous item") - - return self._before - - @property - def nextitem(self) -> t.Union[t.Any, "Undefined"]: - """The item in the next iteration. Undefined during the last - iteration. - - Causes the iterable to advance early. See - :func:`itertools.groupby` for issues this can cause. - The :func:`jinja-filters.groupby` filter avoids that issue. 
- """ - rv = self._peek_next() - - if rv is missing: - return self._undefined("there is no next item") - - return rv - - def cycle(self, *args: V) -> V: - """Return a value from the given args, cycling through based on - the current :attr:`index0`. - - :param args: One or more values to cycle through. - """ - if not args: - raise TypeError("no items for cycling given") - - return args[self.index0 % len(args)] - - def changed(self, *value: t.Any) -> bool: - """Return ``True`` if previously called with a different value - (including when called for the first time). - - :param value: One or more values to compare to the last call. - """ - if self._last_changed_value != value: - self._last_changed_value = value - return True - - return False - - def __iter__(self) -> "LoopContext": - return self - - def __next__(self) -> t.Tuple[t.Any, "LoopContext"]: - if self._after is not missing: - rv = self._after - self._after = missing - else: - rv = next(self._iterator) - - self.index0 += 1 - self._before = self._current - self._current = rv - return rv, self - - @internalcode - def __call__(self, iterable: t.Iterable[V]) -> str: - """When iterating over nested data, render the body of the loop - recursively with the given inner iterable data. - - The loop must have the ``recursive`` marker for this to work. - """ - if self._recurse is None: - raise TypeError( - "The loop must have the 'recursive' marker to be called recursively." - ) - - return self._recurse(iterable, self._recurse, depth=self.depth) - - def __repr__(self) -> str: - return f"<{type(self).__name__} {self.index}/{self.length}>" - - -class AsyncLoopContext(LoopContext): - _iterator: t.AsyncIterator[t.Any] # type: ignore - - @staticmethod - def _to_iterator( # type: ignore - iterable: t.Union[t.Iterable[V], t.AsyncIterable[V]] - ) -> t.AsyncIterator[V]: - return auto_aiter(iterable) - - @property - async def length(self) -> int: # type: ignore - if self._length is not None: - return self._length - - try: - self._length = len(self._iterable) # type: ignore - except TypeError: - iterable = [x async for x in self._iterator] - self._iterator = self._to_iterator(iterable) - self._length = len(iterable) + self.index + (self._after is not missing) - - return self._length - - @property - async def revindex0(self) -> int: # type: ignore - return await self.length - self.index - - @property - async def revindex(self) -> int: # type: ignore - return await self.length - self.index0 - - async def _peek_next(self) -> t.Any: - if self._after is not missing: - return self._after - - try: - self._after = await self._iterator.__anext__() - except StopAsyncIteration: - self._after = missing - - return self._after - - @property - async def last(self) -> bool: # type: ignore - return await self._peek_next() is missing - - @property - async def nextitem(self) -> t.Union[t.Any, "Undefined"]: - rv = await self._peek_next() - - if rv is missing: - return self._undefined("there is no next item") - - return rv - - def __aiter__(self) -> "AsyncLoopContext": - return self - - async def __anext__(self) -> t.Tuple[t.Any, "AsyncLoopContext"]: - if self._after is not missing: - rv = self._after - self._after = missing - else: - rv = await self._iterator.__anext__() - - self.index0 += 1 - self._before = self._current - self._current = rv - return rv, self - - -class Macro: - """Wraps a macro function.""" - - def __init__( - self, - environment: "Environment", - func: t.Callable[..., str], - name: str, - arguments: t.List[str], - catch_kwargs: bool, - catch_varargs: bool, 
- caller: bool, - default_autoescape: t.Optional[bool] = None, - ): - self._environment = environment - self._func = func - self._argument_count = len(arguments) - self.name = name - self.arguments = arguments - self.catch_kwargs = catch_kwargs - self.catch_varargs = catch_varargs - self.caller = caller - self.explicit_caller = "caller" in arguments - - if default_autoescape is None: - if callable(environment.autoescape): - default_autoescape = environment.autoescape(None) - else: - default_autoescape = environment.autoescape - - self._default_autoescape = default_autoescape - - @internalcode - @pass_eval_context - def __call__(self, *args: t.Any, **kwargs: t.Any) -> str: - # This requires a bit of explanation, In the past we used to - # decide largely based on compile-time information if a macro is - # safe or unsafe. While there was a volatile mode it was largely - # unused for deciding on escaping. This turns out to be - # problematic for macros because whether a macro is safe depends not - # on the escape mode when it was defined, but rather when it was used. - # - # Because however we export macros from the module system and - # there are historic callers that do not pass an eval context (and - # will continue to not pass one), we need to perform an instance - # check here. - # - # This is considered safe because an eval context is not a valid - # argument to callables otherwise anyway. Worst case here is - # that if no eval context is passed we fall back to the compile - # time autoescape flag. - if args and isinstance(args[0], EvalContext): - autoescape = args[0].autoescape - args = args[1:] - else: - autoescape = self._default_autoescape - - # try to consume the positional arguments - arguments = list(args[: self._argument_count]) - off = len(arguments) - - # For information why this is necessary refer to the handling - # of caller in the `macro_body` handler in the compiler. - found_caller = False - - # if the number of arguments consumed is not the number of - # arguments expected we start filling in keyword arguments - # and defaults. - if off != self._argument_count: - for name in self.arguments[len(arguments) :]: - try: - value = kwargs.pop(name) - except KeyError: - value = missing - if name == "caller": - found_caller = True - arguments.append(value) - else: - found_caller = self.explicit_caller - - # it's important that the order of these arguments does not change - # if not also changed in the compiler's `function_scoping` method. - # the order is caller, keyword arguments, positional arguments! - if self.caller and not found_caller: - caller = kwargs.pop("caller", None) - if caller is None: - caller = self._environment.undefined("No caller defined", name="caller") - arguments.append(caller) - - if self.catch_kwargs: - arguments.append(kwargs) - elif kwargs: - if "caller" in kwargs: - raise TypeError( - f"macro {self.name!r} was invoked with two values for the special" - " caller argument. This is most likely a bug." 
- ) - raise TypeError( - f"macro {self.name!r} takes no keyword argument {next(iter(kwargs))!r}" - ) - if self.catch_varargs: - arguments.append(args[self._argument_count :]) - elif len(args) > self._argument_count: - raise TypeError( - f"macro {self.name!r} takes not more than" - f" {len(self.arguments)} argument(s)" - ) - - return self._invoke(arguments, autoescape) - - async def _async_invoke(self, arguments: t.List[t.Any], autoescape: bool) -> str: - rv = await self._func(*arguments) # type: ignore - - if autoescape: - return Markup(rv) - - return rv # type: ignore - - def _invoke(self, arguments: t.List[t.Any], autoescape: bool) -> str: - if self._environment.is_async: - return self._async_invoke(arguments, autoescape) # type: ignore - - rv = self._func(*arguments) - - if autoescape: - rv = Markup(rv) - - return rv - - def __repr__(self) -> str: - name = "anonymous" if self.name is None else repr(self.name) - return f"<{type(self).__name__} {name}>" - - -class Undefined: - """The default undefined type. This undefined type can be printed and - iterated over, but every other access will raise an :exc:`UndefinedError`: - - >>> foo = Undefined(name='foo') - >>> str(foo) - '' - >>> not foo - True - >>> foo + 42 - Traceback (most recent call last): - ... - jinja2.exceptions.UndefinedError: 'foo' is undefined - """ - - __slots__ = ( - "_undefined_hint", - "_undefined_obj", - "_undefined_name", - "_undefined_exception", - ) - - def __init__( - self, - hint: t.Optional[str] = None, - obj: t.Any = missing, - name: t.Optional[str] = None, - exc: t.Type[TemplateRuntimeError] = UndefinedError, - ) -> None: - self._undefined_hint = hint - self._undefined_obj = obj - self._undefined_name = name - self._undefined_exception = exc - - @property - def _undefined_message(self) -> str: - """Build a message about the undefined value based on how it was - accessed. - """ - if self._undefined_hint: - return self._undefined_hint - - if self._undefined_obj is missing: - return f"{self._undefined_name!r} is undefined" - - if not isinstance(self._undefined_name, str): - return ( - f"{object_type_repr(self._undefined_obj)} has no" - f" element {self._undefined_name!r}" - ) - - return ( - f"{object_type_repr(self._undefined_obj)!r} has no" - f" attribute {self._undefined_name!r}" - ) - - @internalcode - def _fail_with_undefined_error( - self, *args: t.Any, **kwargs: t.Any - ) -> "te.NoReturn": - """Raise an :exc:`UndefinedError` when operations are performed - on the undefined value. 
- """ - raise self._undefined_exception(self._undefined_message) - - @internalcode - def __getattr__(self, name: str) -> t.Any: - if name[:2] == "__": - raise AttributeError(name) - - return self._fail_with_undefined_error() - - __add__ = __radd__ = __sub__ = __rsub__ = _fail_with_undefined_error - __mul__ = __rmul__ = __div__ = __rdiv__ = _fail_with_undefined_error - __truediv__ = __rtruediv__ = _fail_with_undefined_error - __floordiv__ = __rfloordiv__ = _fail_with_undefined_error - __mod__ = __rmod__ = _fail_with_undefined_error - __pos__ = __neg__ = _fail_with_undefined_error - __call__ = __getitem__ = _fail_with_undefined_error - __lt__ = __le__ = __gt__ = __ge__ = _fail_with_undefined_error - __int__ = __float__ = __complex__ = _fail_with_undefined_error - __pow__ = __rpow__ = _fail_with_undefined_error - - def __eq__(self, other: t.Any) -> bool: - return type(self) is type(other) - - def __ne__(self, other: t.Any) -> bool: - return not self.__eq__(other) - - def __hash__(self) -> int: - return id(type(self)) - - def __str__(self) -> str: - return "" - - def __len__(self) -> int: - return 0 - - def __iter__(self) -> t.Iterator[t.Any]: - yield from () - - async def __aiter__(self) -> t.AsyncIterator[t.Any]: - for _ in (): - yield - - def __bool__(self) -> bool: - return False - - def __repr__(self) -> str: - return "Undefined" - - -def make_logging_undefined( - logger: t.Optional["logging.Logger"] = None, base: t.Type[Undefined] = Undefined -) -> t.Type[Undefined]: - """Given a logger object this returns a new undefined class that will - log certain failures. It will log iterations and printing. If no - logger is given a default logger is created. - - Example:: - - logger = logging.getLogger(__name__) - LoggingUndefined = make_logging_undefined( - logger=logger, - base=Undefined - ) - - .. versionadded:: 2.8 - - :param logger: the logger to use. If not provided, a default logger - is created. - :param base: the base class to add logging functionality to. This - defaults to :class:`Undefined`. - """ - if logger is None: - import logging - - logger = logging.getLogger(__name__) - logger.addHandler(logging.StreamHandler(sys.stderr)) - - def _log_message(undef: Undefined) -> None: - logger.warning( # type: ignore - "Template variable warning: %s", undef._undefined_message - ) - - class LoggingUndefined(base): # type: ignore - __slots__ = () - - def _fail_with_undefined_error( # type: ignore - self, *args: t.Any, **kwargs: t.Any - ) -> "te.NoReturn": - try: - super()._fail_with_undefined_error(*args, **kwargs) - except self._undefined_exception as e: - logger.error("Template variable error: %s", e) # type: ignore - raise e - - def __str__(self) -> str: - _log_message(self) - return super().__str__() # type: ignore - - def __iter__(self) -> t.Iterator[t.Any]: - _log_message(self) - return super().__iter__() # type: ignore - - def __bool__(self) -> bool: - _log_message(self) - return super().__bool__() # type: ignore - - return LoggingUndefined - - -class ChainableUndefined(Undefined): - """An undefined that is chainable, where both ``__getattr__`` and - ``__getitem__`` return itself rather than raising an - :exc:`UndefinedError`. - - >>> foo = ChainableUndefined(name='foo') - >>> str(foo.bar['baz']) - '' - >>> foo.bar['baz'] + 42 - Traceback (most recent call last): - ... - jinja2.exceptions.UndefinedError: 'foo' is undefined - - .. 
versionadded:: 2.11.0 - """ - - __slots__ = () - - def __html__(self) -> str: - return str(self) - - def __getattr__(self, _: str) -> "ChainableUndefined": - return self - - __getitem__ = __getattr__ # type: ignore - - -class DebugUndefined(Undefined): - """An undefined that returns the debug info when printed. - - >>> foo = DebugUndefined(name='foo') - >>> str(foo) - '{{ foo }}' - >>> not foo - True - >>> foo + 42 - Traceback (most recent call last): - ... - jinja2.exceptions.UndefinedError: 'foo' is undefined - """ - - __slots__ = () - - def __str__(self) -> str: - if self._undefined_hint: - message = f"undefined value printed: {self._undefined_hint}" - - elif self._undefined_obj is missing: - message = self._undefined_name # type: ignore - - else: - message = ( - f"no such element: {object_type_repr(self._undefined_obj)}" - f"[{self._undefined_name!r}]" - ) - - return f"{{{{ {message} }}}}" - - -class StrictUndefined(Undefined): - """An undefined that barks on print and iteration as well as boolean - tests and all kinds of comparisons. In other words: you can do nothing - with it except checking if it's defined using the `defined` test. - - >>> foo = StrictUndefined(name='foo') - >>> str(foo) - Traceback (most recent call last): - ... - jinja2.exceptions.UndefinedError: 'foo' is undefined - >>> not foo - Traceback (most recent call last): - ... - jinja2.exceptions.UndefinedError: 'foo' is undefined - >>> foo + 42 - Traceback (most recent call last): - ... - jinja2.exceptions.UndefinedError: 'foo' is undefined - """ - - __slots__ = () - __iter__ = __str__ = __len__ = Undefined._fail_with_undefined_error - __eq__ = __ne__ = __bool__ = __hash__ = Undefined._fail_with_undefined_error - __contains__ = Undefined._fail_with_undefined_error - - -# Remove slots attributes, after the metaclass is applied they are -# unneeded and contain wrong data for subclasses. 
-del ( - Undefined.__slots__, - ChainableUndefined.__slots__, - DebugUndefined.__slots__, - StrictUndefined.__slots__, -) diff --git a/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/extract_ckpt.py b/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/extract_ckpt.py deleted file mode 100644 index 4b8b631348f2d0cdea4e5a3594bb59f3e8f34a0f..0000000000000000000000000000000000000000 --- a/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/extract_ckpt.py +++ /dev/null @@ -1,5 +0,0 @@ -import torch -import sys -sys.path.insert(0,'./facelib/detection/yolov5face') -model = torch.load('facelib/detection/yolov5face/yolov5n-face.pt', map_location='cpu')['model'] -torch.save(model.state_dict(),'weights/facelib/yolov5n-face.pth') \ No newline at end of file diff --git a/spaces/lharr345/alecsharpie-codegen_350m_html/README.md b/spaces/lharr345/alecsharpie-codegen_350m_html/README.md deleted file mode 100644 index 2d06cc5ac0f0f8ffe0ed85c291b4b27f33a36edb..0000000000000000000000000000000000000000 --- a/spaces/lharr345/alecsharpie-codegen_350m_html/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Alecsharpie-codegen 350m Html -emoji: 🐢 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lightli/bingo-newbing/README.md b/spaces/lightli/bingo-newbing/README.md deleted file mode 100644 index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/README.md +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
      - -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -
      - -## 演示站点 - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## 功能和特点 - -- 完全基于 Next.js 重写,高度还原 New Bing Web 版 UI,使用体验和 Bing AI 基本一致。 -- 支持 Docker 构建,方便快捷地部署和访问。 -- Cookie 可全局配置,全局共享。 -- 支持持续语音对话 - -## RoadMap - - - [x] 支持 wss 转发 - - [x] 支持一键部署 - - [x] 优化移动端展示 - - [x] 支持画图 - - [x] 支持语音输入(支持语音指令,目前仅支持 PC 版 Edge 及 Chrome 浏览器) - - [x] 支持语音输出(需要手动开启) - - [x] 支持图片输入 - - [x] 支持自定义域名 - - [ ] 支持历史记录 - - [ ] 适配深色模式 - - [ ] 支持内置提示词 - - [ ] 支持离线访问 - - [ ] 国际化翻译 - -## 一键部署 -你也可以一键部署自己的 New Bing AI 到 🤗 HuggingFace 。 - -### 部署到 Huggingface -1. 点击此图标 -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic),配置可以不改。 - -2. 部署署完成后,点击“设置” 》“站点域名”,点一下,复制一下 HF 域名信息,然后分享给别人即可。 - -> Huggingface 不支持绑定自己的域名,不过我们可以使用曲线救国的方式来达到这个目的 -> 1. 方式二,借助 Cloudflare Workers [部署Cloudflare Workers](#使用Cloudflare-Workers自定义域名) -> 2. 方式一,借助 Github Pages 及 iframe [如何绑定域名](https://github.com/weaigc/bingo/issues/4) - -### 使用Cloudflare Workers自定义域名 - -> 核心代码 [worker.js](./cloudflare/worker.js) - -- [注册 Cloudflare 账号](https://dash.cloudflare.com/sign-up) - -- 添加一个新的网站,需要你有自己的域名并且将域名`Name Server`托管给 Cloudflare 才行(更多信息可自行 Google) - -- 通过左侧菜单进入「Workers」,并点击「Create a Worker」。 - -- 创建 Worker 服务,复制 [worker.js](./cloudflare/worker.js) 全部代码,粘贴至创建的服务中,根据注释进行改动,保存并部署。 - -- 触发器 中自定义访问域名。 - -### 部署其它平台 -
      - -由于其他平台目前遭到 New Bing 封杀,会遇到很多问题,不再做推荐,有需要的可以自行查看 - - -#### 部署到 Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### 部署到 Vercel -如果你是 Vercel 付费用户,可以点以下链接一键部署到 Vercel。免费版本有[接口超时限制](https://vercel.com/docs/concepts/limits/overview),不推荐使用 - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### 部署到 Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -
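
> 补充说明:上文「使用Cloudflare Workers自定义域名」一节的核心思路,是用一个 Worker 把自定义域名上的请求反向代理到你已经部署好的 Bingo 实例。下面给出一个极简示意(并非仓库中 cloudflare/worker.js 的原始实现;其中 `UPSTREAM` 只是占位示例,需替换为你自己的部署地址;实际部署请以仓库内的 worker.js 为准):

```ts
// 极简示意:把自定义域名上的请求改写到上游的 Bingo 部署
// UPSTREAM 为占位示例,请替换成你自己的 Hugging Face Space 或其它部署地址
// 如需类型提示,可按需安装 @cloudflare/workers-types
const UPSTREAM = 'your-space-name.hf.space'

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url)
    url.hostname = UPSTREAM // 仅改写主机名,保留原有路径和查询参数
    // 基于原请求构造新请求,沿用原请求的 method、headers、body
    return fetch(new Request(url.toString(), request))
  },
}
```

将其部署为 Worker 并在「触发器」中绑定你的域名后,访问该域名即可转发到你的 Bingo 部署;更完整、可直接使用的实现请参考仓库中的 [worker.js](./cloudflare/worker.js)。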
      - -## 环境和依赖 - -- Node.js >= 18 -- Bing AI 的[身份信息](#如何获取-BING_HEADER)) - -## 安装和使用 - -* 使用 Node 启动 - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # 推荐使用 pnpm i -npm run build -npm run start -``` - -* 使用 Docker 启动 -```bash -docker pull weaigc/bingo -docker run --rm -it -p 7860:7860 weaigc/bingo -# 或者 -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## 如何获取 BING_HEADER -> 配置了 BING_HEADER 意味着你将自己的账号共享给所有使用此服务的人,如果不需要免登录画图的功能,不建议设置此变量 - -打开 https://www.bing.com 并登录,然后访问 https://www.bing.com/turing/captcha/challenge,通过人机校验,然后 - -![BING HEADER](./docs/images/curl.png) - -> 复制出来的内容应该如下所示。确认格式无误后,打开 https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 ,粘贴进去,点击“转成 BING_HEADER 并复制”,然后从剪切板粘贴即可得到。(你也可以先在网页上进行验证) - -以下是格式参考,需要注意的是,网页端保存的格式是以`curl`开头, 而服务端配置的 `BING_HEADER` 是 `base64` 格式,两者不能互通。 -
      -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
      - -
      -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdr
ZENYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
      - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/lightli/bingo-newbing/src/components/chat-panel.tsx b/spaces/lightli/bingo-newbing/src/components/chat-panel.tsx deleted file mode 100644 index 1fbc3c2bf05b914e0c229661832fbb560745f488..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/src/components/chat-panel.tsx +++ /dev/null @@ -1,153 +0,0 @@ -'use client' - -import * as React from 'react' -import Image from 'next/image' -import Textarea from 'react-textarea-autosize' -import { useAtomValue } from 'jotai' -import { useEnterSubmit } from '@/lib/hooks/use-enter-submit' -import { cn } from '@/lib/utils' - -import BrushIcon from '@/assets/images/brush.svg' -import ChatIcon from '@/assets/images/chat.svg' -import VisualSearchIcon from '@/assets/images/visual-search.svg' -import SendIcon from '@/assets/images/send.svg' -import PinIcon from '@/assets/images/pin.svg' -import PinFillIcon from '@/assets/images/pin-fill.svg' - -import { useBing } from '@/lib/hooks/use-bing' -import { voiceListenAtom } from '@/state' -import Voice from './voice' -import { ChatImage } from './chat-image' -import { ChatAttachments } from './chat-attachments' - -export interface ChatPanelProps - extends Pick< - ReturnType, - | 'generating' - | 'input' - | 'setInput' - | 'sendMessage' - | 'resetConversation' - | 'isSpeaking' - | 'attachmentList' - | 'uploadImage' - | 'setAttachmentList' - > { - id?: string - className?: string -} - -export function ChatPanel({ - isSpeaking, - generating, - input, - setInput, - className, - sendMessage, - resetConversation, - attachmentList, - uploadImage, - setAttachmentList -}: ChatPanelProps) { - const inputRef = React.useRef(null) - const {formRef, onKeyDown} = useEnterSubmit() - const [focused, setFocused] = React.useState(false) - const [active, setActive] = React.useState(false) - const [pin, setPin] = React.useState(false) - const [tid, setTid] = React.useState() - const voiceListening = useAtomValue(voiceListenAtom) - - const setBlur = React.useCallback(() => { - clearTimeout(tid) - setActive(false) - const _tid = setTimeout(() => setFocused(false), 2000); - setTid(_tid) - }, [tid]) - - const setFocus = React.useCallback(() => { - setFocused(true) - setActive(true) - clearTimeout(tid) - inputRef.current?.focus() - }, [tid]) - - React.useEffect(() => { - if (input) { - setFocus() - } - }, [input]) - - return ( -
      { - e.preventDefault() - if (generating) { - return; - } - if (!input?.trim()) { - return - } - setInput('') - setPin(false) - await sendMessage(input) - }} - ref={formRef} - > -
      -
      -
      -
      -
      -
      -
      - -
      -
      -
      -
      - chat -