diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/Nafasul-Mahmoom-Urdu-Pdf-Download-NEW.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/Nafasul-Mahmoom-Urdu-Pdf-Download-NEW.md deleted file mode 100644 index c19069dd094a11dfefc976177f9054152d8b170a..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/Nafasul-Mahmoom-Urdu-Pdf-Download-NEW.md +++ /dev/null @@ -1,91 +0,0 @@ -## Nafasul Mahmoom Urdu Pdf Download - - - - - - - - - -**CLICK HERE ✓ [https://www.google.com/url?q=https%3A%2F%2Fbytlly.com%2F2txKM2&sa=D&sntz=1&usg=AOvVaw0MFsVlAnwXkt6HrgQ1RWsQ](https://www.google.com/url?q=https%3A%2F%2Fbytlly.com%2F2txKM2&sa=D&sntz=1&usg=AOvVaw0MFsVlAnwXkt6HrgQ1RWsQ)** - - - - - - - - - - - - - -# Nafasul Mahmoom Urdu Pdf Download: A Comprehensive Guide - - - -Nafasul Mahmoom is a book of Islamic history and biography written by Sheikh Abbas Qummi in Arabic. It covers the events of Karbala and the martyrdom of Imam Hussain (a.s.), the grandson of Prophet Muhammad (s.a.w.), and his companions. It also narrates the hardships and sufferings of the Ahlul Bayt (a.s.), the family of the Prophet (s.a.w.), after the tragedy of Karbala. - - - -The book is considered one of the most authentic and reliable sources of Islamic history and has been translated into many languages, including Urdu. If you are looking for Nafasul Mahmoom Urdu Pdf Download, you have come to the right place. In this article, we will provide you with a comprehensive guide on how to download Nafasul Mahmoom Urdu Pdf for free and read it on your device. - - - -## How to Download Nafasul Mahmoom Urdu Pdf for Free - - - -There are many websites that offer Nafasul Mahmoom Urdu Pdf Download for free, but not all of them are safe and trustworthy. Some of them may contain viruses, malware, or pop-up ads that can harm your device or compromise your privacy. Therefore, you should be careful and choose a reputable and reliable website to download Nafasul Mahmoom Urdu Pdf. - - - -One of the best websites that we recommend for Nafasul Mahmoom Urdu Pdf Download is [Shia Multimedia](https://www.shiamultimedia.com/urdubooks.html). This website is dedicated to providing Islamic books, lectures, videos, and other resources in various languages, including Urdu. It has a large collection of Shia books in Urdu, including Nafasul Mahmoom. You can download Nafasul Mahmoom Urdu Pdf from this website for free and without any registration or subscription. - - - -To download Nafasul Mahmoom Urdu Pdf from Shia Multimedia, follow these simple steps: - - - -1. Go to [Shia Multimedia](https://www.shiamultimedia.com/urdubooks.html) website. - -2. Scroll down to the section "Islamic Books in Urdu" and click on "Nafasul Mahmoom". - -3. You will be redirected to a new page where you can see the details and contents of the book. - -4. Click on the button "Download PDF" at the bottom of the page. - -5. A new window will open where you can choose the location and name of the file to save it on your device. - -6. Click on "Save" and wait for the download to complete. - - - -Congratulations! You have successfully downloaded Nafasul Mahmoom Urdu Pdf for free. You can now open it on your device and read it at your convenience. - - - -## How to Read Nafasul Mahmoom Urdu Pdf on Your Device - - - -After downloading Nafasul Mahmoom Urdu Pdf, you may wonder how to read it on your device. Depending on the type of device you have, you may need a specific application or software to open and read PDF files. 
Here are some of the most common applications and software that you can use to read Nafasul Mahmoom Urdu Pdf on your device: - - - -- If you have a Windows PC or laptop, you can use [Adobe Acrobat Reader](https://get.adobe.com/reader/), which is a free and widely used software for viewing and printing PDF files. You can download it from its official website and install it on your device. Then, you can open Nafasul Mahmoom Urdu Pdf with Adobe Acrobat Reader and read it comfortably. - -- If you have a Mac computer or laptop, you can use [Preview](https://support.apple.com/en-us/HT201740), which is a built-in application that allows you to view and edit PDF files. You can simply double-click on Nafasul Mahmoom Urdu Pdf file and it will open in Preview. You can also use other features of Preview, such as zooming, highlighting, annotating, etc. - -- If you have an Android smartphone or tablet, you can use https://byltly.com/2uKyQ8



-

However, not everyone can afford to buy these addons, or they may want to try them before buying. That's why some people resort to using cracked Blender addons, which are illegal copies of the original addons that bypass license verification.

-

But are cracked Blender addons worth it? What are the risks and consequences of using them? And are there any alternatives? In this article, we will answer these questions and more.

-

The Risks of Using Cracked Blender Addons

-

Using cracked Blender addons may seem tempting, but it comes with many risks and drawbacks, such as:

- -

As you can see, using cracked Blender addons is not worth it. You are exposing yourself to many risks and problems that can harm your computer, your projects, your reputation, and your conscience.

-

The Alternatives to Cracked Blender Addons

-

If you want to use Blender addons without breaking the law or hurting the developers, there are several legitimate alternatives worth considering.

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Damaad Ke Intezaar Mein 3 Movie Hd 1) Dont Miss the Final Chapter of the Damaad Trilogy.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Damaad Ke Intezaar Mein 3 Movie Hd 1) Dont Miss the Final Chapter of the Damaad Trilogy.md deleted file mode 100644 index ef05a661157121de1dcd3ac85d2d0edd2d71838b..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Damaad Ke Intezaar Mein 3 Movie Hd 1) Dont Miss the Final Chapter of the Damaad Trilogy.md +++ /dev/null @@ -1,104 +0,0 @@ - -

HD Online Player (Damaad Ke Intezaar Mein 3 Movie Hd 1)

-

If you are looking for a fun romantic comedy to watch online, you might want to check out Damaad Ke Intezaar Mein. This Hindi movie was released in 2018 and stars Riteish Deshmukh, Genelia D'Souza, Paresh Rawal, and Anupam Kher. It is about a young couple who face various challenges and misunderstandings while waiting for their marriage to happen. In this article, we will tell you everything you need to know about the movie, including how to watch it online for free.

-

HD Online Player (Damaad Ke Intezaar Mein 3 Movie Hd 1)


DOWNLOADhttps://byltly.com/2uKx8p



-

What is Damaad Ke Intezaar Mein?

-

Damaad Ke Intezaar Mein is a Hindi romantic comedy directed by Priyadarshan and produced by Ratan Jain. It was released on 19 October 2018 to mixed reviews from critics and audiences, has a runtime of 2 hours and 15 minutes, and is rated U/A by the CBFC.

-

The movie follows the story of Rajesh (Riteish Deshmukh) and Anjali (Genelia D'Souza), who are in love and want to get married. However, their families have different plans for them. Rajesh's father (Paresh Rawal) is a wealthy businessman who wants his son to marry a rich girl. Anjali's father (Anupam Kher) is a retired army officer who wants his daughter to marry a brave soldier. The two fathers agree to arrange their children's marriage with each other, without knowing their true identities. Rajesh and Anjali pretend to go along with their parents' wishes, but secretly plan to elope. However, things get complicated when they encounter various obstacles and misunderstandings along the way.

-

Why watch Damaad Ke Intezaar Mein online?

-

There are many reasons why you might want to watch Damaad Ke Intezaar Mein online. Here are some of them:

- -

How to watch Damaad Ke Intezaar Mein online for free?

-

If you want to watch Damaad Ke Intezaar Mein online for free, you can follow these simple steps:

-


-
    -
  1. Go to any of these websites that offer free streaming or downloading of Hindi movies: , , , .
  2. Search for Damaad Ke Intezaar Mein in the search bar or browse through the categories.
  3. Select the movie from the results and click on the play button or download link.
  4. Enjoy watching Damaad Ke Intezaar Mein online for free.
-

Note: Some of these websites may require you to sign up or register before accessing their content. Some of them may also show ads or pop-ups that may interrupt your viewing experience. Be careful while clicking on any links or buttons that may lead you to malicious or inappropriate sites. Use a VPN service or an ad blocker if possible.

-

What are the reviews and ratings of Damaad Ke Intezaar Mein?

-

Damaad Ke Intezaar Mein received mixed reviews from critics and audiences alike. The movie was praised for its star cast, comedy scenes, music, and direction. However, it was also criticized for its predictable plot, cliched dialogues, weak climax, and lack of originality.

-

The movie holds a rating of 5.6/10 on IMDb based on 1,234 user ratings, 2/5 on Times of India based on 12 critic reviews, and 3/5 on Bollywood Hungama based on 8 critic reviews.

-

Here are some quotes from different reviews of the movie:

-
"Damaad Ke Intezaar Mein is a typical Priyadarshan comedy that relies on slapstick humor and situational comedy. The movie has some hilarious moments that will make you laugh out loud. However, it also has some dull moments that will make you yawn."
-
"Damaad Ke Intezaar Mein is a decent entertainer that will appeal to those who love light-hearted rom-coms. The movie has a good star cast that delivers decent performances. The music by Pritam is catchy and melodious."
-
"Damaad Ke Intezaar Mein is a boring and outdated comedy that fails to impress. The movie has a weak plot that is full of cliches and loopholes. The dialogues are corny and repetitive."
-

Is Damaad Ke Intezaar Mein worth watching?

-

In my opinion, Damaad Ke Intezaar Mein is worth watching if you are looking for a fun and easy-going comedy movie that does not require much thinking or analysis. The movie is not meant to be taken seriously or logically. It is meant to be enjoyed as a mindless entertainer that will make you laugh at some silly jokes and situations.

-

However, if you are looking for a fresh and innovative comedy movie that will surprise you with its plot twists and humor, then Damaad Ke Intezaar Mein is not worth watching. The movie is predictable and formulaic. It does not offer anything new or exciting.

-

What are some other movies like Damaad Ke Intezaar Mein?

-

If you liked Damaad Ke Intezaar Mein, you might also like these other movies that belong to the same genre of romantic comedy:

| Movie | Description |
| --- | --- |
| Welcome (2007) | A comedy movie about two brothers who try to find suitable husbands for their sister and niece, but end up in trouble with a gangster family. |
| Hungama (2003) | A comedy movie about two couples who get involved in a series of misunderstandings and confusions due to a case of mistaken identity. |
| Chup Chup Ke (2006) | A comedy movie about a debt-ridden man who pretends to be deaf and mute to escape from his creditors, but lands up in more trouble with a wealthy family. |
| De Dana Dan (2009) | A comedy movie about three friends who hatch a plan to kidnap a rich businessman's dog and demand a ransom, but face many obstacles and complications. |
| Malamaal Weekly (2006) | A comedy movie about a poor villager who wins a lottery, but dies before claiming it, leading to chaos among his relatives and neighbors. |

Conclusion

-

Damaad Ke Intezaar Mein is a Hindi romantic comedy released in 2018, starring Riteish Deshmukh, Genelia D'Souza, Paresh Rawal, and Anupam Kher. It follows a young couple who face various challenges and misunderstandings while waiting for their marriage to happen. As a typical Priyadarshan comedy, it relies on slapstick humor and situational comedy: there are hilarious moments that will make you laugh out loud, but also dull moments that will make you yawn. The movie is not meant to be taken seriously or logically; it is meant to be enjoyed as a mindless entertainer that makes you laugh at silly jokes and situations.

-

If you are looking for a fun and easy-going comedy movie that does not require much thinking or analysis, then Damaad Ke Intezaar Mein is worth watching. You can watch the movie online for free or for a low cost using various platforms and devices. You can also watch some other movies like Damaad Ke Intezaar Mein that belong to the same genre of romantic comedy.

-

So, what are you waiting for? Grab your popcorn and watch Damaad Ke Intezaar Mein online today!

-

FAQs

- -

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Ek Baby Tin Badmash In Hindi Movie.md b/spaces/1gistliPinn/ChatGPT4/Examples/Ek Baby Tin Badmash In Hindi Movie.md deleted file mode 100644 index fd8b963ff10a3a70e7dfab5dfb954674687e44d8..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Ek Baby Tin Badmash In Hindi Movie.md +++ /dev/null @@ -1,15 +0,0 @@ -

Ek Baby Tin Badmash In Hindi Movie


Download File === https://imgfil.com/2uy1lU



Ek Baby Tin Badmash in Hindi movie video song download. Kumar Huaa is one of the most famous movie actors in India. On his birthday, his movie Tin Badmatsi was released in cinemas. Kumari is also known as Kumar Huaa. He is an Indian actor who plays his own incarnation on screen. Most of his movies are produced by himself, and there are many titles of his in Indian cinema. He has played in many films of his own, and he has always been a good comedian with real comic skill.
-
-
-

diff --git a/spaces/1phancelerku/anime-remove-background/CSR Racing APK Mod The Ultimate Drag Racing Game with Unlimited Resources.md b/spaces/1phancelerku/anime-remove-background/CSR Racing APK Mod The Ultimate Drag Racing Game with Unlimited Resources.md deleted file mode 100644 index 56bc0d251db98275eb2822f3010f63d76484e769..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/CSR Racing APK Mod The Ultimate Drag Racing Game with Unlimited Resources.md +++ /dev/null @@ -1,121 +0,0 @@ -
-

CSR Racing APK Unlimited Money and Gold: How to Download and Play the Ultimate Drag Racing Game

-

If you are a fan of drag racing games, you have probably heard of CSR Racing, one of the most popular and realistic games in this genre. CSR Racing lets you experience the thrill of racing on the streets with over 100 licensed cars from top manufacturers like Ferrari, Lamborghini, McLaren, Bugatti, and more. You can customize your cars with various paint jobs, decals, rims, and performance parts, and compete against the best crews and drivers in different modes and events.

-

csr racing apk unlimited money and gold


Download Zip ✒ ✒ ✒ https://jinyurl.com/2uNNWg



-

But what if you could play CSR Racing with unlimited money and gold? Wouldn't that make the game even more fun and exciting? Well, you can do that by downloading CSR Racing APK unlimited money and gold, a modified version of the game that gives you access to unlimited resources. With this version, you can buy any car you want, upgrade it to the max, and dominate the racing scene. In this article, we will show you how to download and install CSR Racing APK unlimited money and gold, as well as some tips and tricks for playing the game.

-

Features of CSR Racing APK Unlimited Money and Gold

-

Unlimited resources to customize your cars and upgrade your performance

-

One of the main features of CSR Racing APK unlimited money and gold is that it gives you unlimited resources to spend on your cars. You can buy any car you want from the showroom, or unlock new ones by winning races. You can also customize your cars with various options, such as paint colors, decals, rims, license plates, etc. You can also upgrade your cars' performance by installing new engines, turbos, intakes, exhausts, tires, etc. With unlimited money and gold, you can make your cars look and perform exactly how you want them.

-

Access to over 100 licensed cars from top manufacturers

-

Another feature of CSR Racing APK unlimited money and gold is that it gives you access to over 100 licensed cars from top manufacturers. You can choose from a wide range of cars, from classic muscle cars to modern supercars. Some of the cars available in the game are:

-

- - - - - - - - - - - - - - - - -
| Manufacturer | Cars |
| --- | --- |
| Ferrari | F40, F12berlinetta, LaFerrari, etc. |
| Lamborghini | Aventador, Huracan, Veneno, etc. |
| McLaren | P1, 650S, MP4-12C, etc. |
| Bugatti | Veyron, Chiron, Divo, etc. |
| Pagani | Zonda, Huayra, etc. |
| Koenigsegg | Agera, One:1, Regera, etc. |
| Aston Martin | DB5, DBS, One-77, etc. |
| Audi | R8, RS5, RS6, etc. |
| Bentley | Continental GT, Mulsanne, etc. |
| Chevrolet | Corvette, Camaro, etc. |
| Dodge | Challenger, Charger, Viper, etc. |
| Ford | Mustang, GT, Focus RS, etc. |
| Nissan | GT-R, 370Z, Skyline, etc. |
| Porsche | 911, Cayman, Panamera, etc. |
-

And many more. You can see the full list of cars in the game here.

-

Realistic graphics and sound effects that immerse you in the racing world

-

Another feature of CSR Racing APK unlimited money and gold is that it has realistic graphics and sound effects that immerse you in the racing world. The game uses high-quality 3D graphics that render the cars and the environments in stunning detail. You can see the reflections of the lights on your car's body, the smoke from your tires, and the damage from collisions. The game also uses realistic sound effects that make you feel like you are in a real drag race. You can hear the roar of your engine, the screech of your brakes, and the cheers of the crowd. The game also features licensed music tracks from famous artists to pump up your adrenaline.

-

Challenging races and events against the best crews and drivers

-

Another feature of CSR Racing APK unlimited money and gold is that it has challenging races and events against the best crews and drivers. The game has five tiers of difficulty, each with its own boss and crew. You have to beat them all to progress to the next tier and unlock new cars and parts. The game also has various events that test your skills and reward you with prizes. Some of the events are:

- -

Online multiplayer mode to compete with other players around the world

-

Another feature of CSR Racing APK unlimited money and gold is that it has an online multiplayer mode to compete with other players around the world. You can connect to Facebook or Google Play Games and race against your friends or random opponents. You can also join a crew or create your own and cooperate with other players to earn more rewards. The game has a global leaderboard that ranks the best players and crews based on their performance. You can also chat with other players and share your tips and tricks.

-

How to Download and Install CSR Racing APK Unlimited Money and Gold

-

Step 1: Download the APK file from a trusted source

-

The first step to download and install CSR Racing APK unlimited money and gold is to download the APK file from a trusted source. You can find many websites that offer this file for free, but be careful, as some of them may contain viruses or malware. We recommend using this link to download the APK file safely and securely. The file size is about 76 MB, so make sure you have enough space on your device.

-

Step 2: Enable unknown sources on your device settings

-

The second step to download and install CSR Racing APK unlimited money and gold is to enable unknown sources on your device settings. This is necessary because the APK file is not from the official Google Play Store, so you need to allow your device to install apps from other sources. To do this, go to your device settings > security > unknown sources > enable. This may vary depending on your device model and Android version.

-

Step 3: Install the APK file and launch the game

-

The third step to download and install CSR Racing APK unlimited money and gold is to install the APK file and launch the game. To do this, locate the downloaded file on your device storage and tap on it. You may see a pop-up window asking for your permission to install the app. Tap on install and wait for the process to finish. Once the app is installed, you can launch it by tapping on its icon on your home screen or app drawer.

-

Step 4: Enjoy the unlimited money and gold in CSR Racing

-

The fourth and final step to download and install CSR Racing APK unlimited money and gold is to enjoy the unlimited money and gold in CSR Racing. When you launch the game, you will see that you have unlimited money and gold in your account. You can use them to buy any car you want, customize it, upgrade it, and race with it. You can also enjoy all the features of the game without any limitations or restrictions.

-

Tips and Tricks for Playing CSR Racing APK Unlimited Money and Gold

-

Choose the right car for each race and tune it accordingly

-

One of the tips and tricks for playing CSR Racing APK unlimited money and gold is to choose the right car for each race and tune it accordingly. Different cars have different strengths and weaknesses, such as speed, acceleration, handling, weight, etc. You should choose a car that suits the type of race you are entering, such as a drag race, a sprint race, a circuit race, etc. You should also tune your car to optimize its performance, such as adjusting the gear ratios, the tire pressure, the nitro boost, etc. You can use the test drive mode to test your car before entering a race.

-

Master the timing of your shifts and nitro boosts

-

Another tip and trick for playing CSR Racing APK unlimited money and gold is to master the timing of your shifts and nitro boosts. The game is based on drag racing, which means that you have to shift gears manually at the right time to maintain your speed and momentum. You should shift gears when the needle on the tachometer reaches the green zone, which indicates the optimal point for shifting. You should also use your nitro boost wisely, as it can give you a burst of speed that can make a difference in a close race. You should use your nitro boost when you are in a high gear, as it will have more effect than when you are in a low gear.

-

Use your money and gold wisely to buy new cars and upgrades

-

Another tip and trick for playing CSR Racing APK unlimited money and gold is to use your money and gold wisely to buy new cars and upgrades. Even though you have unlimited resources, you should still spend them smartly to get the best value for your money. You should buy new cars that are better than your current ones, as they will help you win more races and progress faster. You should also buy upgrades that improve your car's performance, such as engines, turbos, intakes, exhausts, tires, etc. You should avoid buying cosmetic items that do not affect your car's performance, such as paint jobs, decals, rims, license plates, etc., unless you really like them.

-

Challenge other players online and join a crew for more rewards

-

Another tip and trick for playing CSR Racing APK unlimited money and gold is to challenge other players online and join a crew for more rewards. The game has an online multiplayer mode that lets you race against other players around the world. You can challenge your friends or random opponents and see who is the best drag racer. You can also join a crew or create your own and cooperate with other players to earn more rewards. The game has a global leaderboard that ranks the best players and crews based on their performance. You can also chat with other players and share your tips and tricks.

-

Conclusion

-

CSR Racing APK unlimited money and gold is a modified version of CSR Racing that gives you access to unlimited resources to play the game without any limitations or restrictions. You can enjoy all the features of the game, such as over 100 licensed cars from top manufacturers, realistic graphics and sound effects, challenging races and events against the best crews and drivers, online multiplayer mode to compete with other players around the world, etc. You can also download and install CSR Racing APK unlimited money and gold easily and safely by following the steps we have shown you in this article. You can also use some tips and tricks we have shared with you to improve your skills and performance in the game. CSR Racing APK unlimited money and gold is a great way to enjoy the ultimate drag racing game with unlimited fun and excitement. Download it now and start racing!

-

FAQs

-

Here are some frequently asked questions about CSR Racing APK unlimited money and gold:

-

Q: Is CSR Racing APK unlimited money and gold safe to download and install?

-

A: Yes, CSR Racing APK unlimited money and gold is safe to download and install, as long as you use a trusted source like the one we have provided in this article. However, you should always be careful when downloading and installing any APK file from unknown sources, as they may contain viruses or malware that can harm your device or steal your data.

-

Q: Is CSR Racing APK unlimited money and gold compatible with my device?

-

A: CSR Racing APK unlimited money and gold is compatible with most Android devices that run on Android 4.0.3 or higher. However, some devices may not support the game due to their hardware specifications or software limitations. You can check the compatibility of your device by visiting the official Google Play Store page of CSR Racing here.

-

Q: Do I need to root my device to use CSR Racing APK unlimited money and gold?

-

A: No, you do not need to root your device to use CSR Racing APK unlimited money and gold. The APK file does not require any special permissions or access to your device's system files. You can simply install it as any other app and enjoy the game.

-

Q: Will I get banned from the game if I use CSR Racing APK unlimited money and gold?

-

A: No, you will not get banned from the game if you use CSR Racing APK unlimited money and gold. The APK file does not interfere with the game's servers or online features, so you can play the game without any risk of getting banned. However, you should always respect the game's rules and terms of service, and avoid cheating or abusing the game's features.

-

Q: Can I update CSR Racing APK unlimited money and gold to the latest version?

-

A: Yes, you can update CSR Racing APK unlimited money and gold to the latest version, as long as the source you downloaded it from provides regular updates. You can check for updates by visiting the source's website or by using an app updater tool. However, you should always backup your game data before updating, as some updates may cause compatibility issues or data loss.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Data Ori Mobile Legends The Ultimate Guide to Install and Play Offline.md b/spaces/1phancelerku/anime-remove-background/Data Ori Mobile Legends The Ultimate Guide to Install and Play Offline.md deleted file mode 100644 index 4465f238415faac2d4c530a8781aeb0108c3d625..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Data Ori Mobile Legends The Ultimate Guide to Install and Play Offline.md +++ /dev/null @@ -1,116 +0,0 @@ -
-

How to Download Data Ori Mobile Legends CR IKY Cardiac.zip File

-

If you are a fan of Mobile Legends, you might have heard of data ori mobile legends. This is a file that contains some of the original data of the game, such as heroes, skins, items, skills, and more. By downloading and installing this file, you can unlock and access some of the features that are not available in the official version of the game.

-

data ori mobile legends cr iky cardiac.zip download


Download Ziphttps://jinyurl.com/2uNOCq



-

One of the most popular data ori mobile legends files is cr iky cardiac.zip. This is a zip file that was created by a YouTube user named CR IKY Cardiac. He claims that this zip file can enhance your gaming experience by adding some new heroes, skins, effects, sounds, and animations to your Mobile Legends game.

-

Some of the benefits of downloading data ori mobile legends cr iky cardiac.zip file are:

- -

If you are interested in downloading data ori mobile legends cr iky cardiac.zip file, here are the steps that you need to follow:

-

How to Download Data Ori Mobile Legends CR IKY Cardiac.zip File

-

Step 1: Download the zip file from the link provided

-

The first thing you need to do is to download the zip file from this link: (https://bit.ly/3ALzDGz). This link will take you to a website where you can download data ori mobile legends cr iky cardiac.zip file for free. The size of the zip file is about 400 MB, so make sure you have enough storage space on your device before downloading it.

-

Step 2: Extract the zip file using a program like WinZip or 7-Zip

-

After downloading the zip file, you need to extract it using a program like WinZip or 7-Zip. These programs can unzip or decompress zip files and let you access their contents. You can download WinZip from (https://www.winzip.com/en/download/winzip/) or 7-Zip from (https://www.7-zip.org/download.html).

-

To extract the zip file, follow these steps:

-
    -
  1. Open WinZip or 7-Zip and locate the zip file that you downloaded.
  2. Select the zip file and click on the Extract button.
  3. Choose a destination folder where you want to save the extracted files.
  4. Wait for the extraction process to finish.
-

Once you have extracted the zip file, you should see a folder named data ori mobile legends cr iky cardiac. This folder contains all the files that you need to install on your device.
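If you prefer to script this step on your computer instead of using WinZip or 7-Zip, the extraction can also be done with Python's standard library. The sketch below is a minimal example under the assumption that the zip was saved with the file name used in this guide; adjust both paths to match your own system.

```python
import zipfile
from pathlib import Path

# File name assumed from this guide; change both paths to match your system.
archive = Path("data ori mobile legends cr iky cardiac.zip")
target = Path("data_ori_extracted")

target.mkdir(exist_ok=True)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)  # unpack every file in the archive into the target folder
    print(f"Extracted {len(zf.namelist())} files to {target.resolve()}")
```

Either way, the result is the same extracted folder described above.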

-


-

Step 3: Copy the extracted files to the internal storage of your Android device

-

The next step is to copy the extracted files to the internal storage of your Android device. This is where the Mobile Legends game is installed and where it reads its data files. To copy the files, follow these steps:

-
    -
  1. Connect your Android device to your computer using a USB cable.
  2. Open your device's internal storage and look for a folder named Android.
  3. Open the Android folder and look for a subfolder named data.
  4. Open the data folder and look for a subfolder named com.mobile.legends.
  5. Open the com.mobile.legends folder and look for a subfolder named files.
  6. Open the files folder and look for a subfolder named dragon2017.
  7. Open the dragon2017 folder and look for a subfolder named assets.
  8. Open the assets folder and look for a subfolder named Document.
  9. Open the Document folder and delete all the files inside it.
  10. Copy all the files from the data ori mobile legends cr iky cardiac folder that you extracted earlier and paste them into the Document folder.
-

By doing this, you are replacing the original data files of Mobile Legends with the new ones from data ori mobile legends cr iky cardiac.zip file. This will allow you to access the new features and heroes that are included in the zip file.

-

Step 4: Open the Mobile Legends game and enjoy the new features and heroes

-

The final step is to open the Mobile Legends game and enjoy the new features and heroes that are added by data ori mobile legends cr iky cardiac.zip file. To do this, follow these steps:

-
    -
  1. Disconnect your Android device from your computer and turn on your internet connection.
  2. Launch the Mobile Legends game from your device's home screen or app drawer.
  3. Wait for the game to load and update its data if necessary.
  4. Go to the main menu and check out the new heroes, skins, effects, sounds, and animations that are available in the game.
  5. Select your favorite hero and start playing with it in any mode that you like.
-

Congratulations! You have successfully downloaded and installed data ori mobile legends cr iky cardiac.zip file on your Android device. You can now enjoy playing Mobile Legends with some of the coolest features that are not available in the official version of the game.

-

Tips and Tricks for Using Data Ori Mobile Legends CR IKY Cardiac.zip File

-

To make sure that you have a smooth and satisfying gaming experience with data ori mobile legends cr iky cardiac.zip file, here are some tips and tricks that you can follow:

-

Tip 1: Make sure you have enough storage space on your device before downloading the zip file

-

Data ori mobile legends cr iky cardiac.zip file is quite large in size, about 400 MB. This means that you need to have enough storage space on your device before downloading it. Otherwise, you might encounter some errors or problems during or after downloading it. To avoid this, you can check your device's storage capacity by going to Settings > Storage. You can also delete some unwanted or unused files or apps from your device to free up some space.

-

Tip 2: Back up your original data files before replacing them with the new ones

-

Data ori mobile legends cr iky cardiac.zip file replaces your original Mobile Legends data files with new ones. This means that if you want to revert to the original data files, you will need a backup of them. Otherwise, you might lose some of your progress or settings in the game. To avoid this, you can back up your original data files by copying them from your device's internal storage to another location, such as your computer or an external storage device. You can also use a backup app like Titanium Backup or Helium Backup to back up your entire Mobile Legends app along with its data files.

-

Tip 3: Check for updates regularly to avoid compatibility issues with the game

-

Data ori mobile legends cr iky cardiac.zip file is not an official version of the game and it might not be compatible with the latest updates or patches of the game. This means that you might encounter some bugs, glitches, or errors when using the zip file. To avoid this, you should check for updates regularly and download the latest version of data ori mobile legends cr iky cardiac.zip file from the link provided. You can also follow CR IKY Cardiac on YouTube or other social media platforms to get notified of any updates or changes to the zip file.

-

Conclusion

-

Data ori mobile legends cr iky cardiac.zip file is a great way to enhance your Mobile Legends gaming experience by adding new features and heroes that are not available in the official version of the game. By following the steps and tips in this article, you can easily download and install the file on your Android device and enjoy Mobile Legends with some of its coolest extra features.

-

So what are you waiting for? Download data ori mobile legends cr iky cardiac.zip file now and unleash your full potential in Mobile Legends. You will be amazed by how much fun and excitement you can have with this zip file. Just remember to backup your original data files, check for updates regularly, and have enough storage space on your device before downloading the zip file.

-

If you have any questions or feedback about data ori mobile legends cr iky cardiac.zip file, feel free to leave a comment below or contact CR IKY Cardiac on his YouTube channel or other social media platforms. He will be happy to help you out and answer your queries.

-

FAQs

-

Q1: What is the size of the data ori mobile legends cr iky cardiac.zip file?

-

A1: The size of the data ori mobile legends cr iky cardiac.zip file is about 400 MB. You need to have enough storage space on your device before downloading it.

-

Q2: Is it safe to download and use the zip file?

-

A2: Yes, it is safe to download and use the zip file. The zip file does not contain any viruses, malware, or harmful content. However, you should always download the zip file from the link provided in this article or from CR IKY Cardiac's YouTube channel or other social media platforms. Do not download the zip file from any unknown or suspicious sources as they might contain some malicious or fake files.

-

Q3: How can I uninstall the zip file if I want to revert to the original data files?

-

A3: If you want to uninstall the zip file and revert to the original data files, delete the files that you copied into the Document folder from the data ori mobile legends cr iky cardiac folder. Then, copy the files that you backed up from your original data and paste them back into the Document folder. This will restore your original data files and remove any changes made by data ori mobile legends cr iky cardiac.zip file.

-

Q4: Can I use the zip file on other devices or platforms?

-

A4: No, you cannot use the zip file on other devices or platforms. The zip file is only compatible with Android devices that have Mobile Legends installed on them. You cannot use the zip file on iOS devices, Windows devices, Mac devices, or any other platforms.

-

Q5: Where can I find more information or support for the zip file?

-

A5: If you want to find more information or support for the zip file, you can visit CR IKY Cardiac's YouTube channel or other social media platforms. He is the creator of data ori mobile legends cr iky cardiac.zip file and he regularly posts videos and updates about it. You can also leave a comment on his videos or contact him directly if you have any questions or feedback about the zip file.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download J Zs Latest Album 444 and Enjoy His Best Hits.md b/spaces/1phancelerku/anime-remove-background/Download J Zs Latest Album 444 and Enjoy His Best Hits.md deleted file mode 100644 index 79c1e9cd5591ac8de1c010c3e5e8814e86546d3f..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download J Zs Latest Album 444 and Enjoy His Best Hits.md +++ /dev/null @@ -1,138 +0,0 @@ -
-

How to Download J Z Music

-

J Z is a term that can refer to two different types of music: jazz and Jay-Z. Jazz is a genre of music that originated in the African-American communities of New Orleans in the late 19th and early 20th centuries, with its roots in blues and ragtime. Jazz is characterized by complex rhythms, improvisation, swing, and expressive melodies. Jay-Z is a rapper, songwriter, producer, entrepreneur, and one of the most influential figures in hip-hop culture. He has released 14 studio albums, won 23 Grammy Awards, and sold over 100 million records worldwide.

-

download j z


Download >> https://jinyurl.com/2uNSmN



-

Whether you are a fan of jazz, Jay-Z, or both, you might want to download some j z music to your computer or smartphone. Downloading j z music can allow you to enjoy it offline, create your own playlists, transfer it to other devices, or use it for other creative purposes. However, downloading j z music can also be tricky, as you need to find reliable sources, pay for the music or get it for free legally, and deal with different formats and quality levels.

-

In this article, we will show you how to download j z music from various websites and platforms, such as iTunes, YouTube, SoundCloud, Bandcamp, DatPiff, Free Music Archive, and The Internet Archive. We will also give you some tips and recommendations for downloading j z music safely and easily.

-

How to Download J Z Music from iTunes

-

iTunes is one of the most popular and convenient ways to buy and download digital music. iTunes offers a large catalog of songs and albums from various genres and artists, including jazz and Jay-Z. You can use iTunes on your Windows PC or Mac, or the iTunes Store app on your iPhone, to download j z music.

-


-

Here are the steps to download j z music from iTunes:

-
    -
  1. Install iTunes if you're on Windows. Unfortunately, Windows does not include a built-in option for purchasing and downloading music; however, iTunes is a good alternative. You'll also need to create an Apple ID account and enter payment information for it before you can purchase music through iTunes on Windows. iTunes will be installed by default if you're on a Mac.
  2. Open iTunes. Click or double-click the iTunes app icon, which resembles a multicolored musical note on a white background.
  3. Sign in with your Apple ID. If you aren't signed into iTunes, do the following: Click the Account menu item at the top of iTunes (Windows) or the screen (Mac). Click Sign In in the drop-down menu. Enter your Apple ID email address and password in the resulting pop-up window.
  4. Click Store. It's a tab near the top of the iTunes window.
  5. Click the search bar. This is the "Search" text box in the upper-right side of the iTunes window.
  6. Search for j z music. You can type "jazz" or "Jay-Z" or both to find the music you want. You can also use filters such as genre, artist, album, or song to narrow down your search results.
  7. Select the music you want to download. You can preview the music by clicking the play button next to the song or album title. You can also see the price and the rating of the music.
  8. Click Buy. This is a blue button below the music's price tag. If you have enough funds in your Apple ID account, this will purchase the music and add it to your iTunes library. If not, you will be prompted to enter your payment information or redeem a gift card.
  9. View and transfer your downloaded music. You can find your downloaded music in the Library tab of iTunes. You can also sync your music to your iPhone, iPad, iPod, or other devices by connecting them to your computer and following the instructions on iTunes.
-

How to Download J Z Music from YouTube and SoundCloud

-

YouTube and SoundCloud are two of the most popular platforms for streaming and sharing music online. You can find a lot of j z music on these platforms, from official releases to remixes, covers, live performances, and more. However, these platforms do not offer a direct way to download the music to your computer or smartphone. You will need to use third-party apps or websites that can convert and download the music from YouTube and SoundCloud.

-

Here are some of the steps to download j z music from YouTube and SoundCloud:

-
    -
  1. Find the music you want to download on YouTube or SoundCloud. You can use the search bar or browse by categories, channels, playlists, or recommendations.
  2. Copy the URL of the music. You can do this by right-clicking on the video or audio and selecting Copy video URL (YouTube) or Copy link (SoundCloud). Alternatively, you can copy the URL from the address bar of your browser.
  3. Paste the URL into a converter website or app. There are many websites and apps that can convert and download YouTube and SoundCloud music, such as YTMP3, 4K Video Downloader, MP3Juices, SoundCloud Downloader, etc. You can find them by searching on Google or Bing. Make sure you use a reputable and safe website or app that does not contain malware or viruses.
  4. Select the format and quality of the music. Most converter websites and apps will let you choose between MP3 (audio only) or MP4 (video and audio) formats, as well as different quality levels such as 128 kbps, 192 kbps, 320 kbps, etc. Choose the format and quality that suit your needs and preferences.
  5. Click Download or Convert. This will start the conversion and downloading process. Depending on the size and length of the music, this may take a few seconds or minutes.
  6. Save and transfer your downloaded music. Once the download is complete, you can save the music file to your computer or smartphone. You can also transfer it to other devices or use it for other purposes.
-

How to Download J Z Music from Other Websites

-

Besides iTunes, YouTube, and SoundCloud, there are many other websites that offer free or paid music downloads, such as Bandcamp, DatPiff, Free Music Archive, and The Internet Archive. These websites have a variety of j z music from different artists, genres, eras, and regions. You can download j z music from these websites by following their instructions and terms of use.

-

Here are some of the steps to download j z music from other websites:

-
    -
  1. Find the website that has the music you want to download. You can search for j z music on these websites by using their search bar or browsing by categories, tags, genres, artists, albums, songs, etc.
  2. Select the music you want to download. You can preview the music by clicking the play button next to the title. You can also see the details such as the name, artist, album, genre, release date, etc.
  3. Click Download or Buy Now. Depending on the website and the music, you may be able to download it for free or pay a certain amount of money. Some websites may also ask you to enter your email address or create an account before downloading.
  4. Choose the format and quality of the music. Some websites may give you options to choose between different formats such as MP3, WAV, FLAC, etc., and different quality levels such as low, medium, or high. Choose the format and quality that suit your needs and preferences.
  5. Click Save or Confirm. This will start the downloading process. Depending on the size and length of the music, this may take a few seconds or minutes.
  6. Save and transfer your downloaded music. Once the download is complete, you can save the music file to your computer or smartphone. You can also transfer it to other devices or use it for other purposes.
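For sites that expose a direct file URL, such as many recordings on the Free Music Archive or The Internet Archive, the actual download step can also be scripted. Here is a minimal Python sketch; the URL and output file name below are placeholders rather than real links, so substitute the direct download link the site gives you.

```python
import urllib.request

# Placeholder values: replace them with the direct download link and your preferred file name.
url = "https://example.org/path/to/track.mp3"
destination = "track.mp3"

urllib.request.urlretrieve(url, destination)  # fetch the file and write it to disk
print("Saved", destination)
```

This only works for files the site lets you download directly; for anything behind a paywall or login, follow the site's own download or purchase flow.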
-

Conclusion

-

Downloading j z music can be a fun and rewarding way to enjoy your favorite genre or artist. You can download j z music from various websites and platforms, such as iTunes, YouTube, SoundCloud, Bandcamp, DatPiff, Free Music Archive, and The Internet Archive. However, you need to be careful and responsible when downloading j z music, as you may encounter some risks or challenges, such as malware, viruses, legal issues, low quality, incompatible formats, etc.

-

Here are some tips and recommendations for downloading j z music safely and easily:

- -

We hope this article has helped you learn how to download j z music from various sources. If you have any questions or feedback, please feel free to leave a comment below. Happy downloading!

-

FAQs

-

What is j z music?

-

J Z is a term that can refer to two different types of music: jazz and Jay-Z. Jazz is a genre of music that originated in the African-American communities of New Orleans in the late 19th and early 20th centuries, with its roots in blues and ragtime. Jazz is characterized by complex rhythms, improvisation, swing, and expressive melodies. Jay-Z is a rapper, songwriter, producer, entrepreneur, and one of the most influential figures in hip-hop culture. He has released 14 studio albums, won 23 Grammy Awards, and sold over 100 million records worldwide.

-

What are some of the best j z songs or albums?

-

This is a subjective question that depends on your personal taste and preferences. However, some of the most popular and acclaimed j z songs or albums are:

- -

What are some of the benefits of downloading j z music?

-

Some of the benefits of downloading j z music are:

- -

What are some of the risks or challenges of downloading j z music?

-

Some of the risks or challenges of downloading j z music are:

- -

How can I discover new j z music?

-

There are many ways to discover new j z music, such as:

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Extra Lives APK - A Unique Zombie Game with an Above-Average Combat System.md b/spaces/1phancelerku/anime-remove-background/Extra Lives APK - A Unique Zombie Game with an Above-Average Combat System.md deleted file mode 100644 index 653100a34becdd5189c4fb501b977629209ffc67..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Extra Lives APK - A Unique Zombie Game with an Above-Average Combat System.md +++ /dev/null @@ -1,162 +0,0 @@ - -

Extra Lives APK Download: How to Survive in a Zombie Apocalypse

-

Do you love zombie games? Do you want to experience a realistic and immersive survival adventure in a post-apocalyptic world? If yes, then you should try Extra Lives APK, a free Android game that lets you see how long you can survive in a world full of the brainless and the heartless!

-

What is Extra Lives APK?

-

Extra Lives APK is an Android game developed by MDickie, the creator of popular games like Wrestling Revolution, Hard Time, and Super City. It is a 2D pixelated game that combines adventure, action, and simulation genres. You can create your own character and interact with over 200 other characters across 8 warring factions, each with their own beliefs and agendas. You can explore over 50 different locations and use hundreds of interactive objects to help you along the way. You can also fight zombies and humans using an advanced combat system that allows you to tear enemies apart with your bare hands or weapons.

-

extra lives apk download


DOWNLOAD ✫✫✫ https://jinyurl.com/2uNSkz



-

Features of Extra Lives APK

-

Some of the features that make Extra Lives APK stand out from other zombie games are:

- -

How to download and install Extra Lives APK

-

If you want to download and install Extra Lives APK on your Android device, you can follow these simple steps:

-
    -
  1. Go to [Extra Lives APK (Android Game) - Free Download - APKCombo] and click on the "Download APK" button.
  2. Wait for the download to finish and then open the file.
  3. Allow the installation of unknown sources if prompted by your device.
  4. Follow the instructions on the screen to complete the installation.
  5. Launch the game and enjoy! If you would rather install from a computer, see the adb sketch after this list.
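
As referenced in step 5, if you prefer to sideload the file from a computer rather than tapping through the on-device prompts, the same installation can be driven with adb. This is only a sketch under a few assumptions: adb is installed and on your PATH, USB debugging is enabled on the phone, and the APK file name below is a placeholder for whatever you actually downloaded.

```python
import subprocess

apk_path = "extra_lives.apk"  # placeholder name for the APK file you downloaded

# 'adb install -r' installs (or reinstalls) the package on the device connected over USB.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)

# adb prints "Success" on a successful install; anything else usually explains the failure.
print(result.stdout or result.stderr)
```

The `-r` flag simply reinstalls over an existing copy, which is convenient when updating to a newer build.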
-

How to play Extra Lives APK

-

Controls and commands

-

The controls and commands of Extra Lives APK are different from other games, so you may need some time to get used to them. Here are some basic tips:

- -

Tips and tricks

-

To survive longer in Extra Lives APK, you should keep these tips and tricks in mind:

- -

Why you should play Extra Lives APK

-

Pros and cons of Extra Lives APK

-

Extra Lives APK is not a perfect game, but it has many pros and cons that make it worth playing. Here are some of them:

| Pros | Cons |
| --- | --- |
| It is free to play and download. | It has ads and in-app purchases. |
| It has a unique and engaging storyline. | It has some bugs and glitches. |
| It has a large and diverse map. | It has low-quality graphics and sound. |
| It has a realistic physics engine. | It has a steep learning curve. |
| It has a variety of weapons, items, and vehicles. | It has a limited inventory space. |
| It has a customizable character system. | It has a lack of character development. |
| It has a deathmatch mode for casual fun. | It has no multiplayer mode or online features. |
-

Ratings and reviews of Extra Lives APK

-

Extra Lives APK has received mostly positive ratings and reviews from players who have tried it. It has a 4.2 out of 5 stars rating on Google Play Store, with over 100,000 downloads and 20,000 reviews. Here are some of the comments from the users:

-

extra lives game apk download
-extra lives mod apk download
-extra lives zombie survival apk download
-extra lives mdickie apk download
-extra lives android game apk download
-extra lives hack apk download
-extra lives unlimited health apk download
-extra lives latest version apk download
-extra lives free apk download
-extra lives full apk download
-extra lives premium apk download
-extra lives cracked apk download
-extra lives unlocked apk download
-extra lives offline apk download
-extra lives no ads apk download
-extra lives 1.150.64 apk download
-extra lives 1.150.32 apk download
-extra lives 1.14 apk download
-extra lives adventure game apk download
-extra lives zombie game apk download
-extra lives mdickie game apk download
-extra lives android game mod apk download
-extra lives hack mod apk download
-extra lives unlimited health mod apk download
-extra lives latest version mod apk download
-extra lives free mod apk download
-extra lives full mod apk download
-extra lives premium mod apk download
-extra lives cracked mod apk download
-extra lives unlocked mod apk download
-extra lives offline mod apk download
-extra lives no ads mod apk download
-how to download extra lives apk
-how to install extra lives apk
-how to play extra lives apk
-how to update extra lives apk
-how to hack extra lives apk
-how to get unlimited health in extra lives apk
-how to get premium features in extra lives apk
-how to remove ads in extra lives apk
-where to download extra lives apk
-where to find extra lives apk
-where to get extra lives mod apk
-where to get unlimited health in extra lives mod apk
-where to get premium features in extra lives mod apk
-where to remove ads in extra lives mod apk
-why to download extra lives apk
-why to play extra lives game
-why to choose extra lives mod apk
-why to use unlimited health in extra lives mod apk

-
"This game is amazing! It's like GTA but with zombies. You can do anything you want, from killing zombies to making friends. The storyline is also very interesting and unpredictable. I love this game!" - John Smith
-
"This game is fun but challenging. You have to be careful of your health, hunger, thirst, and morale. You also have to deal with other factions and random events. It's not easy to survive in this game, but it's very rewarding." - Jane Doe
-
"This game is good but needs improvement. The graphics and sound are very low quality. The controls and commands are also very hard to master. The game also has some bugs and glitches that ruin the gameplay. I hope the developer fixes these issues soon." - Bob Lee
-

Conclusion

-

Summary of the article

-

In conclusion, Extra Lives APK is a free Android game that lets you survive in a zombie apocalypse. It has a unique storyline, a large map, a realistic physics engine, a variety of weapons, items, and vehicles, a customizable character system, and a deathmatch mode. It also has some drawbacks, such as ads, in-app purchases, bugs, glitches, low-quality graphics and sound, steep learning curve, limited inventory space, lack of character development, and no multiplayer mode or online features. However, if you are looking for a fun and immersive zombie game that offers endless possibilities and outcomes, you should give Extra Lives APK a try!

-

FAQs

-

Here are some frequently asked questions about Extra Lives APK:

-
    -
  1. What are the requirements to play Extra Lives APK?

    You need an Android device running Android 4.0 or higher and at least 50 MB of free storage space to play Extra Lives APK.

  2. How can I upgrade to "infinitely" to enhance my experience?

    You can upgrade to "infinitely" by paying $4.99 through an in-app purchase. This will remove all ads, unlock all items and locations, increase your inventory space, allow you to edit any character or faction, enable cheat codes, and more.

  3. How can I change my faction or recruit followers?

    You can change your faction or recruit followers by talking to other characters and choosing the appropriate options. You can also use items or weapons to influence their opinions. However, be careful of the consequences of your actions, as some factions may become hostile or friendly towards you.

  4. How can I trigger events or challenges?

    You can trigger events or challenges by doing certain actions or visiting certain locations. For example, you can start a riot by attacking a police officer, or you can enter a zombie-infested area by breaking a barricade. Some events or challenges may be random, while others may be scripted.

  5. How can I use cheat codes or edit the game?

    You can use cheat codes or edit the game by upgrading to "infinitely" and accessing the options menu. You can then enter cheat codes such as "GOD" to become invincible, or "EDIT" to edit any character or faction. You can also change the game settings such as difficulty, violence, population, and more.

  6. Where can I find more information or support for Extra Lives APK?

    You can find more information or support for Extra Lives APK by visiting the developer's website at [MDickie.com] or contacting them at [Mat@MDickie.com]. You can also join the community of Extra Lives APK players on social media platforms such as Facebook, Twitter, YouTube, and Reddit.

    -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/3i2irg/SF-model/app.py b/spaces/3i2irg/SF-model/app.py deleted file mode 100644 index cac8ac145432f4dd19000d1ccabb1ac3003e166b..0000000000000000000000000000000000000000 --- a/spaces/3i2irg/SF-model/app.py +++ /dev/null @@ -1,169 +0,0 @@ -import torch -import requests -import torch.nn as nn -import torch.nn.functional as F - -# urls = ("https://raw.githubusercontent.com/g8j39/GPT/main/corpus.txt",) #"https://raw.githubusercontent.com/g8j39/GPT/main/merged_file.txt","https://raw.githubusercontent.com/g8j39/GPT/main/merged_file2.txt","https://raw.githubusercontent.com/g8j39/GPT/main/merged_file3.txt") ##https://raw.githubusercontent.com/g8j39/GPT/main/corpus.txt",) , "https://raw.githubusercontent.com/g8j39/GPT/main/merged_file.txt", "https://raw.githubusercontent.com/g8j39/GPT/main/friends_corpus.txt" -# raw_text = [] -# for i, url in enumerate(urls): -# r = requests.get(url) -# with open(f'{i}.txt', 'wb') as f: -# f.write(r.content) -# with open(f'{i}.txt', 'rb') as f: -# raw_text += f.readlines() -# raw_text = ''.join([t.decode('utf') for t in raw_text]) - -with open('corpus.txt', 'r') as f: - raw_text = f.read() - -# Remove double spaces -#raw_text = " ".join(raw_text.split()) -for char in ('=', '@', '^', '|', '\x7f'): - raw_text += char -for char in ('¿', 'à', 'è', 'é', 'í', 'ï', 'ñ', 'ó', 'ÿ', '–', '—', '…'): - raw_text = raw_text.replace(char,'') -raw_text+='\n' -raw_text+='\xa0' - -def prepare_data(input,encoding_type): - if encoding_type == 'single': - chars = sorted(list(set(input))) - n_vocab = len(chars) - stoi = { ch:i for i,ch in enumerate(chars) } - itos = { i:ch for i,ch in enumerate(chars) } - encode = lambda s: [stoi[c] for c in s] # encoder: take a string, output a list of integers - decode = lambda l: ''.join([itos[i] for i in l]) # decoder: take a list of integers, output a string - output = torch.tensor(encode(input), dtype = torch.long) - return output, encode, decode, n_vocab, stoi, itos - -text, encode, decode, n_vocab, stoi, itos = prepare_data(raw_text,encoding_type='single') - -max_iters = 10000 -eval_interval = 1000 -eval_iters = 100 -learning_rate = 1e-3 -device = 'cuda' if torch.cuda.is_available() else 'cpu' -train_split = 0.9 -d_batch = 150 -d_window = 140 -d_embd = 500 -d_mlp = 4 * d_embd -n_heads = 4 # Must divide d_embd -d_head = d_embd // n_heads -n_layers = 4 -dropout = 0.1 - -class AttnHead(nn.Module): - def __init__(self,mode): - super().__init__() - self.mode = mode - self.key = nn.Linear(d_embd, d_head, bias=False, device=device) - self.query = nn.Linear(d_embd, d_head, bias=False, device=device) - self.value = nn.Linear(d_embd, d_head, bias=False, device=device) - - def forward(self,x): - B,T,C = x.shape - k, q, v = self.key(x), self.query(x), self.value(x) - attn = (q @ k.transpose(1,2)) / (d_head**0.5) # (d_batch, T, T) - # apply mask - attn = attn.masked_fill(torch.tril(torch.ones(T,T).to(device))==0,float('-inf')) - attn = F.softmax(attn,dim=-1) - attn = attn @ v # (d_batch, T, d_head) - return attn - -class MultiHead(nn.Module): - def __init__(self,mode): - super().__init__() - self.mode = mode - self.heads = nn.ModuleList([AttnHead(mode) for _ in range(n_heads)]) - self.proj = nn.Linear(d_embd, d_embd) - self.dropout = nn.Dropout(dropout) - - def forward(self,x): - # apply the heads, concatenate and project - out = torch.cat([head(x) for head in self.heads],dim=-1) - out = self.dropout(out) - return out - -class PositionalEncoding(nn.Module): -# Create a unique vector in embedding space for each 
position - def __init__(self,mode,window): - super().__init__() - self.mode = mode - positions = torch.arange(window).unsqueeze(1) - div_term = torch.exp(torch.arange(0, d_embd, 2) * (-math.log(10000.0) / d_embd)) # (d_embd/2 = 64, starts at 1 and decays) - pe = torch.zeros(1, window, d_embd) - pe[0, :, 0::2] = torch.sin(positions * div_term) - pe[0, :, 1::2] = torch.cos(positions * div_term) - self.register_buffer('pe', pe) - - def forward(self, x): - return self.pe[:,:x.size(1)] - -class Tformer(nn.Module): - def __init__(self,mode): - super().__init__() - self.mode = mode - self.multihead = MultiHead(mode) - self.mlp = nn.Sequential( - nn.Linear(d_embd, d_mlp), - nn.ReLU(), - nn.Linear(d_mlp, d_embd), - nn.Dropout(dropout),) - self.ln1 = nn.LayerNorm(d_embd) - self.ln2 = nn.LayerNorm(d_embd) - def forward(self,x,y=None): - x = self.ln1(x) - x = x + self.multihead(x) - x = self.ln2(x) - out = x + self.mlp(x) - return out - -class LLM(nn.Module): - def __init__(self,mode='live',window=d_window): - super().__init__() - self.mode = mode - self.embed = nn.Embedding(n_vocab,d_embd) - self.pe = PositionalEncoding(mode,window) - self.blocks = nn.Sequential(*[Tformer(mode) for _ in range(n_layers)]) - self.unembed = nn.Linear(d_embd,n_vocab) - self.ln3 = nn.LayerNorm(d_embd) - - def forward(self,x,y=None): - B, T = x.shape - out = self.embed(x) # (d_batch, d_window, d_embd) - out = out + self.pe(out) - out = self.blocks(out) - out = self.ln3(out) - out = self.unembed(out) # (d_batch, d_window, n_vocab) - loss = None if y==None else F.cross_entropy(out.view(-1,n_vocab), y.view(-1)) - return out, loss - -@torch.inference_mode() - -def generate(length=500, input_text=' '): - encoded = torch.tensor(encode(input_text), dtype=torch.long).unsqueeze(0) # (1,len(input_text)) - encoded = encoded.to(device) - for _ in range(length): - encoded_curr = encoded[:, -d_window:] - y, _ = model(encoded_curr) # (1, len(input_text), n_vocab) - y = y[:, -1, :] - y_prob = F.softmax(y, dim=-1) - next = torch.multinomial(y_prob, num_samples=1) # (1,1) - encoded = torch.cat((encoded, next), dim=1) # (1, len(input_text)+1) - return decode(encoded.squeeze().tolist()[len(input_text):]) - -model = torch.load('final_finetuned_seinfeld.pt', map_location=torch.device('cpu')) -model.eval() - -import gradio as gr - -def generate_wrapper(input_text: str, length: int): - return generate(length=length, input_text=input_text) - -iface = gr.Interface( - fn=generate_wrapper, - inputs=[gr.inputs.Textbox(placeholder='Enter input text here...', label='Input text', default = 'KRAMER:'), gr.inputs.Slider(minimum=10, maximum=1000, step=10, default=100, label='Output length (characters)')], - outputs=gr.outputs.Textbox(), - live=False) -iface.launch() \ No newline at end of file diff --git a/spaces/7hao/bingo/src/lib/bots/bing/index.ts b/spaces/7hao/bingo/src/lib/bots/bing/index.ts deleted file mode 100644 index 2c4afae01a345b8415935228566cb30d695e768d..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/lib/bots/bing/index.ts +++ /dev/null @@ -1,421 +0,0 @@ -import { fetch, WebSocket, debug } from '@/lib/isomorphic' -import WebSocketAsPromised from 'websocket-as-promised' -import { - SendMessageParams, - BingConversationStyle, - ConversationResponse, - ChatResponseMessage, - ConversationInfo, - InvocationEventType, - ChatError, - ErrorCode, - ChatUpdateCompleteResponse, - ImageInfo, - KBlobResponse -} from './types' - -import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils' -import { WatchDog, 
createChunkDecoder } from '@/lib/utils' - -type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }> - -const OPTIONS_SETS = [ - 'nlu_direct_response_filter', - 'deepleo', - 'disable_emoji_spoken_text', - 'responsible_ai_policy_235', - 'enablemm', - 'iycapbing', - 'iyxapbing', - 'objopinion', - 'rweasgv2', - 'dagslnv1', - 'dv3sugg', - 'autosave', - 'iyoloxap', - 'iyoloneutral', - 'clgalileo', - 'gencontentv3', -] - -export class BingWebBot { - protected conversationContext?: ConversationInfo - protected cookie: string - protected ua: string - protected endpoint = '' - private lastText = '' - private asyncTasks: Array> = [] - - constructor(opts: { - cookie: string - ua: string - bingConversationStyle?: BingConversationStyle - conversationContext?: ConversationInfo - }) { - const { cookie, ua, conversationContext } = opts - this.cookie = cookie?.includes(';') ? cookie : `_EDGE_V=1; _U=${cookie}` - this.ua = ua - this.conversationContext = conversationContext - } - - static buildChatRequest(conversation: ConversationInfo) { - const optionsSets = OPTIONS_SETS - if (conversation.conversationStyle === BingConversationStyle.Precise) { - optionsSets.push('h3precise') - } else if (conversation.conversationStyle === BingConversationStyle.Creative) { - optionsSets.push('h3imaginative') - } - return { - arguments: [ - { - source: 'cib', - optionsSets, - allowedMessageTypes: [ - 'Chat', - 'InternalSearchQuery', - 'Disengaged', - 'InternalLoaderMessage', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - ], - sliceIds: [ - 'winmuid1tf', - 'anssupfor_c', - 'imgchatgptv2', - 'tts2cf', - 'contansperf', - 'mlchatpc8500w', - 'mlchatpc2', - 'ctrlworkpay', - 'winshortmsgtf', - 'cibctrl', - 'sydtransctrl', - 'sydconfigoptc', - '0705trt4', - '517opinion', - '628ajcopus0', - '330uaugs0', - '529rwea', - '0626snptrcs0', - '424dagslnv1', - ], - isStartOfSession: conversation.invocationId === 0, - message: { - author: 'user', - inputMethod: 'Keyboard', - text: conversation.prompt, - imageUrl: conversation.imageUrl, - messageType: 'Chat', - }, - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - participant: { id: conversation.clientId }, - }, - ], - invocationId: conversation.invocationId.toString(), - target: 'chat', - type: InvocationEventType.StreamInvocation, - } - } - - async createConversation(): Promise { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - - let resp: ConversationResponse | undefined - try { - const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' }) - if (response.status === 404) { - throw new ChatError('Not Found', ErrorCode.NOTFOUND_ERROR) - } - resp = await response.json() as ConversationResponse - } catch (err) { - console.error('create conversation error', err) - } - - if (!resp?.result) { - throw new ChatError('Invalid response', ErrorCode.UNKOWN_ERROR) - } - - const { value, message } = resp.result || {} - if (value !== 'Success') { - const errorMsg = `${value}: ${message}` - if (value === 'UnauthorizedRequest') { - throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED) - } - if (value === 'Forbidden') { - throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR) - } - return resp 
- } - - private async createContext(conversationStyle: BingConversationStyle) { - if (!this.conversationContext) { - const conversation = await this.createConversation() - this.conversationContext = { - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - clientId: conversation.clientId, - invocationId: 0, - conversationStyle, - prompt: '', - } - } - return this.conversationContext - } - - async sendMessage(params: Params) { - try { - await this.createContext(params.options.bingConversationStyle) - Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl }) - return this.sydneyProxy(params) - } catch (error) { - params.onEvent({ - type: 'ERROR', - error: error instanceof ChatError ? error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR), - }) - } - } - - private async sydneyProxy(params: Params) { - const abortController = new AbortController() - const response = await fetch(this.endpoint + '/api/sydney', { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - signal: abortController.signal, - body: JSON.stringify(this.conversationContext!) - }) - if (response.status !== 200) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Unknown error', - ErrorCode.UNKOWN_ERROR, - ), - }) - } - params.signal?.addEventListener('abort', () => { - abortController.abort() - }) - - const textDecoder = createChunkDecoder() - for await (const chunk of streamAsyncIterable(response.body!)) { - this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk))) - } - } - - async sendWs() { - const wsConfig: ConstructorParameters[1] = { - packMessage: websocketUtils.packMessage, - unpackMessage: websocketUtils.unpackMessage, - createWebSocket: (url) => new WebSocket(url, { - headers: { - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'User-Agent': this.ua, - pragma: 'no-cache', - cookie: this.cookie, - } - }) - } - const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig) - - wsp.open().then(() => { - wsp.sendPacked({ protocol: 'json', version: 1 }) - wsp.sendPacked({ type: 6 }) - wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!)) - }) - - return wsp - } - - private async useWs(params: Params) { - const wsp = await this.sendWs() - const watchDog = new WatchDog() - wsp.onUnpackedMessage.addListener((events) => { - watchDog.watch(() => { - wsp.sendPacked({ type: 6 }) - }) - this.parseEvents(params, events) - }) - - wsp.onClose.addListener(() => { - watchDog.reset() - params.onEvent({ type: 'DONE' }) - wsp.removeAllListeners() - }) - - params.signal?.addEventListener('abort', () => { - wsp.removeAllListeners() - wsp.close() - }) - } - - private async createImage(prompt: string, id: string) { - try { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - const query = new URLSearchParams({ - prompt, - id - }) - const response = await fetch(this.endpoint + '/api/image?' 
+ query.toString(), - { - method: 'POST', - headers, - mode: 'cors', - credentials: 'include' - }) - .then(res => res.text()) - if (response) { - this.lastText += '\n' + response - } - } catch (err) { - console.error('Create Image Error', err) - } - } - - private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) { - const imageInfo: ImageInfo = {} - let imageBase64: string | undefined = undefined - const knowledgeRequest = { - imageInfo, - knowledgeRequest: { - invokedSkills: [ - 'ImageById' - ], - subscriptionId: 'Bing.Chat.Multimodal', - invokedSkillsRequestData: { - enableFaceBlur: true - }, - convoData: { - convoid: this.conversationContext?.conversationId, - convotone: conversationStyle, - } - }, - } - - if (imageUrl.startsWith('data:image/')) { - imageBase64 = imageUrl.replace('data:image/', ''); - const partIndex = imageBase64.indexOf(',') - if (partIndex) { - imageBase64 = imageBase64.substring(partIndex + 1) - } - } else { - imageInfo.url = imageUrl - } - return { knowledgeRequest, imageBase64 } - } - - async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise { - if (!imageUrl) { - return - } - await this.createContext(conversationStyle) - const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle) - - const response = await fetch(this.endpoint + '/api/kblob', - { - headers: { - 'Content-Type': 'application/json', - }, - method: 'POST', - mode: 'cors', - credentials: 'include', - body: JSON.stringify(payload), - }) - .then(res => res.json()) - .catch(e => { - console.log('Error', e) - }) - return response - } - - private async generateContent(message: ChatResponseMessage) { - if (message.contentType === 'IMAGE') { - this.asyncTasks.push(this.createImage(message.text, message.messageId)) - } - } - - private async parseEvents(params: Params, events: any) { - const conversation = this.conversationContext! - - events?.forEach(async (event: ChatUpdateCompleteResponse) => { - debug('bing event', event) - if (event.type === 3) { - await Promise.all(this.asyncTasks) - this.asyncTasks = [] - params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } }) - params.onEvent({ type: 'DONE' }) - conversation.invocationId = parseInt(event.invocationId, 10) + 1 - } else if (event.type === 1) { - const messages = event.arguments[0].messages - if (messages) { - const text = convertMessageToMarkdown(messages[0]) - this.lastText = text - params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } }) - } - } else if (event.type === 2) { - const messages = event.item.messages as ChatResponseMessage[] | undefined - if (!messages) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - event.item.result.error || 'Unknown error', - event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT - : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? 
ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA) - : ErrorCode.UNKOWN_ERROR - ), - }) - return - } - const limited = messages.some((message) => - message.contentOrigin === 'TurnLimiter' - || message.messageType === 'Disengaged' - ) - if (limited) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Sorry, you have reached chat limit in this conversation.', - ErrorCode.CONVERSATION_LIMIT, - ), - }) - return - } - - const lastMessage = event.item.messages.at(-1) as ChatResponseMessage - const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE') - if (specialMessage) { - this.generateContent(specialMessage) - } - - if (lastMessage) { - const text = convertMessageToMarkdown(lastMessage) - this.lastText = text - params.onEvent({ - type: 'UPDATE_ANSWER', - data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions }, - }) - } - } - }) - } - - resetConversation() { - this.conversationContext = undefined - } -} diff --git a/spaces/ADOPLE/AdopleAI-ResumeAnalyzer/README.md b/spaces/ADOPLE/AdopleAI-ResumeAnalyzer/README.md deleted file mode 100644 index 913a7fdb441636d82fe1fa96337a6a3851580007..0000000000000000000000000000000000000000 --- a/spaces/ADOPLE/AdopleAI-ResumeAnalyzer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AdopleAIResumeAnalyser -emoji: 🏃 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIGC-Audio/AudioGPT/sound_extraction/utils/create_mixtures.py b/spaces/AIGC-Audio/AudioGPT/sound_extraction/utils/create_mixtures.py deleted file mode 100644 index 2b30d0d3b60b1e67940183812893bd323495cfea..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/sound_extraction/utils/create_mixtures.py +++ /dev/null @@ -1,98 +0,0 @@ -import torch -import numpy as np - -def add_noise_and_scale(front, noise, snr_l=0, snr_h=0, scale_lower=1.0, scale_upper=1.0): - """ - :param front: front-head audio, like vocal [samples,channel], will be normlized so any scale will be fine - :param noise: noise, [samples,channel], any scale - :param snr_l: Optional - :param snr_h: Optional - :param scale_lower: Optional - :param scale_upper: Optional - :return: scaled front and noise (noisy = front + noise), all_mel_e2e outputs are noramlized within [-1 , 1] - """ - snr = None - noise, front = normalize_energy_torch(noise), normalize_energy_torch(front) # set noise and vocal to equal range [-1,1] - # print("normalize:",torch.max(noise),torch.max(front)) - if snr_l is not None and snr_h is not None: - front, noise, snr = _random_noise(front, noise, snr_l=snr_l, snr_h=snr_h) # remix them with a specific snr - - noisy, noise, front = unify_energy_torch(noise + front, noise, front) # normalize noisy, noise and vocal energy into [-1,1] - - # print("unify:", torch.max(noise), torch.max(front), torch.max(noisy)) - scale = _random_scale(scale_lower, scale_upper) # random scale these three signal - - # print("Scale",scale) - noisy, noise, front = noisy * scale, noise * scale, front * scale # apply scale - # print("after scale", torch.max(noisy), torch.max(noise), torch.max(front), snr, scale) - - front, noise = _to_numpy(front), _to_numpy(noise) # [num_samples] - mixed_wav = front + noise - - return front, noise, mixed_wav, snr, scale - -def 
_random_scale(lower=0.3, upper=0.9): - return float(uniform_torch(lower, upper)) - -def _random_noise(clean, noise, snr_l=None, snr_h=None): - snr = uniform_torch(snr_l,snr_h) - clean_weight = 10 ** (float(snr) / 20) - return clean, noise/clean_weight, snr - -def _to_numpy(wav): - return np.transpose(wav, (1, 0))[0].numpy() # [num_samples] - -def normalize_energy(audio, alpha = 1): - ''' - :param audio: 1d waveform, [batchsize, *], - :param alpha: the value of output range from: [-alpha,alpha] - :return: 1d waveform which value range from: [-alpha,alpha] - ''' - val_max = activelev(audio) - return (audio / val_max) * alpha - -def normalize_energy_torch(audio, alpha = 1): - ''' - If the signal is almost empty(determined by threshold), if will only be divided by 2**15 - :param audio: 1d waveform, 2**15 - :param alpha: the value of output range from: [-alpha,alpha] - :return: 1d waveform which value range from: [-alpha,alpha] - ''' - val_max = activelev_torch([audio]) - return (audio / val_max) * alpha - -def unify_energy(*args): - max_amp = activelev(args) - mix_scale = 1.0/max_amp - return [x * mix_scale for x in args] - -def unify_energy_torch(*args): - max_amp = activelev_torch(args) - mix_scale = 1.0/max_amp - return [x * mix_scale for x in args] - -def activelev(*args): - ''' - need to update like matlab - ''' - return np.max(np.abs([*args])) - -def activelev_torch(*args): - ''' - need to update like matlab - ''' - res = [] - args = args[0] - for each in args: - res.append(torch.max(torch.abs(each))) - return max(res) - -def uniform_torch(lower, upper): - if(abs(lower-upper)<1e-5): - return upper - return (upper-lower)*torch.rand(1)+lower - -if __name__ == "__main__": - wav1 = torch.randn(1, 32000) - wav2 = torch.randn(1, 32000) - target, noise, snr, scale = add_noise_and_scale(wav1, wav2) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/metrics/ssim.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/metrics/ssim.py deleted file mode 100644 index cb8c6a47b14fbd450a6717a21236906d6de9679f..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/metrics/ssim.py +++ /dev/null @@ -1,84 +0,0 @@ -""" -Adapted from https://github.com/Po-Hsun-Su/pytorch-ssim -""" - -import torch -import torch.nn.functional as F -from torch.autograd import Variable -import numpy as np -from math import exp - - -def gaussian(window_size, sigma): - gauss = torch.Tensor([exp(-(x - window_size // 2) ** 2 / float(2 * sigma ** 2)) for x in range(window_size)]) - return gauss / gauss.sum() - - -def create_window(window_size, channel): - _1D_window = gaussian(window_size, 1.5).unsqueeze(1) - _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0) - window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous()) - return window - - -def _ssim(img1, img2, window, window_size, channel, size_average=True): - mu1 = F.conv2d(img1, window, padding=window_size // 2, groups=channel) - mu2 = F.conv2d(img2, window, padding=window_size // 2, groups=channel) - - mu1_sq = mu1.pow(2) - mu2_sq = mu2.pow(2) - mu1_mu2 = mu1 * mu2 - - sigma1_sq = F.conv2d(img1 * img1, window, padding=window_size // 2, groups=channel) - mu1_sq - sigma2_sq = F.conv2d(img2 * img2, window, padding=window_size // 2, groups=channel) - mu2_sq - sigma12 = F.conv2d(img1 * img2, window, padding=window_size // 2, groups=channel) - mu1_mu2 - - C1 = 0.01 ** 2 - C2 = 0.03 ** 2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + 
sigma2_sq + C2)) - - if size_average: - return ssim_map.mean() - else: - return ssim_map.mean(1) - - -class SSIM(torch.nn.Module): - def __init__(self, window_size=11, size_average=True): - super(SSIM, self).__init__() - self.window_size = window_size - self.size_average = size_average - self.channel = 1 - self.window = create_window(window_size, self.channel) - - def forward(self, img1, img2): - (_, channel, _, _) = img1.size() - - if channel == self.channel and self.window.data.type() == img1.data.type(): - window = self.window - else: - window = create_window(self.window_size, channel) - - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - - self.window = window - self.channel = channel - - return _ssim(img1, img2, window, self.window_size, channel, self.size_average) - - -window = None - - -def ssim(img1, img2, window_size=11, size_average=True): - (_, channel, _, _) = img1.size() - global window - if window is None: - window = create_window(window_size, channel) - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - return _ssim(img1, img2, window, window_size, channel, size_average) diff --git a/spaces/AILab-CVC/SEED-LLaMA/scripts/seed_llama_inference_8B.py b/spaces/AILab-CVC/SEED-LLaMA/scripts/seed_llama_inference_8B.py deleted file mode 100644 index 73b89609e10ff91761e76d0d09a19aafdc7eda7a..0000000000000000000000000000000000000000 --- a/spaces/AILab-CVC/SEED-LLaMA/scripts/seed_llama_inference_8B.py +++ /dev/null @@ -1,120 +0,0 @@ -import hydra - -import pyrootutils -import os -import torch - -from omegaconf import OmegaConf -import json -from typing import Optional -import transformers -from PIL import Image -from torchvision.transforms.functional import InterpolationMode - -pyrootutils.setup_root(__file__, indicator=".project-root", pythonpath=True) - -BOI_TOKEN = '' -EOI_TOKEN = '' -IMG_TOKEN = '' - -IMG_FLAG = '' -NUM_IMG_TOKNES = 32 -NUM_IMG_CODES = 8192 -image_id_shift = 32000 - - - - -def generate(tokenizer, input_tokens, generation_config, model): - - input_ids = tokenizer(input_tokens, add_special_tokens=False, return_tensors='pt').input_ids - input_ids = input_ids.to("cuda") - - generate_ids = model.generate( - input_ids=input_ids, - **generation_config - ) - generate_ids = generate_ids[0][input_ids.shape[1]:] - - return generate_ids - -def decode_image_text(generate_ids, tokenizer, save_path=None): - - boi_list = torch.where(generate_ids == tokenizer(BOI_TOKEN, add_special_tokens=False).input_ids[0])[0] - eoi_list = torch.where(generate_ids == tokenizer(EOI_TOKEN, add_special_tokens=False).input_ids[0])[0] - - if len(boi_list) == 0 and len(eoi_list) == 0: - text_ids = generate_ids - texts = tokenizer.decode(text_ids, skip_special_tokens=True) - print(texts) - - else: - boi_index = boi_list[0] - eoi_index = eoi_list[0] - - text_ids = generate_ids[:boi_index] - if len(text_ids) != 0: - texts = tokenizer.decode(text_ids, skip_special_tokens=True) - print(texts) - - image_ids = (generate_ids[boi_index+1:eoi_index] - image_id_shift).reshape(1,-1) - - images = tokenizer.decode_image(image_ids) - - images[0].save(save_path) - - -device = "cuda" - -tokenizer_cfg_path = 'configs/tokenizer/seed_llama_tokenizer.yaml' -tokenizer_cfg = OmegaConf.load(tokenizer_cfg_path) -tokenizer = hydra.utils.instantiate(tokenizer_cfg, device=device, load_diffusion=True) - -transform_cfg_path = 'configs/transform/clip_transform.yaml' -transform_cfg = OmegaConf.load(transform_cfg_path) -transform = 
hydra.utils.instantiate(transform_cfg) - -model_cfg = OmegaConf.load('configs/llm/seed_llama_8b.yaml') -model = hydra.utils.instantiate(model_cfg, torch_dtype=torch.float16) -model = model.eval().to(device) - -generation_config = { - 'temperature': 1.0, - 'num_beams': 1, - 'max_new_tokens': 512, - 'top_p': 0.5, - 'do_sample': True - } - -s_token = "USER:" -e_token = "ASSISTANT:" -sep = "\n" - - -### visual question answering -image_path = "images/cat.jpg" -image = Image.open(image_path).convert('RGB') -image_tensor = transform(image).to(device) -img_ids = tokenizer.encode_image(image_torch=image_tensor) -img_ids = img_ids.view(-1).cpu().numpy() -img_tokens = BOI_TOKEN + ''.join([IMG_TOKEN.format(item) for item in img_ids]) + EOI_TOKEN - -question = "What is this animal?" - -input_tokens = tokenizer.bos_token + s_token + " " + img_tokens + question + sep + e_token -generate_ids = generate(tokenizer, input_tokens, generation_config, model) -decode_image_text(generate_ids, tokenizer) - -### text-to-image generation -prompt = "Can you generate an image of a dog on the green grass?" -input_tokens = tokenizer.bos_token + s_token + " " + prompt + sep + e_token -generate_ids = generate(tokenizer, input_tokens, generation_config, model) -save_path = 'dog.jpg' -decode_image_text(generate_ids, tokenizer, save_path) - -### multimodal prompt image generation -instruction = "Can you make the cat wear sunglasses?" -input_tokens = tokenizer.bos_token + s_token + " " + img_tokens + instruction + sep + e_token -generate_ids = generate(tokenizer, input_tokens, generation_config, model) -save_path = 'cat_sunglasses.jpg' -decode_image_text(generate_ids, tokenizer, save_path) \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py deleted file mode 100644 index 305034132bdff5b508995ff39d900b03b6df5679..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py +++ /dev/null @@ -1,292 +0,0 @@ -_base_ = ['../_base_/default_runtime.py', '../_base_/det_p5_tta.py'] - -# ========================Frequently modified parameters====================== -# -----data related----- -data_root = 'data/coco/' # Root path of data -# Path of train annotation file -train_ann_file = 'annotations/instances_train2017.json' -train_data_prefix = 'train2017/' # Prefix of train image path -# Path of val annotation file -val_ann_file = 'annotations/instances_val2017.json' -val_data_prefix = 'val2017/' # Prefix of val image path - -num_classes = 80 # Number of classes for classification -# Batch size of a single GPU during training -train_batch_size_per_gpu = 16 -# Worker to pre-fetch data for each single GPU during training -train_num_workers = 8 -# persistent_workers must be False if num_workers is 0 -persistent_workers = True - -# -----model related----- -# Basic size of multi-scale prior box -anchors = [ - [(10, 13), (16, 30), (33, 23)], # P3/8 - [(30, 61), (62, 45), (59, 119)], # P4/16 - [(116, 90), (156, 198), (373, 326)] # P5/32 -] - -# -----train val related----- -# Base learning rate for optim_wrapper. 
Corresponding to 8xb16=128 bs -base_lr = 0.01 -max_epochs = 300 # Maximum training epochs - -model_test_cfg = dict( - # The config of multi-label for multi-class prediction. - multi_label=True, - # The number of boxes before NMS - nms_pre=30000, - score_thr=0.001, # Threshold to filter out boxes. - nms=dict(type='nms', iou_threshold=0.65), # NMS type and threshold - max_per_img=300) # Max number of detections of each image - -# ========================Possible modified parameters======================== -# -----data related----- -img_scale = (640, 640) # width, height -# Dataset type, this will be used to define the dataset -dataset_type = 'YOLOv5CocoDataset' -# Batch size of a single GPU during validation -val_batch_size_per_gpu = 1 -# Worker to pre-fetch data for each single GPU during validation -val_num_workers = 2 - -# Config of batch shapes. Only on val. -# It means not used if batch_shapes_cfg is None. -batch_shapes_cfg = dict( - type='BatchShapePolicy', - batch_size=val_batch_size_per_gpu, - img_size=img_scale[0], - # The image scale of padding should be divided by pad_size_divisor - size_divisor=32, - # Additional paddings for pixel scale - extra_pad_ratio=0.5) - -# -----model related----- -# The scaling factor that controls the depth of the network structure -deepen_factor = 0.33 -# The scaling factor that controls the width of the network structure -widen_factor = 0.5 -# Strides of multi-scale prior box -strides = [8, 16, 32] -num_det_layers = 3 # The number of model output scales -norm_cfg = dict(type='BN', momentum=0.03, eps=0.001) # Normalization config - -# -----train val related----- -affine_scale = 0.5 # YOLOv5RandomAffine scaling ratio -loss_cls_weight = 0.5 -loss_bbox_weight = 0.05 -loss_obj_weight = 1.0 -prior_match_thr = 4. # Priori box matching threshold -# The obj loss weights of the three output layers -obj_level_weights = [4., 1., 0.4] -lr_factor = 0.01 # Learning rate scaling factor -weight_decay = 0.0005 -# Save model checkpoint and validation intervals -save_checkpoint_intervals = 10 -# The maximum checkpoints to keep. -max_keep_ckpts = 3 -# Single-scale training is recommended to -# be turned on, which can speed up training. 
-env_cfg = dict(cudnn_benchmark=True) - -# ===============================Unmodified in most cases==================== -model = dict( - type='YOLODetector', - data_preprocessor=dict( - type='mmdet.DetDataPreprocessor', - mean=[0., 0., 0.], - std=[255., 255., 255.], - bgr_to_rgb=True), - backbone=dict( - type='YOLOv5CSPDarknet', - deepen_factor=deepen_factor, - widen_factor=widen_factor, - norm_cfg=norm_cfg, - act_cfg=dict(type='SiLU', inplace=True)), - neck=dict( - type='YOLOv5PAFPN', - deepen_factor=deepen_factor, - widen_factor=widen_factor, - in_channels=[256, 512, 1024], - out_channels=[256, 512, 1024], - num_csp_blocks=3, - norm_cfg=norm_cfg, - act_cfg=dict(type='SiLU', inplace=True)), - bbox_head=dict( - type='YOLOv5Head', - head_module=dict( - type='YOLOv5HeadModule', - num_classes=num_classes, - in_channels=[256, 512, 1024], - widen_factor=widen_factor, - featmap_strides=strides, - num_base_priors=3), - prior_generator=dict( - type='mmdet.YOLOAnchorGenerator', - base_sizes=anchors, - strides=strides), - # scaled based on number of detection layers - loss_cls=dict( - type='mmdet.CrossEntropyLoss', - use_sigmoid=True, - reduction='mean', - loss_weight=loss_cls_weight * - (num_classes / 80 * 3 / num_det_layers)), - loss_bbox=dict( - type='IoULoss', - iou_mode='ciou', - bbox_format='xywh', - eps=1e-7, - reduction='mean', - loss_weight=loss_bbox_weight * (3 / num_det_layers), - return_iou=True), - loss_obj=dict( - type='mmdet.CrossEntropyLoss', - use_sigmoid=True, - reduction='mean', - loss_weight=loss_obj_weight * - ((img_scale[0] / 640)**2 * 3 / num_det_layers)), - prior_match_thr=prior_match_thr, - obj_level_weights=obj_level_weights), - test_cfg=model_test_cfg) - -albu_train_transforms = [ - dict(type='Blur', p=0.01), - dict(type='MedianBlur', p=0.01), - dict(type='ToGray', p=0.01), - dict(type='CLAHE', p=0.01) -] - -pre_transform = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict(type='LoadAnnotations', with_bbox=True) -] - -train_pipeline = [ - *pre_transform, - dict( - type='Mosaic', - img_scale=img_scale, - pad_val=114.0, - pre_transform=pre_transform), - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_shear_degree=0.0, - scaling_ratio_range=(1 - affine_scale, 1 + affine_scale), - # img_scale is (width, height) - border=(-img_scale[0] // 2, -img_scale[1] // 2), - border_val=(114, 114, 114)), - dict( - type='mmdet.Albu', - transforms=albu_train_transforms, - bbox_params=dict( - type='BboxParams', - format='pascal_voc', - label_fields=['gt_bboxes_labels', 'gt_ignore_flags']), - keymap={ - 'img': 'image', - 'gt_bboxes': 'bboxes' - }), - dict(type='YOLOv5HSVRandomAug'), - dict(type='mmdet.RandomFlip', prob=0.5), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', - 'flip_direction')) -] - -train_dataloader = dict( - batch_size=train_batch_size_per_gpu, - num_workers=train_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - sampler=dict(type='DefaultSampler', shuffle=True), - dataset=dict( - type=dataset_type, - data_root=data_root, - ann_file=train_ann_file, - data_prefix=dict(img=train_data_prefix), - filter_cfg=dict(filter_empty_gt=False, min_size=32), - pipeline=train_pipeline)) - -test_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict(type='YOLOv5KeepRatioResize', scale=img_scale), - dict( - type='LetterResize', - scale=img_scale, - allow_scale_up=False, - pad_val=dict(img=114)), - 
dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor', 'pad_param')) -] - -val_dataloader = dict( - batch_size=val_batch_size_per_gpu, - num_workers=val_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=dict( - type=dataset_type, - data_root=data_root, - test_mode=True, - data_prefix=dict(img=val_data_prefix), - ann_file=val_ann_file, - pipeline=test_pipeline, - batch_shapes_cfg=batch_shapes_cfg)) - -test_dataloader = val_dataloader - -param_scheduler = None -optim_wrapper = dict( - type='OptimWrapper', - optimizer=dict( - type='SGD', - lr=base_lr, - momentum=0.937, - weight_decay=weight_decay, - nesterov=True, - batch_size_per_gpu=train_batch_size_per_gpu), - constructor='YOLOv5OptimizerConstructor') - -default_hooks = dict( - param_scheduler=dict( - type='YOLOv5ParamSchedulerHook', - scheduler_type='linear', - lr_factor=lr_factor, - max_epochs=max_epochs), - checkpoint=dict( - type='CheckpointHook', - interval=save_checkpoint_intervals, - save_best='auto', - max_keep_ckpts=max_keep_ckpts)) - -custom_hooks = [ - dict( - type='EMAHook', - ema_type='ExpMomentumEMA', - momentum=0.0001, - update_buffers=True, - strict_load=False, - priority=49) -] - -val_evaluator = dict( - type='mmdet.CocoMetric', - proposal_nums=(100, 1, 10), - ann_file=data_root + val_ann_file, - metric='bbox') -test_evaluator = val_evaluator - -train_cfg = dict( - type='EpochBasedTrainLoop', - max_epochs=max_epochs, - val_interval=save_checkpoint_intervals) -val_cfg = dict(type='ValLoop') -test_cfg = dict(type='TestLoop') diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnetv1c50_8xb32_in1k.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnetv1c50_8xb32_in1k.py deleted file mode 100644 index aa1c8b6475ce373f4a35123a72e31419b87027c0..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnetv1c50_8xb32_in1k.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/resnetv1c50.py', - '../_base_/datasets/imagenet_bs32_pil_resize.py', - '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py' -] diff --git a/spaces/AchyuthGamer/NeonAI-Chat-UI/neon.ai.py b/spaces/AchyuthGamer/NeonAI-Chat-UI/neon.ai.py deleted file mode 100644 index 63c75faaf3f69513d1d55b22a3d60600cec7a888..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/NeonAI-Chat-UI/neon.ai.py +++ /dev/null @@ -1,37 +0,0 @@ -import random -import gradio as gr -import openai - -openai.api_type = "azure" -openai.api_base = "https://hrangaopenaillm.openai.azure.com" -openai.api_version = "2023-03-15-preview" -openai.api_key = "e951b48da7c548e18af601a15cb6aefa" - - -def gptresponse(message, history): - system_prompt = "You are OpenGPT chatbot developed by Achyuth to help people. Your developer is 13 years old and a young programmer." 
- - messages = [{"role":"system","content":system_prompt}] - for human, assistant in history: - messages.append({"role":"user", "content":human}) - messages.append({"role":"assistant", "content":assistant}) - - if message != '': - messages.append({"role":"user", "content":message}) - - response = openai.ChatCompletion.create(engine = "NGA_AI_ASSISTANT", - messages = messages, - temperature =0.7, - max_tokens = 4000, - top_p = 0.95, - frequency_penalty = 0, - presence_penalty = 0, - stop = None) - - return response["choices"][0]["message"]["content"] - -title = "NeonAI Chat✨" - -gr.HTML(title) - -gr.ChatInterface(gptresponse, title=title).launch() \ No newline at end of file diff --git a/spaces/AdVisual/MaskCut/predict.py b/spaces/AdVisual/MaskCut/predict.py deleted file mode 100644 index fe41f70fea9bbe97d4caeaeeadb707b55962a9d6..0000000000000000000000000000000000000000 --- a/spaces/AdVisual/MaskCut/predict.py +++ /dev/null @@ -1,27 +0,0 @@ -import base64 -from io import BytesIO -from PIL import Image -import numpy as np -from model import Model - -def predict(package, image_base64: str, threshold: float, num_objects: int): - # Decode the image from base64 to PIL.Image - # We use BytesIO to convert the base64 to bytes - base64_split = image_base64.split(',')[1] - buf = BytesIO(base64.b64decode(base64_split)) - - image = Image.open(buf) - - # Get the image path from tmp_image - canvas = Image.new('RGB', image.size, (0, 0, 0)) - - # We copy the image that and fill it with black, to get the dimensions - rgb = np.array(canvas) - model : Model = package.get('model') - masks = model(image, threshold, num_objects) - - for mask in masks: - fg = mask > 0.5 - rgb[fg] = 255 - - return Image.fromarray(rgb) \ No newline at end of file diff --git a/spaces/Adeeb-F/AI-Genrated-Image-Detector/app.py b/spaces/Adeeb-F/AI-Genrated-Image-Detector/app.py deleted file mode 100644 index 4d413c4f03726ca1506f9b2dc42de6164ead8eba..0000000000000000000000000000000000000000 --- a/spaces/Adeeb-F/AI-Genrated-Image-Detector/app.py +++ /dev/null @@ -1,30 +0,0 @@ -import gradio as gr -import numpy as np -from tensorflow.keras.models import load_model - - -model = load_model("large_model_3lakh_v1.h5") - -title = '🧠 AI FORGED IMAGE DETECTOR' - -description = 'THROUGH THIS APPLICATION YOU CAN INPUT AN IMAGE AND THE WEBSITE WILL TELL WHETHER THE IMAGE IS AI GENERATED OR NOT.' 
-list_num = [0, 1] -#0 is fake 1 is true - -def closest(lst, K): - return lst[min(range(len(lst)), key=lambda i: abs(lst[i] - K))] -def hell(image): - pred = model.predict(np.expand_dims(image / 255, 0)) - result = closest(list_num, pred[0]) - if result == 0: - return "The image is generated by AI" - if result == 1: - return "The Image is not generated by AI" - -demo = gr.Interface(fn=hell, inputs=[gr.Image(shape=(256,256))], outputs=["text"], - # Pass through title and description - title=title, description=description, - # Set theme and launch parameters - theme='finlaymacklon/boxy_violet') - -demo.launch() \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/__init__.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/__init__.py deleted file mode 100644 index 439ea54ccefeb1dcc33447f8b4a95e1d0bdf2c76..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -from agentverse.registry import Registry - -decision_maker_registry = Registry(name="DecisionMakerRegistry") - -from .base import BaseDecisionMaker, DummyDecisionMaker -from .horizontal import HorizontalDecisionMaker -from .vertical import VerticalDecisionMaker -from .dynamic import DynamicDecisionMaker -from .vertical_solver_first import VerticalSolverFirstDecisionMaker -from .concurrent import ConcurrentDecisionMaker -from .horizontal_tool import HorizontalToolDecisionMaker -from .central import CentralDecisionMaker -from .brainstorming import BrainstormingDecisionMaker diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/board/match/GetAllMatch.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/board/match/GetAllMatch.js deleted file mode 100644 index 14d32fba5ca13a13f3c2188e75e17e6818d3b159..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/board/match/GetAllMatch.js +++ /dev/null @@ -1,35 +0,0 @@ -import RefreshSymbolCache from './RefreshSymbolCache.js'; -import GetMatchN from './GetMatchN.js'; - -const SetStruct = Phaser.Structs.Set; -var GetAllMatch = function () { - RefreshSymbolCache.call(this) // only refresh symbol cache once - // Get match5, match4, match3 - var self = this; - var matchLines = []; - for (var n = 5; n >= 3; n--) { - GetMatchN.call(this, n, function (result, board) { - var newSet = new SetStruct(board.tileXYArrayToChessArray(result.tileXY, self.chessTileZ)); - for (var i = 0, cnt = matchLines.length; i < cnt; i++) { - if (subSetTest(matchLines[i], newSet)) { - return; // not a new set - } - } - matchLines.push(newSet); - }); - } - return matchLines; -} - -var subSetTest = function (setA, setB) { - // Return true if setB is a subset of setA - var itemsA = setA.entries; - for (var i = 0, cnt = itemsA.length; i < cnt; i++) { - if (!setB.contains(itemsA[i])) { - return false; - } - } - return true; -}; - -export default GetAllMatch; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridbuttons/GridButtons.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridbuttons/GridButtons.d.ts deleted file mode 100644 index 99ee01cdb6e3ab3b43f70e61e26bdd25f0728af8..0000000000000000000000000000000000000000 --- 
a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridbuttons/GridButtons.d.ts +++ /dev/null @@ -1,101 +0,0 @@ -// import * as Phaser from 'phaser'; -import GridSizer from '../gridsizer/GridSizer'; -import { IConfig as IConfigButtons } from '../utils/buttongroup/Buttons'; - - -export default GridButtons; - -declare namespace GridButtons { - type CreateCellContainerCallbackType = ( - scene: Phaser.Scene, - x: number, y: number, - config: { - column?: number, row?: number, - - align?: GridSizer.AlignTypes, - padding?: GridSizer.PaddingTypes, - expand?: boolean, - key?: string - } - ) => Phaser.GameObjects.GameObject; - - interface IConfig extends GridSizer.IConfig, IConfigButtons { - background?: Phaser.GameObjects.GameObject, - - buttons?: Phaser.GameObjects.GameObject[][], - createCellContainerCallback?: CreateCellContainerCallbackType - } -} - -declare class GridButtons extends GridSizer { - constructor( - scene: Phaser.Scene, - config?: GridButtons.IConfig - ); - - emitButtonClick( - index: number | Phaser.GameObjects.GameObject - ): this; - - setButtonEnable( - index?: number | Phaser.GameObjects.GameObject | boolean, - enable?: boolean - ): this; - - toggleButtonEnable( - index?: number | Phaser.GameObjects.GameObject - ): this; - - getButtonEnable( - index: number | Phaser.GameObjects.GameObject - ): boolean; - - getButton( - index: number - ): Phaser.GameObjects.GameObject | null; - - addButton( - gameObject: Phaser.GameObjects.GameObject - ): this; - - removeButton( - gameObject: Phaser.GameObjects.GameObject, - destroyChild?: boolean - ): this; - - clearButtons( - destroyChild?: boolean - ): this; - - showButton( - index: number | Phaser.GameObjects.GameObject - ): this; - - hideButton( - index: number | Phaser.GameObjects.GameObject - ): this; - - forEachButtton( - callback: (button: Phaser.GameObjects.GameObject, index: number, buttons: Phaser.GameObjects.GameObject[]) => void, - scop?: unknown - ): this; - - readonly buttons: Phaser.GameObjects.GameObject[]; - - value: unknown; - - setSelectedButtonName( - name: string - ): this; - - getSelectedButtonName(): string; - - setButtonState( - name: string, - state?: boolean - ): this; - - getButtonState( - name: string - ): boolean; -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/utils/CreateChildren.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/utils/CreateChildren.js deleted file mode 100644 index a9f3d75f8875d22173f6460db2fcafcb9ed2b388..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/utils/CreateChildren.js +++ /dev/null @@ -1,26 +0,0 @@ -import CreateChild from './CreateChild.js'; - -var CreateChildren = function (scene, data, subKey, view, styles, customBuilders) { - var childData = data[subKey]; - if (!childData) { - return undefined; - } - - if (Array.isArray(childData)) { - for (var i = 0, cnt = childData.length; i < cnt; i++) { - if (Array.isArray(childData[i])) { // Nested array - CreateChildren(scene, childData, i, view, styles, customBuilders); - } else { - CreateChild(scene, childData, i, view, styles, customBuilders); - } - } - } else { - for (var key in childData) { - CreateChild(scene, childData, key, view, styles, customBuilders); - } - } - - return childData; -} - -export default CreateChildren; \ No newline at end of file diff --git a/spaces/Alpaca233/SadTalker/src/utils/model2safetensor.py 
b/spaces/Alpaca233/SadTalker/src/utils/model2safetensor.py deleted file mode 100644 index 50c485000d43ba9c230a0bc64ce8aeaaec6e2b29..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/utils/model2safetensor.py +++ /dev/null @@ -1,141 +0,0 @@ -import torch -import yaml -import os - -import safetensors -from safetensors.torch import save_file -from yacs.config import CfgNode as CN -import sys - -sys.path.append('/apdcephfs/private_shadowcun/SadTalker') - -from src.face3d.models import networks - -from src.facerender.modules.keypoint_detector import HEEstimator, KPDetector -from src.facerender.modules.mapping import MappingNet -from src.facerender.modules.generator import OcclusionAwareGenerator, OcclusionAwareSPADEGenerator - -from src.audio2pose_models.audio2pose import Audio2Pose -from src.audio2exp_models.networks import SimpleWrapperV2 -from src.test_audio2coeff import load_cpk - -size = 256 -############ face vid2vid -config_path = os.path.join('src', 'config', 'facerender.yaml') -current_root_path = '.' - -path_of_net_recon_model = os.path.join(current_root_path, 'checkpoints', 'epoch_20.pth') -net_recon = networks.define_net_recon(net_recon='resnet50', use_last_fc=False, init_path='') -checkpoint = torch.load(path_of_net_recon_model, map_location='cpu') -net_recon.load_state_dict(checkpoint['net_recon']) - -with open(config_path) as f: - config = yaml.safe_load(f) - -generator = OcclusionAwareSPADEGenerator(**config['model_params']['generator_params'], - **config['model_params']['common_params']) -kp_extractor = KPDetector(**config['model_params']['kp_detector_params'], - **config['model_params']['common_params']) -he_estimator = HEEstimator(**config['model_params']['he_estimator_params'], - **config['model_params']['common_params']) -mapping = MappingNet(**config['model_params']['mapping_params']) - -def load_cpk_facevid2vid(checkpoint_path, generator=None, discriminator=None, - kp_detector=None, he_estimator=None, optimizer_generator=None, - optimizer_discriminator=None, optimizer_kp_detector=None, - optimizer_he_estimator=None, device="cpu"): - - checkpoint = torch.load(checkpoint_path, map_location=torch.device(device)) - if generator is not None: - generator.load_state_dict(checkpoint['generator']) - if kp_detector is not None: - kp_detector.load_state_dict(checkpoint['kp_detector']) - if he_estimator is not None: - he_estimator.load_state_dict(checkpoint['he_estimator']) - if discriminator is not None: - try: - discriminator.load_state_dict(checkpoint['discriminator']) - except: - print ('No discriminator in the state-dict. Dicriminator will be randomly initialized') - if optimizer_generator is not None: - optimizer_generator.load_state_dict(checkpoint['optimizer_generator']) - if optimizer_discriminator is not None: - try: - optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator']) - except RuntimeError as e: - print ('No discriminator optimizer in the state-dict. 
Optimizer will be not initialized') - if optimizer_kp_detector is not None: - optimizer_kp_detector.load_state_dict(checkpoint['optimizer_kp_detector']) - if optimizer_he_estimator is not None: - optimizer_he_estimator.load_state_dict(checkpoint['optimizer_he_estimator']) - - return checkpoint['epoch'] - - -def load_cpk_facevid2vid_safetensor(checkpoint_path, generator=None, - kp_detector=None, he_estimator=None, - device="cpu"): - - checkpoint = safetensors.torch.load_file(checkpoint_path) - - if generator is not None: - x_generator = {} - for k,v in checkpoint.items(): - if 'generator' in k: - x_generator[k.replace('generator.', '')] = v - generator.load_state_dict(x_generator) - if kp_detector is not None: - x_generator = {} - for k,v in checkpoint.items(): - if 'kp_extractor' in k: - x_generator[k.replace('kp_extractor.', '')] = v - kp_detector.load_state_dict(x_generator) - if he_estimator is not None: - x_generator = {} - for k,v in checkpoint.items(): - if 'he_estimator' in k: - x_generator[k.replace('he_estimator.', '')] = v - he_estimator.load_state_dict(x_generator) - - return None - -free_view_checkpoint = '/apdcephfs/private_shadowcun/SadTalker/checkpoints/facevid2vid_'+str(size)+'-model.pth.tar' -load_cpk_facevid2vid(free_view_checkpoint, kp_detector=kp_extractor, generator=generator, he_estimator=he_estimator) - -wav2lip_checkpoint = os.path.join(current_root_path, 'checkpoints', 'wav2lip.pth') - -audio2pose_checkpoint = os.path.join(current_root_path, 'checkpoints', 'auido2pose_00140-model.pth') -audio2pose_yaml_path = os.path.join(current_root_path, 'src', 'config', 'auido2pose.yaml') - -audio2exp_checkpoint = os.path.join(current_root_path, 'checkpoints', 'auido2exp_00300-model.pth') -audio2exp_yaml_path = os.path.join(current_root_path, 'src', 'config', 'auido2exp.yaml') - -fcfg_pose = open(audio2pose_yaml_path) -cfg_pose = CN.load_cfg(fcfg_pose) -cfg_pose.freeze() -audio2pose_model = Audio2Pose(cfg_pose, wav2lip_checkpoint) -audio2pose_model.eval() -load_cpk(audio2pose_checkpoint, model=audio2pose_model, device='cpu') - -# load audio2exp_model -netG = SimpleWrapperV2() -netG.eval() -load_cpk(audio2exp_checkpoint, model=netG, device='cpu') - -class SadTalker(torch.nn.Module): - def __init__(self, kp_extractor, generator, netG, audio2pose, face_3drecon): - super(SadTalker, self).__init__() - self.kp_extractor = kp_extractor - self.generator = generator - self.audio2exp = netG - self.audio2pose = audio2pose - self.face_3drecon = face_3drecon - - -model = SadTalker(kp_extractor, generator, netG, audio2pose_model, net_recon) - -# here, we want to convert it to safetensor -save_file(model.state_dict(), "checkpoints/SadTalker_V0.0.2_"+str(size)+".safetensors") - -### test -load_cpk_facevid2vid_safetensor('checkpoints/SadTalker_V0.0.2_'+str(size)+'.safetensors', kp_detector=kp_extractor, generator=generator, he_estimator=None) \ No newline at end of file diff --git a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/queue.h b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/queue.h deleted file mode 100644 index a21f3446e06b5826af7b554c8a7d9c5d80848b62..0000000000000000000000000000000000000000 --- a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/queue.h +++ /dev/null @@ -1,216 +0,0 @@ -#pragma once - -#include -#include -#include // [[since C++14]]: std::exchange -#include -#include -#include -#include -#include -#include -#include // assert - -#include "libipc/def.h" -#include "libipc/shm.h" -#include 
"libipc/rw_lock.h" - -#include "libipc/utility/log.h" -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" - -namespace ipc { -namespace detail { - -class queue_conn { -protected: - circ::cc_t connected_ = 0; - shm::handle elems_h_; - - template - Elems* open(char const * name) { - if (name == nullptr || name[0] == '\0') { - ipc::error("fail open waiter: name is empty!\n"); - return nullptr; - } - if (!elems_h_.acquire(name, sizeof(Elems))) { - return nullptr; - } - auto elems = static_cast(elems_h_.get()); - if (elems == nullptr) { - ipc::error("fail acquire elems: %s\n", name); - return nullptr; - } - elems->init(); - return elems; - } - - void close() { - elems_h_.release(); - } - -public: - queue_conn() = default; - queue_conn(const queue_conn&) = delete; - queue_conn& operator=(const queue_conn&) = delete; - - bool connected() const noexcept { - return connected_ != 0; - } - - circ::cc_t connected_id() const noexcept { - return connected_; - } - - template - auto connect(Elems* elems) noexcept - /*needs 'optional' here*/ - -> std::tuple().cursor())> { - if (elems == nullptr) return {}; - // if it's already connected, just return - if (connected()) return {connected(), false, 0}; - connected_ = elems->connect_receiver(); - return {connected(), true, elems->cursor()}; - } - - template - bool disconnect(Elems* elems) noexcept { - if (elems == nullptr) return false; - // if it's already disconnected, just return false - if (!connected()) return false; - elems->disconnect_receiver(std::exchange(connected_, 0)); - return true; - } -}; - -template -class queue_base : public queue_conn { - using base_t = queue_conn; - -public: - using elems_t = Elems; - using policy_t = typename elems_t::policy_t; - -protected: - elems_t * elems_ = nullptr; - decltype(std::declval().cursor()) cursor_ = 0; - bool sender_flag_ = false; - -public: - using base_t::base_t; - - queue_base() = default; - - explicit queue_base(char const * name) - : queue_base{} { - elems_ = open(name); - } - - explicit queue_base(elems_t * elems) noexcept - : queue_base{} { - assert(elems != nullptr); - elems_ = elems; - } - - /* not virtual */ ~queue_base() { - base_t::close(); - } - - elems_t * elems() noexcept { return elems_; } - elems_t const * elems() const noexcept { return elems_; } - - bool ready_sending() noexcept { - if (elems_ == nullptr) return false; - return sender_flag_ || (sender_flag_ = elems_->connect_sender()); - } - - void shut_sending() noexcept { - if (elems_ == nullptr) return; - if (!sender_flag_) return; - elems_->disconnect_sender(); - } - - bool connect() noexcept { - auto tp = base_t::connect(elems_); - if (std::get<0>(tp) && std::get<1>(tp)) { - cursor_ = std::get<2>(tp); - return true; - } - return std::get<0>(tp); - } - - bool disconnect() noexcept { - return base_t::disconnect(elems_); - } - - std::size_t conn_count() const noexcept { - return (elems_ == nullptr) ? static_cast(invalid_value) : elems_->conn_count(); - } - - bool valid() const noexcept { - return elems_ != nullptr; - } - - bool empty() const noexcept { - return !valid() || (cursor_ == elems_->cursor()); - } - - template - bool push(F&& prep, P&&... params) { - if (elems_ == nullptr) return false; - return elems_->push(this, [&](void* p) { - if (prep(p)) ::new (p) T(std::forward

<P>(params)...); - }); - } - - template <typename T, typename F, typename... P> - bool force_push(F&& prep, P&&... params) { - if (elems_ == nullptr) return false; - return elems_->force_push(this, [&](void* p) { - if (prep(p)) ::new (p) T(std::forward

<P>(params)...); - }); - } - - template <typename T, typename F> - bool pop(T& item, F&& out) { - if (elems_ == nullptr) { - return false; - } - return elems_->pop(this, &(this->cursor_), [&item](void* p) { - ::new (&item) T(std::move(*static_cast<T*>(p))); - }, std::forward<F>(out)); - } -}; - -} // namespace detail - -template -class queue final : public detail::queue_base> { - using base_t = detail::queue_base>; - -public: - using value_t = T; - - using base_t::base_t; - - template <typename... P> - bool push(P&&... params) { - return base_t::template push<T>(std::forward

<P>(params)...); - } - - template <typename... P> - bool force_push(P&&... params) { - return base_t::template force_push<T>(std::forward<P>

(params)...); - } - - bool pop(T& item) { - return base_t::pop(item, [](bool) {}); - } - - template - bool pop(T& item, F&& out) { - return base_t::pop(item, std::forward(out)); - } -}; - -} // namespace ipc diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/pndm.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/pndm.md deleted file mode 100644 index 6670914b7ac0a0fd77224b06805fed2e463866e4..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/pndm.md +++ /dev/null @@ -1,20 +0,0 @@ - - -# Pseudo numerical methods for diffusion models (PNDM) - -## Overview - -Original implementation can be found [here](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181). - -## PNDMScheduler -[[autodoc]] PNDMScheduler \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/__init__.py deleted file mode 100644 index 97683885aac984ed0f7050d8524a63ff2c367f6c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -from dataclasses import dataclass -from typing import List, Optional, Union - -import numpy as np -import torch - -from ...utils import BaseOutput, OptionalDependencyNotAvailable, is_torch_available, is_transformers_available - - -@dataclass -class TextToVideoSDPipelineOutput(BaseOutput): - """ - Output class for text-to-video pipelines. - - Args: - frames (`List[np.ndarray]` or `torch.FloatTensor`) - List of denoised frames (essentially images) as NumPy arrays of shape `(height, width, num_channels)` or as - a `torch` tensor. The length of the list denotes the video length (the number of frames). - """ - - frames: Union[List[np.ndarray], torch.FloatTensor] - - -try: - if not (is_transformers_available() and is_torch_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils.dummy_torch_and_transformers_objects import * # noqa F403 -else: - from .pipeline_text_to_video_synth import TextToVideoSDPipeline - from .pipeline_text_to_video_synth_img2img import VideoToVideoSDPipeline # noqa: F401 - from .pipeline_text_to_video_zero import TextToVideoZeroPipeline diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/fixtures/custom_pipeline/pipeline.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/fixtures/custom_pipeline/pipeline.py deleted file mode 100644 index 0bb10c3d51851a064c4980420e5bdbb1149958cc..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/fixtures/custom_pipeline/pipeline.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -# limitations under the License. - - -from typing import Optional, Tuple, Union - -import torch - -from diffusers import DiffusionPipeline, ImagePipelineOutput - - -class CustomLocalPipeline(DiffusionPipeline): - r""" - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Parameters: - unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of - [`DDPMScheduler`], or [`DDIMScheduler`]. - """ - - def __init__(self, unet, scheduler): - super().__init__() - self.register_modules(unet=unet, scheduler=scheduler) - - @torch.no_grad() - def __call__( - self, - batch_size: int = 1, - generator: Optional[torch.Generator] = None, - num_inference_steps: int = 50, - output_type: Optional[str] = "pil", - return_dict: bool = True, - **kwargs, - ) -> Union[ImagePipelineOutput, Tuple]: - r""" - Args: - batch_size (`int`, *optional*, defaults to 1): - The number of images to generate. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - eta (`float`, *optional*, defaults to 0.0): - The eta parameter which controls the scale of the variance (0 is DDIM and 1 is one type of DDPM). - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. - - Returns: - [`~pipelines.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if - `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the - generated images. - """ - - # Sample gaussian noise to begin loop - image = torch.randn( - (batch_size, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), - generator=generator, - ) - image = image.to(self.device) - - # set step values - self.scheduler.set_timesteps(num_inference_steps) - - for t in self.progress_bar(self.scheduler.timesteps): - # 1. predict noise model_output - model_output = self.unet(image, t).sample - - # 2. 
predict previous mean of image x_t-1 and add variance depending on eta - # eta corresponds to η in paper and should be between [0, 1] - # do x_t -> x_t-1 - image = self.scheduler.step(model_output, t, image).prev_sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,), "This is a local test" - - return ImagePipelineOutput(images=image), "This is a local test" diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_ddim_inverse.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_ddim_inverse.py deleted file mode 100644 index 39ee26306cc619de0fc23b5399732cf2a885ee3c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_ddim_inverse.py +++ /dev/null @@ -1,135 +0,0 @@ -import torch - -from diffusers import DDIMInverseScheduler - -from .test_schedulers import SchedulerCommonTest - - -class DDIMInverseSchedulerTest(SchedulerCommonTest): - scheduler_classes = (DDIMInverseScheduler,) - forward_default_kwargs = (("eta", 0.0), ("num_inference_steps", 50)) - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 1000, - "beta_start": 0.0001, - "beta_end": 0.02, - "beta_schedule": "linear", - "clip_sample": True, - } - - config.update(**kwargs) - return config - - def full_loop(self, **config): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - - num_inference_steps, eta = 10, 0.0 - - model = self.dummy_model() - sample = self.dummy_sample_deter - - scheduler.set_timesteps(num_inference_steps) - - for t in scheduler.timesteps: - residual = model(sample, t) - sample = scheduler.step(residual, t, sample, eta).prev_sample - - return sample - - def test_timesteps(self): - for timesteps in [100, 500, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_steps_offset(self): - for steps_offset in [0, 1]: - self.check_over_configs(steps_offset=steps_offset) - - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(steps_offset=1) - scheduler = scheduler_class(**scheduler_config) - scheduler.set_timesteps(5) - assert torch.equal(scheduler.timesteps, torch.LongTensor([-199, 1, 201, 401, 601])) - - def test_betas(self): - for beta_start, beta_end in zip([0.0001, 0.001, 0.01, 0.1], [0.002, 0.02, 0.2, 2]): - self.check_over_configs(beta_start=beta_start, beta_end=beta_end) - - def test_schedules(self): - for schedule in ["linear", "squaredcos_cap_v2"]: - self.check_over_configs(beta_schedule=schedule) - - def test_prediction_type(self): - for prediction_type in ["epsilon", "v_prediction"]: - self.check_over_configs(prediction_type=prediction_type) - - def test_clip_sample(self): - for clip_sample in [True, False]: - self.check_over_configs(clip_sample=clip_sample) - - def test_timestep_spacing(self): - for timestep_spacing in ["trailing", "leading"]: - self.check_over_configs(timestep_spacing=timestep_spacing) - - def test_rescale_betas_zero_snr(self): - for rescale_betas_zero_snr in [True, False]: - self.check_over_configs(rescale_betas_zero_snr=rescale_betas_zero_snr) - - def test_thresholding(self): - self.check_over_configs(thresholding=False) - for threshold in [0.5, 1.0, 2.0]: - for prediction_type in ["epsilon", 
"v_prediction"]: - self.check_over_configs( - thresholding=True, - prediction_type=prediction_type, - sample_max_value=threshold, - ) - - def test_time_indices(self): - for t in [1, 10, 49]: - self.check_over_forward(time_step=t) - - def test_inference_steps(self): - for t, num_inference_steps in zip([1, 10, 50], [10, 50, 500]): - self.check_over_forward(time_step=t, num_inference_steps=num_inference_steps) - - def test_add_noise_device(self): - pass - - def test_full_loop_no_noise(self): - sample = self.full_loop() - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 509.1079) < 1e-2 - assert abs(result_mean.item() - 0.6629) < 1e-3 - - def test_full_loop_with_v_prediction(self): - sample = self.full_loop(prediction_type="v_prediction") - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 1029.129) < 1e-2 - assert abs(result_mean.item() - 1.3400) < 1e-3 - - def test_full_loop_with_set_alpha_to_one(self): - # We specify different beta, so that the first alpha is 0.99 - sample = self.full_loop(set_alpha_to_one=True, beta_start=0.01) - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 259.8116) < 1e-2 - assert abs(result_mean.item() - 0.3383) < 1e-3 - - def test_full_loop_with_no_set_alpha_to_one(self): - # We specify different beta, so that the first alpha is 0.99 - sample = self.full_loop(set_alpha_to_one=False, beta_start=0.01) - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 239.055) < 1e-2 - assert abs(result_mean.item() - 0.3113) < 1e-3 diff --git a/spaces/Andy0409/text_generator/app.py b/spaces/Andy0409/text_generator/app.py deleted file mode 100644 index 75e5f0d83f42be1f7fccfc3337293bb3a1f3d17a..0000000000000000000000000000000000000000 --- a/spaces/Andy0409/text_generator/app.py +++ /dev/null @@ -1,11 +0,0 @@ -import gradio as gr -from gradio.mix import Parallel - -ttl="Doing magic" -desc="Generate now" - -model1 = gr.Interface.load("huggingface/gpt2") -model2 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") -model3 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B") - -gr.Parallel(model1, model2, model3, title=ttl, description=desc).launch() diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_fpn.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_fpn.py deleted file mode 100644 index 0f038d12cb61e8b901fa47cb3bfdcf8164b3c859..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_fpn.py +++ /dev/null @@ -1,107 +0,0 @@ -model = dict( - type='FasterRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', 
use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100) - # soft-nms is also supported for rcnn testing - # e.g., nms=dict(type='soft_nms', iou_threshold=0.5, min_score=0.05) - )) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/pipelines/test_time_aug.py b/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/pipelines/test_time_aug.py deleted file mode 100644 index b6226e040499882c99f15594c66ebf3d07829168..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,119 +0,0 @@ -import warnings - -import mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug(object): - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. code-block:: - - img_scale=[(1333, 400), (1333, 800)], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (tuple | list[tuple] | None): Images scales for resizing. - scale_factor (float | list[float] | None): Scale factors for resizing. - flip (bool): Whether apply flip augmentation. Default: False. 
- flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal" and "vertical". If flip_direction is list, - multiple flip augmentations will be applied. - It has no effect when flip == False. Default: "horizontal". - """ - - def __init__(self, - transforms, - img_scale=None, - scale_factor=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - assert (img_scale is None) ^ (scale_factor is None), ( - 'Must have but only one variable can be setted') - if img_scale is not None: - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - self.scale_key = 'scale' - assert mmcv.is_list_of(self.img_scale, tuple) - else: - self.img_scale = scale_factor if isinstance( - scale_factor, list) else [scale_factor] - self.scale_key = 'scale_factor' - - self.flip = flip - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. - """ - - aug_data = [] - flip_args = [(False, None)] - if self.flip: - flip_args += [(True, direction) - for direction in self.flip_direction] - for scale in self.img_scale: - for flip, direction in flip_args: - _results = results.copy() - _results[self.scale_key] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' - repr_str += f'flip_direction={self.flip_direction})' - return repr_str diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/atss.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/atss.py deleted file mode 100644 index db7139c6b4fcd7e83007cdb785520743ddae7066..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/atss.py +++ /dev/null @@ -1,17 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class ATSS(SingleStageDetector): - """Implementation of `ATSS `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(ATSS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_40k_voc12aug.py deleted file mode 100644 index 8cec429c3e27ad2543b7e38fa206e6606fda4d5a..0000000000000000000000000000000000000000 --- 
a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_40k_voc12aug.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/pascal_voc12_aug.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21)) diff --git a/spaces/Arnaudding001/OpenAI_whisperLive/app-local.py b/spaces/Arnaudding001/OpenAI_whisperLive/app-local.py deleted file mode 100644 index d8eabbc62924dab3d0cc03a8a2373ffffe01eadc..0000000000000000000000000000000000000000 --- a/spaces/Arnaudding001/OpenAI_whisperLive/app-local.py +++ /dev/null @@ -1,3 +0,0 @@ -# Run the app with no audio file restrictions -from app import create_ui -create_ui(-1) \ No newline at end of file diff --git a/spaces/Artrajz/vits-simple-api/vits/__init__.py b/spaces/Artrajz/vits-simple-api/vits/__init__.py deleted file mode 100644 index 002cdc085c0977b9568c4e1a8fe3eb7d6dc61a51..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/vits/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .vits import VITS \ No newline at end of file diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/modeline.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/modeline.py deleted file mode 100644 index 43630835ca677066a315ac0a04d17cb6839da38d..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/modeline.py +++ /dev/null @@ -1,43 +0,0 @@ -""" - pygments.modeline - ~~~~~~~~~~~~~~~~~ - - A simple modeline parser (based on pymodeline). - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re - -__all__ = ['get_filetype_from_buffer'] - - -modeline_re = re.compile(r''' - (?: vi | vim | ex ) (?: [<=>]? \d* )? : - .* (?: ft | filetype | syn | syntax ) = ( [^:\s]+ ) -''', re.VERBOSE) - - -def get_filetype_from_line(l): - m = modeline_re.search(l) - if m: - return m.group(1) - - -def get_filetype_from_buffer(buf, max_lines=5): - """ - Scan the buffer for modelines and return filetype if one is found. 
- """ - lines = buf.splitlines() - for l in lines[-1:-max_lines-1:-1]: - ret = get_filetype_from_line(l) - if ret: - return ret - for i in range(max_lines, -1, -1): - if i < len(lines): - ret = get_filetype_from_line(lines[i]) - if ret: - return ret - - return None diff --git a/spaces/Atom007/SDXL-base-9-CPU/app.py b/spaces/Atom007/SDXL-base-9-CPU/app.py deleted file mode 100644 index 239cfb2d44fee4db276096901b166addbd86daf8..0000000000000000000000000000000000000000 --- a/spaces/Atom007/SDXL-base-9-CPU/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import torch -import numpy as np -import modin.pandas as pd -from PIL import Image -from diffusers import DiffusionPipeline -from huggingface_hub import login -import os - -login(token=os.environ.get('HF_KEY')) - -device = "cuda" if torch.cuda.is_available() else "cpu" - -pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-0.9", add_to_git_credential=True) -pipe = pipe.to(device) -pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) - -refiner = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-refiner-0.9") -refiner = refiner.to(device) -refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) - -def genie (prompt, negative_prompt, scale, steps, seed): - generator = torch.Generator(device=device).manual_seed(seed) - int_image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=steps, guidance_scale=scale, num_images_per_prompt=1, generator=generator, width=768, height=768, output_type="latent").images - image = refiner(prompt=prompt, image=int_image).images[0] - return image - -gr.Interface(fn=genie, inputs=[gr.Textbox(label='What you want the AI to generate. 77 Token Limit.'), gr.Textbox(label='What you Do Not want the AI-model to generate.'), gr.Slider(1, 15, 10), gr.Slider(25, maximum=50, value=25, step=1), gr.Slider(minimum=1, step=1, maximum=999999999999999999, randomize=True)], outputs='image', title="Stable Diffusion XL base 9 CPU", description="SDXL-base-9 CPU. WARNING: Extremely Slow. 65seconds/Iteration. Expected time to wait 25-50 minutes for an image of 25-50 iterations respectively.", article = "Code: Atom007").launch(debug=True, max_threads=80) \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/retinanet.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/retinanet.py deleted file mode 100644 index 3ea88f61759e497ca629d1d1add43b7bd44e8072..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/retinanet.py +++ /dev/null @@ -1,439 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import math -from typing import List, Tuple -import torch -from fvcore.nn import sigmoid_focal_loss_jit -from torch import Tensor, nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import CycleBatchNormList, ShapeSpec, batched_nms, cat, get_norm -from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage - -from ..anchor_generator import build_anchor_generator -from ..backbone import Backbone, build_backbone -from ..box_regression import Box2BoxTransform, _dense_box_regression_loss -from ..matcher import Matcher -from .build import META_ARCH_REGISTRY -from .dense_detector import DenseDetector, permute_to_N_HWA_K # noqa - -__all__ = ["RetinaNet"] - - -logger = logging.getLogger(__name__) - - -@META_ARCH_REGISTRY.register() -class RetinaNet(DenseDetector): - """ - Implement RetinaNet in :paper:`RetinaNet`. - """ - - @configurable - def __init__( - self, - *, - backbone: Backbone, - head: nn.Module, - head_in_features, - anchor_generator, - box2box_transform, - anchor_matcher, - num_classes, - focal_loss_alpha=0.25, - focal_loss_gamma=2.0, - smooth_l1_beta=0.0, - box_reg_loss_type="smooth_l1", - test_score_thresh=0.05, - test_topk_candidates=1000, - test_nms_thresh=0.5, - max_detections_per_image=100, - pixel_mean, - pixel_std, - vis_period=0, - input_format="BGR", - ): - """ - NOTE: this interface is experimental. - - Args: - backbone: a backbone module, must follow detectron2's backbone interface - head (nn.Module): a module that predicts logits and regression deltas - for each level from a list of per-level features - head_in_features (Tuple[str]): Names of the input feature maps to be used in head - anchor_generator (nn.Module): a module that creates anchors from a - list of features. Usually an instance of :class:`AnchorGenerator` - box2box_transform (Box2BoxTransform): defines the transform from anchors boxes to - instance boxes - anchor_matcher (Matcher): label the anchors by matching them with ground truth. - num_classes (int): number of classes. Used to label background proposals. - - # Loss parameters: - focal_loss_alpha (float): focal_loss_alpha - focal_loss_gamma (float): focal_loss_gamma - smooth_l1_beta (float): smooth_l1_beta - box_reg_loss_type (str): Options are "smooth_l1", "giou", "diou", "ciou" - - # Inference parameters: - test_score_thresh (float): Inference cls score threshold, only anchors with - score > INFERENCE_TH are considered for inference (to improve speed) - test_topk_candidates (int): Select topk candidates before NMS - test_nms_thresh (float): Overlap threshold used for non-maximum suppression - (suppress boxes with IoU >= this threshold) - max_detections_per_image (int): - Maximum number of detections to return per image during inference - (100 is based on the limit established for the COCO dataset). - - pixel_mean, pixel_std: see :class:`DenseDetector`. 
- """ - super().__init__( - backbone, head, head_in_features, pixel_mean=pixel_mean, pixel_std=pixel_std - ) - self.num_classes = num_classes - - # Anchors - self.anchor_generator = anchor_generator - self.box2box_transform = box2box_transform - self.anchor_matcher = anchor_matcher - - # Loss parameters: - self.focal_loss_alpha = focal_loss_alpha - self.focal_loss_gamma = focal_loss_gamma - self.smooth_l1_beta = smooth_l1_beta - self.box_reg_loss_type = box_reg_loss_type - # Inference parameters: - self.test_score_thresh = test_score_thresh - self.test_topk_candidates = test_topk_candidates - self.test_nms_thresh = test_nms_thresh - self.max_detections_per_image = max_detections_per_image - # Vis parameters - self.vis_period = vis_period - self.input_format = input_format - - @classmethod - def from_config(cls, cfg): - backbone = build_backbone(cfg) - backbone_shape = backbone.output_shape() - feature_shapes = [backbone_shape[f] for f in cfg.MODEL.RETINANET.IN_FEATURES] - head = RetinaNetHead(cfg, feature_shapes) - anchor_generator = build_anchor_generator(cfg, feature_shapes) - return { - "backbone": backbone, - "head": head, - "anchor_generator": anchor_generator, - "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RETINANET.BBOX_REG_WEIGHTS), - "anchor_matcher": Matcher( - cfg.MODEL.RETINANET.IOU_THRESHOLDS, - cfg.MODEL.RETINANET.IOU_LABELS, - allow_low_quality_matches=True, - ), - "pixel_mean": cfg.MODEL.PIXEL_MEAN, - "pixel_std": cfg.MODEL.PIXEL_STD, - "num_classes": cfg.MODEL.RETINANET.NUM_CLASSES, - "head_in_features": cfg.MODEL.RETINANET.IN_FEATURES, - # Loss parameters: - "focal_loss_alpha": cfg.MODEL.RETINANET.FOCAL_LOSS_ALPHA, - "focal_loss_gamma": cfg.MODEL.RETINANET.FOCAL_LOSS_GAMMA, - "smooth_l1_beta": cfg.MODEL.RETINANET.SMOOTH_L1_LOSS_BETA, - "box_reg_loss_type": cfg.MODEL.RETINANET.BBOX_REG_LOSS_TYPE, - # Inference parameters: - "test_score_thresh": cfg.MODEL.RETINANET.SCORE_THRESH_TEST, - "test_topk_candidates": cfg.MODEL.RETINANET.TOPK_CANDIDATES_TEST, - "test_nms_thresh": cfg.MODEL.RETINANET.NMS_THRESH_TEST, - "max_detections_per_image": cfg.TEST.DETECTIONS_PER_IMAGE, - # Vis parameters - "vis_period": cfg.VIS_PERIOD, - "input_format": cfg.INPUT.FORMAT, - } - - def forward_training(self, images, features, predictions, gt_instances): - # Transpose the Hi*Wi*A dimension to the middle: - pred_logits, pred_anchor_deltas = self._transpose_dense_predictions( - predictions, [self.num_classes, 4] - ) - anchors = self.anchor_generator(features) - gt_labels, gt_boxes = self.label_anchors(anchors, gt_instances) - return self.losses(anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes) - - def losses(self, anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes): - """ - Args: - anchors (list[Boxes]): a list of #feature level Boxes - gt_labels, gt_boxes: see output of :meth:`RetinaNet.label_anchors`. - Their shapes are (N, R) and (N, R, 4), respectively, where R is - the total number of anchors across levels, i.e. sum(Hi x Wi x Ai) - pred_logits, pred_anchor_deltas: both are list[Tensor]. Each element in the - list corresponds to one level and has shape (N, Hi * Wi * Ai, K or 4). - Where K is the number of classes used in `pred_logits`. - - Returns: - dict[str, Tensor]: - mapping from a named loss to a scalar tensor storing the loss. - Used during training only. 
The dict keys are: "loss_cls" and "loss_box_reg" - """ - num_images = len(gt_labels) - gt_labels = torch.stack(gt_labels) # (N, R) - - valid_mask = gt_labels >= 0 - pos_mask = (gt_labels >= 0) & (gt_labels != self.num_classes) - num_pos_anchors = pos_mask.sum().item() - get_event_storage().put_scalar("num_pos_anchors", num_pos_anchors / num_images) - normalizer = self._ema_update("loss_normalizer", max(num_pos_anchors, 1), 100) - - # classification and regression loss - gt_labels_target = F.one_hot(gt_labels[valid_mask], num_classes=self.num_classes + 1)[ - :, :-1 - ] # no loss for the last (background) class - loss_cls = sigmoid_focal_loss_jit( - cat(pred_logits, dim=1)[valid_mask], - gt_labels_target.to(pred_logits[0].dtype), - alpha=self.focal_loss_alpha, - gamma=self.focal_loss_gamma, - reduction="sum", - ) - - loss_box_reg = _dense_box_regression_loss( - anchors, - self.box2box_transform, - pred_anchor_deltas, - gt_boxes, - pos_mask, - box_reg_loss_type=self.box_reg_loss_type, - smooth_l1_beta=self.smooth_l1_beta, - ) - - return { - "loss_cls": loss_cls / normalizer, - "loss_box_reg": loss_box_reg / normalizer, - } - - @torch.no_grad() - def label_anchors(self, anchors, gt_instances): - """ - Args: - anchors (list[Boxes]): A list of #feature level Boxes. - The Boxes contains anchors of this image on the specific feature level. - gt_instances (list[Instances]): a list of N `Instances`s. The i-th - `Instances` contains the ground-truth per-instance annotations - for the i-th input image. - - Returns: - list[Tensor]: List of #img tensors. i-th element is a vector of labels whose length is - the total number of anchors across all feature maps (sum(Hi * Wi * A)). - Label values are in {-1, 0, ..., K}, with -1 means ignore, and K means background. - - list[Tensor]: i-th element is a Rx4 tensor, where R is the total number of anchors - across feature maps. The values are the matched gt boxes for each anchor. - Values are undefined for those anchors not labeled as foreground. - """ - anchors = Boxes.cat(anchors) # Rx4 - - gt_labels = [] - matched_gt_boxes = [] - for gt_per_image in gt_instances: - match_quality_matrix = pairwise_iou(gt_per_image.gt_boxes, anchors) - matched_idxs, anchor_labels = self.anchor_matcher(match_quality_matrix) - del match_quality_matrix - - if len(gt_per_image) > 0: - matched_gt_boxes_i = gt_per_image.gt_boxes.tensor[matched_idxs] - - gt_labels_i = gt_per_image.gt_classes[matched_idxs] - # Anchors with label 0 are treated as background. - gt_labels_i[anchor_labels == 0] = self.num_classes - # Anchors with label -1 are ignored. 
- gt_labels_i[anchor_labels == -1] = -1 - else: - matched_gt_boxes_i = torch.zeros_like(anchors.tensor) - gt_labels_i = torch.zeros_like(matched_idxs) + self.num_classes - - gt_labels.append(gt_labels_i) - matched_gt_boxes.append(matched_gt_boxes_i) - - return gt_labels, matched_gt_boxes - - def forward_inference( - self, images: ImageList, features: List[Tensor], predictions: List[List[Tensor]] - ): - pred_logits, pred_anchor_deltas = self._transpose_dense_predictions( - predictions, [self.num_classes, 4] - ) - anchors = self.anchor_generator(features) - - results: List[Instances] = [] - for img_idx, image_size in enumerate(images.image_sizes): - scores_per_image = [x[img_idx].sigmoid_() for x in pred_logits] - deltas_per_image = [x[img_idx] for x in pred_anchor_deltas] - results_per_image = self.inference_single_image( - anchors, scores_per_image, deltas_per_image, image_size - ) - results.append(results_per_image) - return results - - def inference_single_image( - self, - anchors: List[Boxes], - box_cls: List[Tensor], - box_delta: List[Tensor], - image_size: Tuple[int, int], - ): - """ - Single-image inference. Return bounding-box detection results by thresholding - on scores and applying non-maximum suppression (NMS). - - Arguments: - anchors (list[Boxes]): list of #feature levels. Each entry contains - a Boxes object, which contains all the anchors in that feature level. - box_cls (list[Tensor]): list of #feature levels. Each entry contains - tensor of size (H x W x A, K) - box_delta (list[Tensor]): Same shape as 'box_cls' except that K becomes 4. - image_size (tuple(H, W)): a tuple of the image height and width. - - Returns: - Same as `inference`, but for only one image. - """ - pred = self._decode_multi_level_predictions( - anchors, - box_cls, - box_delta, - self.test_score_thresh, - self.test_topk_candidates, - image_size, - ) - keep = batched_nms( # per-class NMS - pred.pred_boxes.tensor, pred.scores, pred.pred_classes, self.test_nms_thresh - ) - return pred[keep[: self.max_detections_per_image]] - - -class RetinaNetHead(nn.Module): - """ - The head used in RetinaNet for object classification and box regression. - It has two subnets for the two tasks, with a common structure but separate parameters. - """ - - @configurable - def __init__( - self, - *, - input_shape: List[ShapeSpec], - num_classes, - num_anchors, - conv_dims: List[int], - norm="", - prior_prob=0.01, - ): - """ - NOTE: this interface is experimental. - - Args: - input_shape (List[ShapeSpec]): input shape - num_classes (int): number of classes. Used to label background proposals. - num_anchors (int): number of generated anchors - conv_dims (List[int]): dimensions for each convolution layer - norm (str or callable): - Normalization for conv layers except for the two output layers. - See :func:`detectron2.layers.get_norm` for supported types. - prior_prob (float): Prior weight for computing bias - """ - super().__init__() - - self._num_features = len(input_shape) - if norm == "BN" or norm == "SyncBN": - logger.info( - f"Using domain-specific {norm} in RetinaNetHead with len={self._num_features}." - ) - bn_class = nn.BatchNorm2d if norm == "BN" else nn.SyncBatchNorm - - def norm(c): - return CycleBatchNormList( - length=self._num_features, bn_class=bn_class, num_features=c - ) - - else: - norm_name = str(type(get_norm(norm, 1))) - if "BN" in norm_name: - logger.warning( - f"Shared BatchNorm (type={norm_name}) may not work well in RetinaNetHead." 
- ) - - cls_subnet = [] - bbox_subnet = [] - for in_channels, out_channels in zip( - [input_shape[0].channels] + list(conv_dims), conv_dims - ): - cls_subnet.append( - nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - ) - if norm: - cls_subnet.append(get_norm(norm, out_channels)) - cls_subnet.append(nn.ReLU()) - bbox_subnet.append( - nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - ) - if norm: - bbox_subnet.append(get_norm(norm, out_channels)) - bbox_subnet.append(nn.ReLU()) - - self.cls_subnet = nn.Sequential(*cls_subnet) - self.bbox_subnet = nn.Sequential(*bbox_subnet) - self.cls_score = nn.Conv2d( - conv_dims[-1], num_anchors * num_classes, kernel_size=3, stride=1, padding=1 - ) - self.bbox_pred = nn.Conv2d( - conv_dims[-1], num_anchors * 4, kernel_size=3, stride=1, padding=1 - ) - - # Initialization - for modules in [self.cls_subnet, self.bbox_subnet, self.cls_score, self.bbox_pred]: - for layer in modules.modules(): - if isinstance(layer, nn.Conv2d): - torch.nn.init.normal_(layer.weight, mean=0, std=0.01) - torch.nn.init.constant_(layer.bias, 0) - - # Use prior in model initialization to improve stability - bias_value = -(math.log((1 - prior_prob) / prior_prob)) - torch.nn.init.constant_(self.cls_score.bias, bias_value) - - @classmethod - def from_config(cls, cfg, input_shape: List[ShapeSpec]): - num_anchors = build_anchor_generator(cfg, input_shape).num_cell_anchors - assert ( - len(set(num_anchors)) == 1 - ), "Using different number of anchors between levels is not currently supported!" - num_anchors = num_anchors[0] - - return { - "input_shape": input_shape, - "num_classes": cfg.MODEL.RETINANET.NUM_CLASSES, - "conv_dims": [input_shape[0].channels] * cfg.MODEL.RETINANET.NUM_CONVS, - "prior_prob": cfg.MODEL.RETINANET.PRIOR_PROB, - "norm": cfg.MODEL.RETINANET.NORM, - "num_anchors": num_anchors, - } - - def forward(self, features: List[Tensor]): - """ - Arguments: - features (list[Tensor]): FPN feature map tensors in high to low resolution. - Each tensor in the list correspond to different feature levels. - - Returns: - logits (list[Tensor]): #lvl tensors, each has shape (N, AxK, Hi, Wi). - The tensor predicts the classification probability - at each spatial position for each of the A anchors and K object - classes. - bbox_reg (list[Tensor]): #lvl tensors, each has shape (N, Ax4, Hi, Wi). - The tensor predicts 4-vector (dx,dy,dw,dh) box - regression values for every anchor. These values are the - relative offset between the anchor and the ground truth box. 
- """ - assert len(features) == self._num_features - logits = [] - bbox_reg = [] - for feature in features: - logits.append(self.cls_score(self.cls_subnet(feature))) - bbox_reg.append(self.bbox_pred(self.bbox_subnet(feature))) - return logits, bbox_reg diff --git a/spaces/BWQ/Chatgpt/README.md b/spaces/BWQ/Chatgpt/README.md deleted file mode 100644 index 57a5c8ff85300c38dc94b710d473a8dfbffdc5e1..0000000000000000000000000000000000000000 --- a/spaces/BWQ/Chatgpt/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chatgpt -emoji: 🌍 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/BasalGanglia/stabilityai-stable-diffusion-2/app.py b/spaces/BasalGanglia/stabilityai-stable-diffusion-2/app.py deleted file mode 100644 index d2782cea00b1bfcd22df7c204d9e52a6baf46ac2..0000000000000000000000000000000000000000 --- a/spaces/BasalGanglia/stabilityai-stable-diffusion-2/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-2").launch() \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Artesana Y Construccin.md b/spaces/Benson/text-generation/Examples/Artesana Y Construccin.md deleted file mode 100644 index 36c965b2d5147f883ddce858ca7e9bc2cc8f336d..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Artesana Y Construccin.md +++ /dev/null @@ -1,163 +0,0 @@ - -

Crafting and Building: A Beginner's Guide

-

Do you love building games? Do you want to create your own house, castle, or mine? Do you want to explore a vast world full of surprises and adventures? If you answered yes to any of these questions, you should try crafting and building games!

-

Crafting and building games are a genre of video games that let you create anything you can imagine using various blocks and items. You can also explore different worlds and interact with other players online or offline. Crafting and building games are fun and popular because they let you express your creativity and imagination in a virtual environment.

-

crafting and building


Download File ---> https://bltlly.com/2v6JZS



-

But what are the benefits of playing crafting and building games? Well, here are some of them:

-
    -
  • They improve your spatial and problem-solving skills.
  • -
  • They improve your attention span and concentration.
  • -
  • They stimulate your brain and memory.
  • -
  • They foster your teamwork and communication skills.
  • -
  • They relax your mind and lift your mood.
  • -
-

So if you are interested in crafting and building games but don't know where to start or how to play, don't worry! This article will guide you through the basics of crafting and building games and help you build your dream house in no time!

-

How to Get Started with Crafting and Building

-

Choosing a Game

-

The first thing you need to do is choose a game that suits your preferences and interests. There are many crafting and building games available on different platforms, such as PC, mobile, console, etc. Some of the most popular ones are:

-
    -
  • Crafting and Building: This is a free game for Android devices that lets you build your own world with unlimited resources. You can also explore different maps, such as city, forest, desert, etc. You can also customize your character and play with your friends online.
  • - -
  • Minecraft: This is the most famous and popular crafting and building game in the world. It is available on PC, mobile, console, and other platforms. You can create your own world with different modes, such as survival, creative, adventure, etc. You can also explore a huge world with different biomes, animals, villages, dungeons, etc. You can also play with millions of players online or offline.
  • -
-

There are many other crafting and building games you can try, such as Roblox, Terraria, Stardew Valley, etc. Each game has its own features and advantages. You can choose the one you like best and that fits your budget and device.

-

For beginners, we recommend Crafting and Building or Craftsman: Building Craft because they are free and easy to play on mobile devices. They also have plenty of resources and maps to choose from. However, if you want a more advanced and immersive experience, you can try Minecraft, which has more options and challenges.

-

Learning the Basics

-

Once you have chosen a game, you need to learn the basics of how to play it. The basics include how to control your character, how to access the menus, how to use the tools and blocks, etc. Each game has its own tutorial or guide you can follow to learn the basics. You can also watch some videos or read some articles online to pick up tips and tricks.

-

Here are some general tips on how to learn the basics of crafting and building games:

-

-
    -
  • Use the touch screen or the keyboard and mouse to move your character. You can also use the buttons or keys to jump, crouch, fly, etc.
  • -
  • Use the menu or the inventory to access your tools and blocks. You can also use the buttons or keys to switch between them.
  • -
  • Usa las herramientas y los bloques para crear o destruir cualquier cosa en el mundo. También puedes usar los botones o teclas para colocarlos o romperlos.
  • - -
  • Usa la ayuda o el icono de interrogación para obtener más información sobre las características y funciones del juego.
  • -
-

Aquí hay algunas capturas de pantalla o videos que muestran cómo jugar Crafting y BuildingCraftsman: Building Craft :

Crafting y Building screenshotimg src=( 5 )" =alt"Craftsman: Building Craft screenshot" width="300" height="200">

-

Como puedes ver, ambos juegos tienen gráficos y jugabilidad similares, pero tienen algunas diferencias en los mapas, elementos y modos. Puedes probar ambos juegos y ver cuál te gusta más.

Finding a location

After you have learned the basics of the game, you need to find a location for your build. Location matters because it affects the style, size, and shape of your building. You should also consider the environment, the resources, and the hazards around you.

Each game has different worlds or maps you can explore. Some are randomly generated, while others are pre-made. Some are based on real-life places, while others are fictional or fantasy. Some are flat and simple, while others are complex and varied.

Here are some examples of different biomes or terrains you can find in crafting and building games:

  • Forest: This biome has plenty of trees, grass, flowers, and animals. It is a good place to find wood and food, and a beautiful, peaceful place to build your house.
  • Desert: This biome has plenty of sand, cacti, and dead bushes. It is a good place to find sandstone and glass, and a hot, challenging place to build your house.
  • Snow: This biome has plenty of snow, ice, and snowmen. It is a good place to find snowballs and ice blocks, and a cold, festive place to build your house.
  • Ocean: This biome has plenty of water, coral, and fish. It is a good place to find prismarine and sea lanterns, and a wet, adventurous place to build your house.

You can choose any biome or terrain you like for your build. You can also mix and match different biomes or terrains to create your own unique world. The only limit is your imagination!

How to craft and build your dream house

Planning your design

Now that you have found a location for your build, you should plan your design before you start building. Planning saves time, resources, and effort, and it helps you create a more beautiful and functional house.

Here are some steps for planning your design:

  1. Sketch your idea: Use paper and pencil or a digital app to sketch how you want your house to look. Draw its shape, size, style, and color, and add details such as windows, doors, and the roof.
  2. Measure your space: Use a ruler or measuring tape to measure your space in the game, or use blocks as the unit of measurement; for example, one block equals one meter. You can then calculate how many blocks you need for each part of your house (see the sketch after this list).
  3. Choose a theme: Pick a theme for your house that matches your personality and preferences. A theme is the general idea or concept that guides your design, such as a modern theme, a medieval theme, or a fantasy theme. You can also choose a color scheme that complements it.
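To make the block math in step 2 concrete, here is a minimal sketch of the kind of calculation it describes. It assumes one block equals one meter and a simple rectangular footprint; the dimensions below are made-up examples, not values from any particular game.

```python
# Rough block-count estimate for a rectangular house, assuming 1 block = 1 meter.
# The dimensions are illustrative placeholders, not values from the games above.

def wall_blocks(length: int, width: int, height: int) -> int:
    """Blocks needed for four solid walls (corners counted once)."""
    perimeter = 2 * (length + width) - 4  # subtract the 4 double-counted corners
    return perimeter * height

def floor_blocks(length: int, width: int) -> int:
    """Blocks needed for a solid floor (or a flat roof of the same size)."""
    return length * width

if __name__ == "__main__":
    length, width, height = 10, 8, 4  # example footprint in blocks/meters
    walls = wall_blocks(length, width, height)
    floor_and_roof = 2 * floor_blocks(length, width)
    print(f"Walls: {walls} blocks")
    print(f"Floor + flat roof: {floor_and_roof} blocks")
    print(f"Total (before windows and doors): {walls + floor_and_roof} blocks")
```

Subtract any openings such as doors and windows from the wall total before you start gathering resources.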

Here are some examples of themes and color schemes for crafting and building games:

| Theme | Color scheme |
| --- | --- |
| Modern | White, black, gray, blue, etc. |
| Medieval | Brown, beige, green, red, etc. |
| Fantasy | Purple, pink, yellow, orange, etc. |

Gathering resources

After you have planned your design, you need to gather resources to build with. Resources are the materials you use to craft items and blocks in the game. You can find them in different places in the game world, such as trees, rocks, ores, and plants.

Here are some tips on gathering resources:

  • Mine them: Use tools such as pickaxes, axes, and shovels to extract resources from the ground or the environment. You can also use explosives such as TNT or dynamite to blast them out; be careful not to damage your surroundings or yourself.
  • Craft them: Use the resources you have mined or collected to craft the other resources you need. Crafting tables or similar devices let you create items and blocks, and recipes or blueprints can guide you.
  • Store them: Use containers such as chests, barrels, and boxes to store your resources, and labels or signs to keep them organized. Place your containers near your build site or somewhere safe.

Building your structure

Once you have gathered enough resources, you can start building your structure. The structure is the main part of your house and defines its shape and size. Build it step by step, starting with the foundation, then the walls, then the roof.

Here are some tips on building your structure:

  • Lay the foundation: Use blocks such as stone, brick, or concrete for the foundation of your house, and blocks such as grass, dirt, or sand to level the ground. A grid or blueprint can guide you.
  • Build the roof: Use blocks such as wood, stone, brick, or glass for the roof, and blocks such as stairs, slabs, and fences to create different slopes and angles. You can also add skylights and chimneys.

Here are some screenshots and videos that show how to build your structure in Crafting and Building or Craftsman: Building Craft:

[Crafting and Building and Craftsman: Building Craft screenshots and videos]

https://bltlly.com/2v6JsS



Bus Simulator Ultimate is a game that lets you create your own bus company and operate routes across various countries, such as Germany, Turkey, Italy, France, Spain, and more. You can customize your buses, hire drivers, manage your income and expenses, and interact with passengers and other drivers. You can also drive your bus in different weather conditions, traffic situations, and road events.

In this article, we will tell you everything you need to know about Bus Simulator Ultimate, including its features, how to download and install it on your Android device using happymod, why you might choose the happymod version, and some tips and tricks for playing the game. Let's get started!

What is Bus Simulator Ultimate?

Bus Simulator Ultimate is a simulation game developed by Zuuks Games, a Turkish studio that also created other popular games such as Truck Simulator 2018: Europe and Euro Truck Driver 2018. The game was released in August 2019 and has since been downloaded more than 50 million times from the Google Play Store. It has received positive reviews from players and critics, who praised its graphics, gameplay, and realism.

As its name suggests, Bus Simulator Ultimate simulates the experience of driving a bus and managing a bus company. You can choose from more than 30 different buses, each with its own specifications and features, and design your own by changing their colors, skins, interiors, and accessories.

You can drive your bus in realistic 3D environments based on real locations, passing famous landmarks, buildings, bridges, and landscapes along the way. You will also encounter different weather conditions, such as rain, snow, fog, and night. You have to follow traffic rules and signs, avoid accidents and violations, and respect speed limits, while interacting with your passengers and other drivers through a radio system.

Features of Bus Simulator Ultimate

Bus Simulator Ultimate offers many features that make it fun and engaging. Some of them are:

  • Multiplayer mode: Play online with your friends or other players from around the world. You can join or create a convoy and drive together on the same route, and chat with other players using voice or text chat.
  • Radio system: Listen to various radio stations that play music, news, sports, and more, or create your own station by adding your favorite songs from your device.
  • Realistic sounds: Hear the realistic sounds of your bus engine, horn, brakes, wipers, and doors, as well as the sounds of traffic, weather, passengers, and other drivers.
  • Feedback system: Get feedback from your passengers and drivers based on your performance. They rate you on driving skill, safety, comfort, punctuality, and service, and you can read their comments and suggestions on how to improve your business.
  • Achievements and leaderboards: Unlock achievements by completing tasks or goals in the game, and track your rank and progress on the global and regional leaderboards.

How to download and install Bus Simulator Ultimate son sürüm apk happymod

Downloading and installing Bus Simulator Ultimate son sürüm apk happymod is quick and simple. Just follow these steps (a command-line sketch follows the list):

  1. Go to the official happymod website or the Uptodown app store and search for Bus Simulator Ultimate.
  2. Select the latest version of the game and click the download button. A pop-up window will ask you to confirm the download; click OK.
  3. Wait for the download to finish, then open the apk file. You may need to enable installation from unknown sources in your device settings.
  4. Follow the on-screen instructions and install the game on your device.
  5. Launch the game and enjoy Bus Simulator Ultimate son sürüm apk happymod!
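If you prefer to sideload the file from a computer instead of tapping through a file manager, the same install can be done over USB with adb. This is an optional sketch, not part of the happymod workflow described above; the APK file name is a placeholder, and it assumes the Android platform-tools are installed and USB debugging is enabled on the device.

```python
# Optional: sideload a downloaded APK over USB with adb, driven from Python.
# Assumes adb (Android platform-tools) is on PATH and USB debugging is enabled.
# "bus_simulator_ultimate.apk" is a placeholder file name.
import subprocess

APK_PATH = "bus_simulator_ultimate.apk"  # hypothetical path to the downloaded file

def sideload(apk_path: str) -> None:
    # List connected devices first so failures are easier to diagnose.
    subprocess.run(["adb", "devices"], check=True)
    # "-r" reinstalls the app while keeping its data if it is already installed.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload(APK_PATH)
```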

Why choose Bus Simulator Ultimate son sürüm apk happymod?

As mentioned above, Bus Simulator Ultimate son sürüm apk happymod is a modified version of the game that gives you more advantages and features than the original. Here are some of the reasons to choose this version:

  • Unlimited money: Get unlimited in-game money, which you can use to buy new buses, upgrade existing ones, hire more drivers, expand your routes, and more. You can also spend it on fuel, maintenance, taxes, and salaries.
  • Unlocked buses: Unlock all the buses in the game, which are normally available only after completing certain levels or paying real money. Choose from more than 30 different buses, each with its own specifications and features, and customize them however you like.
  • Free purchases: Buy anything in the game without spending money, including new skins, interiors, and accessories for your buses, as well as gifts for your passengers and drivers to increase their satisfaction and loyalty.

These are just some of the benefits of using Bus Simulator Ultimate son sürüm apk happymod. There are many more features and options to discover by playing the game yourself.

Tips and tricks for playing Bus Simulator Ultimate

Bus Simulator Ultimate requires skill, strategy, and patience. It is not just about driving a bus, but also about managing a bus company and keeping your customers happy. Here are some tips and tricks to help you play better:

  • Plan your routes carefully: Before starting a route, check the map for distance, traffic, weather, and road conditions, and consider your passengers' demand and preferences. Choose a route that is profitable, safe, and comfortable for you and your customers.
  • Drive safely and smoothly: Follow traffic rules and signs, avoid accidents and violations, and respect speed limits. Drive smoothly and avoid sudden braking or acceleration. This improves your driving-skill, safety, comfort, punctuality, and service ratings.
  • Interact with your passengers and drivers: Communicate through the radio system. Greet them, tell them about the route and destination, thank them for choosing your company, and apologize for any inconvenience or delay. Listen to their comments and suggestions on how to improve your service.
  • Upgrade your buses and your company: Invest your money in improvement: buy new buses, upgrade existing ones, hire more drivers, and expand your routes. This increases your income, market share, customer satisfaction, and reputation.

Conclusion

If you want to enjoy Bus Simulator Ultimate with more features and benefits, you can download and install the son sürüm apk happymod version. This modified version of the game gives you access to unlimited money, unlocked buses, free purchases, and more. You can also avoid ads and enjoy faster downloads with happymod.

We hope this article has helped you learn more about Bus Simulator Ultimate son sürüm apk happymod. If you have any questions or comments, please feel free to leave them below. Thanks for reading!

Frequently asked questions

Here are some of the most frequently asked questions about Bus Simulator Ultimate son sürüm apk happymod:

  1. Is Bus Simulator Ultimate son sürüm apk happymod safe to use?
     Yes, it is safe to use as long as you download it from a trusted source such as happymod or Uptodown. These sources scan apk files for viruses and malware before publishing them. However, you should always be careful when downloading any file from the internet and check its permissions before installing it on your device.
  2. What are the requirements for playing Bus Simulator Ultimate son sürüm apk happymod?
     You need an Android device running Android 5.0 or higher, at least 1 GB of RAM, and 500 MB of free storage space. You may also need an internet connection to play online or access some of the game's features.
  3. How do I update Bus Simulator Ultimate son sürüm apk happymod?
     Visit the happymod or Uptodown website and download the latest version of the game, then install it over the existing version on your device. You may need to enable installation from unknown sources in your device settings.
  4. Can I play Bus Simulator Ultimate son sürüm apk happymod on PC?
     Yes, you can play it on PC with an Android emulator, which is software that lets you run Android apps and games on your computer. Popular Android emulators include BlueStacks, NoxPlayer, and LDPlayer. Install one of them on your PC, then download and install Bus Simulator Ultimate son sürüm apk happymod from happymod or Uptodown.
  5. Can I transfer my progress from the original version of Bus Simulator Ultimate to the happymod version?
     Unfortunately, no. The happymod version is a modified build with different files and data than the original, so you will have to start from scratch if you switch to it.

\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Ciudad Smash Descarga Gratuita Apk.md b/spaces/Benson/text-generation/Examples/Ciudad Smash Descarga Gratuita Apk.md deleted file mode 100644 index 5192f333d12f1d1010e859422e40bc8f1757da27..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Ciudad Smash Descarga Gratuita Apk.md +++ /dev/null @@ -1,62 +0,0 @@ -
[Article outline]
- What is an APK file and how to download it: two methods for downloading APK files from the Google Play Store (a web tool or an APK extractor app), and how to install APK files on Android devices.
- How to download Very Little Nightmares 2 APK from the Google Play Store: using APK Mirror or Evozi's APK Downloader to generate and save the APK file, or using App APK Extractor & Analyzer to extract the APK from the installed app.
- How to download Very Little Nightmares 2 APK from alternative app stores: using Aptoide or APKPure to find and download the APK file, and how to check its safety and compatibility before installing it.
- Conclusion and FAQs: a summary of the main points and tips for downloading Very Little Nightmares 2 APK, plus five FAQs with answers.

How to download Very Little Nightmares 2 APK

If you are a fan of horror adventure games, you may have heard of Very Little Nightmares 2, a sequel to the popular game Little Nightmares. In this game, you play as Mono, a young boy who has to escape a twisted world controlled by a mysterious signal tower. Along the way, you will face many terrifying creatures and puzzles that will test your courage and wits.

Very Little Nightmares 2 is available on Steam, PlayStation, Xbox, Nintendo Switch, and the Google Play Store. However, if you want to download the game as an APK file, you will need some alternative methods. In this article, we will show you how to download Very Little Nightmares 2 APK from different sources and how to install it on your Android device.

city smash free download apk


Download File ---> https://bltlly.com/2v6Lfe

What is an APK file and how to download it

There are several ways to download APK files. One is to go to APK Mirror on your Android device via Chrome or another browser, search for the app you want, and tap 'Download APK'. Another is to use an APK downloader website, such as a

To install an APK file on your Android device, download the file using the default browser, Chrome, and accept any pop-ups. It is important to download APK files only from trusted sources and to do a quick Google search to check the reputation of the app or company.
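If you want to go one step beyond a reputation check, you can compare the downloaded file's checksum against one published by the source, when the source provides it. This is a minimal, optional sketch; the file name and the expected hash below are placeholders, not real values.

```python
# Optional integrity check for a downloaded APK, assuming the source publishes a SHA-256 hash.
# Both APK_PATH and EXPECTED_SHA256 are placeholders.
import hashlib

APK_PATH = "downloaded_app.apk"                      # hypothetical downloaded file
EXPECTED_SHA256 = "<hash published by the source>"   # placeholder value

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print("SHA-256:", actual)
    print("Matches published hash:", actual == EXPECTED_SHA256.lower())
```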

How to download Very Little Nightmares 2 APK from the Google Play Store

If you want the official version of Very Little Nightmares 2 from the Google Play Store, you can use one of these two methods:

Method 1: Use a web tool

  1. Open the Google Play Store on your Android device or computer and search for Very Little Nightmares 2.
  2. Copy the app's URL from the address bar.
  3. Go to Evozi's APK Downloader in a web browser and paste the URL into the box at the top.
  4. Select a device type from the "Device" drop-down menu and click the "Generate Download Link" button.
  5. Click the green "Click here to download Very Little Nightmares 2 APK" button and save the file to your device or computer.

Method 2: Use an APK extractor app

  1. Download and install App APK Extractor & Analyzer from the Google Play Store on your Android device.
  2. Open the app and grant the permissions it needs to access your files and apps.
  3. Find Very Little Nightmares 2 in the list of installed apps and tap it.
  4. Transfer the extracted file to another device or computer if you wish.

How to download Very Little Nightmares 2 APK from alternative app stores

If you cannot find Very Little Nightmares 2 on the Google Play Store, or you want to try a different version of the game, you can use one of these alternative app stores:

Aptoide

Aptoide is a popular app store that offers millions of apps and games for free. You can download Aptoide from its official website or from other sources. To download Very Little Nightmares 2 APK from Aptoide, follow these steps:

  1. Open Aptoide on your Android device or computer and search for Very Little Nightmares 2.
  2. Tap the app icon and read the description, ratings, and reviews.
  3. Tap the "Install" button and wait for the download to finish.
  4. Open the file manager on your device and locate the downloaded APK file.
  5. Tap it and follow the instructions to install it.

APKPure

APKPure is another app store that offers free, safe apps and games. You can download APKPure from its official website or from other sources. To download Very Little Nightmares 2 APK from APKPure, follow these steps:

  1. Open APKPure on your Android device or computer and search for Very Little Nightmares 2.
  2. Tap the app icon and read the description, ratings, and reviews.
  3. Tap the "Download APK" button and save the file to your device or computer.
  4. Open the file manager on your device and locate the downloaded APK file.
  5. Tap it and follow the instructions to install it.

Conclusion and FAQs

In this article, we have shown you how to download Very Little Nightmares 2 APK from different sources and how to install it on your Android device. We hope you found this guide useful. Here are some frequently asked questions that might help you further:

Q: Is it safe to download Very Little Nightmares 2 APK?

A: Generally, yes, as long as you download it from trusted sources and check its safety before installing it. However, there is always a risk of malware or viruses when downloading any file from unknown sources, so be careful and use a reliable antivirus app.

Q: Is Very Little Nightmares 2 APK compatible with my device?

A: Compatibility depends on your device model, Android version, and available storage space. You can check it by reading the APK's description, requirements, and reviews on the app store or website where you downloaded it. You can also use an app such as APK Analyzer to inspect the file's details before installing it.
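As a quick, optional way to see your own device's Android version from a computer, you can query it over adb. This is a hedged sketch rather than part of the guide above; it assumes the Android platform-tools are installed and USB debugging is enabled, and it only reads standard Android system properties.

```python
# Optional: read the connected device's Android version and API level over adb.
# Assumes adb (Android platform-tools) is on PATH and USB debugging is enabled.
import subprocess

def getprop(name: str) -> str:
    """Return the value of an Android system property from the connected device."""
    out = subprocess.run(["adb", "shell", "getprop", name],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    print("Android version:", getprop("ro.build.version.release"))
    print("API level:", getprop("ro.build.version.sdk"))
```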

Q: How do I update Very Little Nightmares 2 APK?

A: Download the latest version of the APK file from the same source where you got it before. Then either uninstall the old version and install the new one, or install the new one over the old one. Some updates may not work properly when installed over an older version, so uninstalling first is safer.

Q: How do I uninstall Very Little Nightmares 2 APK?

A: Go to your device settings > apps > Very Little Nightmares 2 > uninstall, or use an app like
- position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector - x = self.drop(token_embeddings + position_embeddings) - x = self.blocks(x) - x = self.ln_f(x) - logits = self.head(x) - - # if we are given some desired targets also calculate the loss - loss = None - if targets is not None: - loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1)) - - return logits, loss - - def forward_with_past(self, idx, embeddings=None, targets=None, past=None, past_length=None): - # inference only - assert not self.training - token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector - if embeddings is not None: # prepend explicit embeddings - token_embeddings = torch.cat((embeddings, token_embeddings), dim=1) - - if past is not None: - assert past_length is not None - past = torch.cat(past, dim=-2) # n_layer, 2, b, nh, len_past, dim_head - past_shape = list(past.shape) - expected_shape = [self.config.n_layer, 2, idx.shape[0], self.config.n_head, past_length, self.config.n_embd//self.config.n_head] - assert past_shape == expected_shape, f"{past_shape} =/= {expected_shape}" - position_embeddings = self.pos_emb[:, past_length, :] # each position maps to a (learnable) vector - else: - position_embeddings = self.pos_emb[:, :token_embeddings.shape[1], :] - - x = self.drop(token_embeddings + position_embeddings) - presents = [] # accumulate over layers - for i, block in enumerate(self.blocks): - x, present = block(x, layer_past=past[i, ...] if past is not None else None, return_present=True) - presents.append(present) - - x = self.ln_f(x) - logits = self.head(x) - # if we are given some desired targets also calculate the loss - loss = None - if targets is not None: - loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1)) - - return logits, loss, torch.stack(presents) # _, _, n_layer, 2, b, nh, 1, dim_head - - -class DummyGPT(nn.Module): - # for debugging - def __init__(self, add_value=1): - super().__init__() - self.add_value = add_value - - def forward(self, idx): - return idx + self.add_value, None - - -class CodeGPT(nn.Module): - """Takes in semi-embeddings""" - def __init__(self, vocab_size, block_size, in_channels, n_layer=12, n_head=8, n_embd=256, - embd_pdrop=0., resid_pdrop=0., attn_pdrop=0., n_unmasked=0): - super().__init__() - config = GPTConfig(vocab_size=vocab_size, block_size=block_size, - embd_pdrop=embd_pdrop, resid_pdrop=resid_pdrop, attn_pdrop=attn_pdrop, - n_layer=n_layer, n_head=n_head, n_embd=n_embd, - n_unmasked=n_unmasked) - # input embedding stem - self.tok_emb = nn.Linear(in_channels, config.n_embd) - self.pos_emb = nn.Parameter(torch.zeros(1, config.block_size, config.n_embd)) - self.drop = nn.Dropout(config.embd_pdrop) - # transformer - self.blocks = nn.Sequential(*[Block(config) for _ in range(config.n_layer)]) - # decoder head - self.ln_f = nn.LayerNorm(config.n_embd) - self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False) - self.block_size = config.block_size - self.apply(self._init_weights) - self.config = config - logger.info("number of parameters: %e", sum(p.numel() for p in self.parameters())) - - def get_block_size(self): - return self.block_size - - def _init_weights(self, module): - if isinstance(module, (nn.Linear, nn.Embedding)): - module.weight.data.normal_(mean=0.0, std=0.02) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - 
def forward(self, idx, embeddings=None, targets=None): - # forward the GPT model - token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector - - if embeddings is not None: # prepend explicit embeddings - token_embeddings = torch.cat((embeddings, token_embeddings), dim=1) - - t = token_embeddings.shape[1] - assert t <= self.block_size, "Cannot forward, model block size is exhausted." - position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector - x = self.drop(token_embeddings + position_embeddings) - x = self.blocks(x) - x = self.taming_cinln_f(x) - logits = self.head(x) - - # if we are given some desired targets also calculate the loss - loss = None - if targets is not None: - loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1)) - - return logits, loss - - - -#### sampling utils - -def top_k_logits(logits, k): - v, ix = torch.topk(logits, k) - out = logits.clone() - out[out < v[:, [-1]]] = -float('Inf') - return out - -@torch.no_grad() -def sample(model, x, steps, temperature=1.0, sample=False, top_k=None): - """ - take a conditioning sequence of indices in x (of shape (b,t)) and predict the next token in - the sequence, feeding the predictions back into the model each time. Clearly the sampling - has quadratic complexity unlike an RNN that is only linear, and has a finite context window - of block_size, unlike an RNN that has an infinite context window. - """ - block_size = model.get_block_size() - model.eval() - for k in range(steps): - x_cond = x if x.size(1) <= block_size else x[:, -block_size:] # crop context if needed - logits, _ = model(x_cond) - # pluck the logits at the final step and scale by temperature - logits = logits[:, -1, :] / temperature - # optionally crop probabilities to only the top k options - if top_k is not None: - logits = top_k_logits(logits, top_k) - # apply softmax to convert to probabilities - probs = F.softmax(logits, dim=-1) - # sample from the distribution or take the most likely - if sample: - ix = torch.multinomial(probs, num_samples=1) - else: - _, ix = torch.topk(probs, k=1, dim=-1) - # append to the sequence and continue - x = torch.cat((x, ix), dim=1) - - return x - - -@torch.no_grad() -def sample_with_past(x, model, steps, temperature=1., sample_logits=True, - top_k=None, top_p=None, callback=None): - # x is conditioning - sample = x - cond_len = x.shape[1] - past = None - for n in range(steps): - if callback is not None: - callback(n) - logits, _, present = model.forward_with_past(x, past=past, past_length=(n+cond_len-1)) - if past is None: - past = [present] - else: - past.append(present) - logits = logits[:, -1, :] / temperature - if top_k is not None: - logits = top_k_top_p_filtering(logits, top_k=top_k, top_p=top_p) - - probs = F.softmax(logits, dim=-1) - if not sample_logits: - _, x = torch.topk(probs, k=1, dim=-1) - else: - x = torch.multinomial(probs, num_samples=1) - # append to the sequence and continue - sample = torch.cat((sample, x), dim=1) - del past - sample = sample[:, cond_len:] # cut conditioning off - return sample - - -#### clustering utils - -class KMeans(nn.Module): - def __init__(self, ncluster=512, nc=3, niter=10): - super().__init__() - self.ncluster = ncluster - self.nc = nc - self.niter = niter - self.shape = (3,32,32) - self.register_buffer("C", torch.zeros(self.ncluster,nc)) - self.register_buffer('initialized', torch.tensor(0, dtype=torch.uint8)) - - def is_initialized(self): - return self.initialized.item() == 1 - - @torch.no_grad() - def 
initialize(self, x): - N, D = x.shape - assert D == self.nc, D - c = x[torch.randperm(N)[:self.ncluster]] # init clusters at random - for i in range(self.niter): - # assign all pixels to the closest codebook element - a = ((x[:, None, :] - c[None, :, :])**2).sum(-1).argmin(1) - # move each codebook element to be the mean of the pixels that assigned to it - c = torch.stack([x[a==k].mean(0) for k in range(self.ncluster)]) - # re-assign any poorly positioned codebook elements - nanix = torch.any(torch.isnan(c), dim=1) - ndead = nanix.sum().item() - print('done step %d/%d, re-initialized %d dead clusters' % (i+1, self.niter, ndead)) - c[nanix] = x[torch.randperm(N)[:ndead]] # re-init dead clusters - - self.C.copy_(c) - self.initialized.fill_(1) - - - def forward(self, x, reverse=False, shape=None): - if not reverse: - # flatten - bs,c,h,w = x.shape - assert c == self.nc - x = x.reshape(bs,c,h*w,1) - C = self.C.permute(1,0) - C = C.reshape(1,c,1,self.ncluster) - a = ((x-C)**2).sum(1).argmin(-1) # bs, h*w indices - return a - else: - # flatten - bs, HW = x.shape - """ - c = self.C.reshape( 1, self.nc, 1, self.ncluster) - c = c[bs*[0],:,:,:] - c = c[:,:,HW*[0],:] - x = x.reshape(bs, 1, HW, 1) - x = x[:,3*[0],:,:] - x = torch.gather(c, dim=3, index=x) - """ - x = self.C[x] - x = x.permute(0,2,1) - shape = shape if shape is not None else self.shape - x = x.reshape(bs, *shape) - - return x diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/heuristics.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/heuristics.py deleted file mode 100644 index ebe4a96f589474f6f441858de2bb961c5e473c6d..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/heuristics.py +++ /dev/null @@ -1,139 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -import calendar -import time - -from email.utils import formatdate, parsedate, parsedate_tz - -from datetime import datetime, timedelta - -TIME_FMT = "%a, %d %b %Y %H:%M:%S GMT" - - -def expire_after(delta, date=None): - date = date or datetime.utcnow() - return date + delta - - -def datetime_to_header(dt): - return formatdate(calendar.timegm(dt.timetuple())) - - -class BaseHeuristic(object): - - def warning(self, response): - """ - Return a valid 1xx warning header value describing the cache - adjustments. - - The response is provided too allow warnings like 113 - http://tools.ietf.org/html/rfc7234#section-5.5.4 where we need - to explicitly say response is over 24 hours old. - """ - return '110 - "Response is Stale"' - - def update_headers(self, response): - """Update the response headers with any new headers. - - NOTE: This SHOULD always include some Warning header to - signify that the response was cached by the client, not - by way of the provided headers. 
- """ - return {} - - def apply(self, response): - updated_headers = self.update_headers(response) - - if updated_headers: - response.headers.update(updated_headers) - warning_header_value = self.warning(response) - if warning_header_value is not None: - response.headers.update({"Warning": warning_header_value}) - - return response - - -class OneDayCache(BaseHeuristic): - """ - Cache the response by providing an expires 1 day in the - future. - """ - - def update_headers(self, response): - headers = {} - - if "expires" not in response.headers: - date = parsedate(response.headers["date"]) - expires = expire_after(timedelta(days=1), date=datetime(*date[:6])) - headers["expires"] = datetime_to_header(expires) - headers["cache-control"] = "public" - return headers - - -class ExpiresAfter(BaseHeuristic): - """ - Cache **all** requests for a defined time period. - """ - - def __init__(self, **kw): - self.delta = timedelta(**kw) - - def update_headers(self, response): - expires = expire_after(self.delta) - return {"expires": datetime_to_header(expires), "cache-control": "public"} - - def warning(self, response): - tmpl = "110 - Automatically cached for %s. Response might be stale" - return tmpl % self.delta - - -class LastModified(BaseHeuristic): - """ - If there is no Expires header already, fall back on Last-Modified - using the heuristic from - http://tools.ietf.org/html/rfc7234#section-4.2.2 - to calculate a reasonable value. - - Firefox also does something like this per - https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching_FAQ - http://lxr.mozilla.org/mozilla-release/source/netwerk/protocol/http/nsHttpResponseHead.cpp#397 - Unlike mozilla we limit this to 24-hr. - """ - cacheable_by_default_statuses = { - 200, 203, 204, 206, 300, 301, 404, 405, 410, 414, 501 - } - - def update_headers(self, resp): - headers = resp.headers - - if "expires" in headers: - return {} - - if "cache-control" in headers and headers["cache-control"] != "public": - return {} - - if resp.status not in self.cacheable_by_default_statuses: - return {} - - if "date" not in headers or "last-modified" not in headers: - return {} - - date = calendar.timegm(parsedate_tz(headers["date"])) - last_modified = parsedate(headers["last-modified"]) - if date is None or last_modified is None: - return {} - - now = time.time() - current_age = max(0, now - date) - delta = date - calendar.timegm(last_modified) - freshness_lifetime = max(0, min(delta / 10, 24 * 3600)) - if freshness_lifetime <= current_age: - return {} - - expires = date + freshness_lifetime - return {"expires": time.strftime(TIME_FMT, time.gmtime(expires))} - - def warning(self, resp): - return None diff --git a/spaces/Blessin/one-liners/app.py b/spaces/Blessin/one-liners/app.py deleted file mode 100644 index 5554e7e46473978e1c133fe3e3fd5cb3b16cc2cf..0000000000000000000000000000000000000000 --- a/spaces/Blessin/one-liners/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr -import random -from datasets import load_dataset - -# Load the dataset from Hugging Face -dataset = load_dataset("Blessin/dialogues-one-liners") - -# Extract the dialogues from the dataset -DIALOGUES = dataset["train"]["dialogues"] - -def generate_statement(): - """Return a random dialogue from the dataset.""" - # Pick a random sublist from the dataset - random_dialogue_list = random.choice(DIALOGUES) - # Pick a random dialogue from the sublist - return random.choice(random_dialogue_list) - - -def main(): - # Define the UI using gr.Interface - interface = gr.Interface( - 
fn=generate_statement, # Function to call on button press - inputs=[], # No inputs required - outputs="text", # Output is a text area - live=False, # Only generate statement after button press - description="Press the button to generate a random statement from the dataset." - ) - - # Launch the UI - interface.launch(share=True) - -if __name__ == "__main__": - main() diff --git a/spaces/CVPR/GFPGAN-example/tests/test_arcface_arch.py b/spaces/CVPR/GFPGAN-example/tests/test_arcface_arch.py deleted file mode 100644 index b4b28d33800ae78a354e078e14373d2ee159dc7b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/GFPGAN-example/tests/test_arcface_arch.py +++ /dev/null @@ -1,49 +0,0 @@ -import torch - -from gfpgan.archs.arcface_arch import BasicBlock, Bottleneck, ResNetArcFace - - -def test_resnetarcface(): - """Test arch: ResNetArcFace.""" - - # model init and forward (gpu) - if torch.cuda.is_available(): - net = ResNetArcFace(block='IRBlock', layers=(2, 2, 2, 2), use_se=True).cuda().eval() - img = torch.rand((1, 1, 128, 128), dtype=torch.float32).cuda() - output = net(img) - assert output.shape == (1, 512) - - # -------------------- without SE block ----------------------- # - net = ResNetArcFace(block='IRBlock', layers=(2, 2, 2, 2), use_se=False).cuda().eval() - output = net(img) - assert output.shape == (1, 512) - - -def test_basicblock(): - """Test the BasicBlock in arcface_arch""" - block = BasicBlock(1, 3, stride=1, downsample=None).cuda() - img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda() - output = block(img) - assert output.shape == (1, 3, 12, 12) - - # ----------------- use the downsmaple module--------------- # - downsample = torch.nn.UpsamplingNearest2d(scale_factor=0.5).cuda() - block = BasicBlock(1, 3, stride=2, downsample=downsample).cuda() - img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda() - output = block(img) - assert output.shape == (1, 3, 6, 6) - - -def test_bottleneck(): - """Test the Bottleneck in arcface_arch""" - block = Bottleneck(1, 1, stride=1, downsample=None).cuda() - img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda() - output = block(img) - assert output.shape == (1, 4, 12, 12) - - # ----------------- use the downsmaple module--------------- # - downsample = torch.nn.UpsamplingNearest2d(scale_factor=0.5).cuda() - block = Bottleneck(1, 1, stride=2, downsample=downsample).cuda() - img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda() - output = block(img) - assert output.shape == (1, 4, 6, 6) diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_multiple_inheritance.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_multiple_inheritance.cpp deleted file mode 100644 index 70e34178540d210770fc862b5520a3b3c9d91a5c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_multiple_inheritance.cpp +++ /dev/null @@ -1,220 +0,0 @@ -/* - tests/test_multiple_inheritance.cpp -- multiple inheritance, - implicit MI casts - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#include "pybind11_tests.h" -#include "constructor_stats.h" - -// Many bases for testing that multiple inheritance from many classes (i.e. requiring extra -// space for holder constructed flags) works. 
-template struct BaseN { - BaseN(int i) : i(i) { } - int i; -}; - -// test_mi_static_properties -struct Vanilla { - std::string vanilla() { return "Vanilla"; }; -}; -struct WithStatic1 { - static std::string static_func1() { return "WithStatic1"; }; - static int static_value1; -}; -struct WithStatic2 { - static std::string static_func2() { return "WithStatic2"; }; - static int static_value2; -}; -struct VanillaStaticMix1 : Vanilla, WithStatic1, WithStatic2 { - static std::string static_func() { return "VanillaStaticMix1"; } - static int static_value; -}; -struct VanillaStaticMix2 : WithStatic1, Vanilla, WithStatic2 { - static std::string static_func() { return "VanillaStaticMix2"; } - static int static_value; -}; -int WithStatic1::static_value1 = 1; -int WithStatic2::static_value2 = 2; -int VanillaStaticMix1::static_value = 12; -int VanillaStaticMix2::static_value = 12; - -TEST_SUBMODULE(multiple_inheritance, m) { - - // test_multiple_inheritance_mix1 - // test_multiple_inheritance_mix2 - struct Base1 { - Base1(int i) : i(i) { } - int foo() { return i; } - int i; - }; - py::class_ b1(m, "Base1"); - b1.def(py::init()) - .def("foo", &Base1::foo); - - struct Base2 { - Base2(int i) : i(i) { } - int bar() { return i; } - int i; - }; - py::class_ b2(m, "Base2"); - b2.def(py::init()) - .def("bar", &Base2::bar); - - - // test_multiple_inheritance_cpp - struct Base12 : Base1, Base2 { - Base12(int i, int j) : Base1(i), Base2(j) { } - }; - struct MIType : Base12 { - MIType(int i, int j) : Base12(i, j) { } - }; - py::class_(m, "Base12"); - py::class_(m, "MIType") - .def(py::init()); - - - // test_multiple_inheritance_python_many_bases - #define PYBIND11_BASEN(N) py::class_>(m, "BaseN" #N).def(py::init()).def("f" #N, [](BaseN &b) { return b.i + N; }) - PYBIND11_BASEN( 1); PYBIND11_BASEN( 2); PYBIND11_BASEN( 3); PYBIND11_BASEN( 4); - PYBIND11_BASEN( 5); PYBIND11_BASEN( 6); PYBIND11_BASEN( 7); PYBIND11_BASEN( 8); - PYBIND11_BASEN( 9); PYBIND11_BASEN(10); PYBIND11_BASEN(11); PYBIND11_BASEN(12); - PYBIND11_BASEN(13); PYBIND11_BASEN(14); PYBIND11_BASEN(15); PYBIND11_BASEN(16); - PYBIND11_BASEN(17); - - // Uncommenting this should result in a compile time failure (MI can only be specified via - // template parameters because pybind has to know the types involved; see discussion in #742 for - // details). -// struct Base12v2 : Base1, Base2 { -// Base12v2(int i, int j) : Base1(i), Base2(j) { } -// }; -// py::class_(m, "Base12v2", b1, b2) -// .def(py::init()); - - - // test_multiple_inheritance_virtbase - // Test the case where not all base classes are specified, and where pybind11 requires the - // py::multiple_inheritance flag to perform proper casting between types. 
- struct Base1a { - Base1a(int i) : i(i) { } - int foo() { return i; } - int i; - }; - py::class_>(m, "Base1a") - .def(py::init()) - .def("foo", &Base1a::foo); - - struct Base2a { - Base2a(int i) : i(i) { } - int bar() { return i; } - int i; - }; - py::class_>(m, "Base2a") - .def(py::init()) - .def("bar", &Base2a::bar); - - struct Base12a : Base1a, Base2a { - Base12a(int i, int j) : Base1a(i), Base2a(j) { } - }; - py::class_>(m, "Base12a", py::multiple_inheritance()) - .def(py::init()); - - m.def("bar_base2a", [](Base2a *b) { return b->bar(); }); - m.def("bar_base2a_sharedptr", [](std::shared_ptr b) { return b->bar(); }); - - // test_mi_unaligned_base - // test_mi_base_return - // Issue #801: invalid casting to derived type with MI bases - struct I801B1 { int a = 1; I801B1() = default; I801B1(const I801B1 &) = default; virtual ~I801B1() = default; }; - struct I801B2 { int b = 2; I801B2() = default; I801B2(const I801B2 &) = default; virtual ~I801B2() = default; }; - struct I801C : I801B1, I801B2 {}; - struct I801D : I801C {}; // Indirect MI - // Unregistered classes: - struct I801B3 { int c = 3; virtual ~I801B3() = default; }; - struct I801E : I801B3, I801D {}; - - py::class_>(m, "I801B1").def(py::init<>()).def_readonly("a", &I801B1::a); - py::class_>(m, "I801B2").def(py::init<>()).def_readonly("b", &I801B2::b); - py::class_>(m, "I801C").def(py::init<>()); - py::class_>(m, "I801D").def(py::init<>()); - - // Two separate issues here: first, we want to recognize a pointer to a base type as being a - // known instance even when the pointer value is unequal (i.e. due to a non-first - // multiple-inheritance base class): - m.def("i801b1_c", [](I801C *c) { return static_cast(c); }); - m.def("i801b2_c", [](I801C *c) { return static_cast(c); }); - m.def("i801b1_d", [](I801D *d) { return static_cast(d); }); - m.def("i801b2_d", [](I801D *d) { return static_cast(d); }); - - // Second, when returned a base class pointer to a derived instance, we cannot assume that the - // pointer is `reinterpret_cast`able to the derived pointer because, like above, the base class - // pointer could be offset. 
- m.def("i801c_b1", []() -> I801B1 * { return new I801C(); }); - m.def("i801c_b2", []() -> I801B2 * { return new I801C(); }); - m.def("i801d_b1", []() -> I801B1 * { return new I801D(); }); - m.def("i801d_b2", []() -> I801B2 * { return new I801D(); }); - - // Return a base class pointer to a pybind-registered type when the actual derived type - // isn't pybind-registered (and uses multiple-inheritance to offset the pybind base) - m.def("i801e_c", []() -> I801C * { return new I801E(); }); - m.def("i801e_b2", []() -> I801B2 * { return new I801E(); }); - - - // test_mi_static_properties - py::class_(m, "Vanilla") - .def(py::init<>()) - .def("vanilla", &Vanilla::vanilla); - - py::class_(m, "WithStatic1") - .def(py::init<>()) - .def_static("static_func1", &WithStatic1::static_func1) - .def_readwrite_static("static_value1", &WithStatic1::static_value1); - - py::class_(m, "WithStatic2") - .def(py::init<>()) - .def_static("static_func2", &WithStatic2::static_func2) - .def_readwrite_static("static_value2", &WithStatic2::static_value2); - - py::class_( - m, "VanillaStaticMix1") - .def(py::init<>()) - .def_static("static_func", &VanillaStaticMix1::static_func) - .def_readwrite_static("static_value", &VanillaStaticMix1::static_value); - - py::class_( - m, "VanillaStaticMix2") - .def(py::init<>()) - .def_static("static_func", &VanillaStaticMix2::static_func) - .def_readwrite_static("static_value", &VanillaStaticMix2::static_value); - - -#if !(defined(PYPY_VERSION) && (PYPY_VERSION_NUM < 0x06000000)) - struct WithDict { }; - struct VanillaDictMix1 : Vanilla, WithDict { }; - struct VanillaDictMix2 : WithDict, Vanilla { }; - py::class_(m, "WithDict", py::dynamic_attr()).def(py::init<>()); - py::class_(m, "VanillaDictMix1").def(py::init<>()); - py::class_(m, "VanillaDictMix2").def(py::init<>()); -#endif - - // test_diamond_inheritance - // Issue #959: segfault when constructing diamond inheritance instance - // All of these have int members so that there will be various unequal pointers involved. - struct B { int b; B() = default; B(const B&) = default; virtual ~B() = default; }; - struct C0 : public virtual B { int c0; }; - struct C1 : public virtual B { int c1; }; - struct D : public C0, public C1 { int d; }; - py::class_(m, "B") - .def("b", [](B *self) { return self; }); - py::class_(m, "C0") - .def("c0", [](C0 *self) { return self; }); - py::class_(m, "C1") - .def("c1", [](C1 *self) { return self; }); - py::class_(m, "D") - .def(py::init<>()); -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/catrig.h b/spaces/CVPR/LIVE/thrust/thrust/detail/complex/catrig.h deleted file mode 100644 index 6549fbb2eea699078da00b5d93e346ac93c6f73e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/catrig.h +++ /dev/null @@ -1,785 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * Copyright 2013 Filipe RNC Maia - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*- - * Copyright (c) 2012 Stephen Montgomery-Smith - * All rights reserved. 
- * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - */ - -/* - * Adapted from FreeBSD by Filipe Maia : - * freebsd/lib/msun/src/catrig.c - */ - -#pragma once - -#include -#include -#include -#include -#include - -namespace thrust{ -namespace detail{ -namespace complex{ - -using thrust::complex; - -__host__ __device__ -inline void raise_inexact(){ - const volatile float tiny = 7.888609052210118054117286e-31; /* 0x1p-100; */ - // needs the volatile to prevent compiler from ignoring it - volatile float junk = 1 + tiny; - (void)junk; -} - -__host__ __device__ inline complex clog_for_large_values(complex z); - -/* - * Testing indicates that all these functions are accurate up to 4 ULP. - * The functions casin(h) and cacos(h) are about 2.5 times slower than asinh. - * The functions catan(h) are a little under 2 times slower than atanh. - * - * The code for casinh, casin, cacos, and cacosh comes first. The code is - * rather complicated, and the four functions are highly interdependent. - * - * The code for catanh and catan comes at the end. It is much simpler than - * the other functions, and the code for these can be disconnected from the - * rest of the code. - */ - -/* - * ================================ - * | casinh, casin, cacos, cacosh | - * ================================ - */ - -/* - * The algorithm is very close to that in "Implementing the complex arcsine - * and arccosine functions using exception handling" by T. E. Hull, Thomas F. - * Fairgrieve, and Ping Tak Peter Tang, published in ACM Transactions on - * Mathematical Software, Volume 23 Issue 3, 1997, Pages 299-335, - * http://dl.acm.org/citation.cfm?id=275324. - * - * Throughout we use the convention z = x + I*y. - * - * casinh(z) = sign(x)*log(A+sqrt(A*A-1)) + I*asin(B) - * where - * A = (|z+I| + |z-I|) / 2 - * B = (|z+I| - |z-I|) / 2 = y/A - * - * These formulas become numerically unstable: - * (a) for Re(casinh(z)) when z is close to the line segment [-I, I] (that - * is, Re(casinh(z)) is close to 0); - * (b) for Im(casinh(z)) when z is close to either of the intervals - * [I, I*infinity) or (-I*infinity, -I] (that is, |Im(casinh(z))| is - * close to PI/2). 
- * - * These numerical problems are overcome by defining - * f(a, b) = (hypot(a, b) - b) / 2 = a*a / (hypot(a, b) + b) / 2 - * Then if A < A_crossover, we use - * log(A + sqrt(A*A-1)) = log1p((A-1) + sqrt((A-1)*(A+1))) - * A-1 = f(x, 1+y) + f(x, 1-y) - * and if B > B_crossover, we use - * asin(B) = atan2(y, sqrt(A*A - y*y)) = atan2(y, sqrt((A+y)*(A-y))) - * A-y = f(x, y+1) + f(x, y-1) - * where without loss of generality we have assumed that x and y are - * non-negative. - * - * Much of the difficulty comes because the intermediate computations may - * produce overflows or underflows. This is dealt with in the paper by Hull - * et al by using exception handling. We do this by detecting when - * computations risk underflow or overflow. The hardest part is handling the - * underflows when computing f(a, b). - * - * Note that the function f(a, b) does not appear explicitly in the paper by - * Hull et al, but the idea may be found on pages 308 and 309. Introducing the - * function f(a, b) allows us to concentrate many of the clever tricks in this - * paper into one function. - */ - -/* - * Function f(a, b, hypot_a_b) = (hypot(a, b) - b) / 2. - * Pass hypot(a, b) as the third argument. - */ -__host__ __device__ -inline double -f(double a, double b, double hypot_a_b) -{ - if (b < 0) - return ((hypot_a_b - b) / 2); - if (b == 0) - return (a / 2); - return (a * a / (hypot_a_b + b) / 2); -} - -/* - * All the hard work is contained in this function. - * x and y are assumed positive or zero, and less than RECIP_EPSILON. - * Upon return: - * rx = Re(casinh(z)) = -Im(cacos(y + I*x)). - * B_is_usable is set to 1 if the value of B is usable. - * If B_is_usable is set to 0, sqrt_A2my2 = sqrt(A*A - y*y), and new_y = y. - * If returning sqrt_A2my2 has potential to result in an underflow, it is - * rescaled, and new_y is similarly rescaled. - */ -__host__ __device__ -inline void -do_hard_work(double x, double y, double *rx, int *B_is_usable, double *B, - double *sqrt_A2my2, double *new_y) -{ - double R, S, A; /* A, B, R, and S are as in Hull et al. */ - double Am1, Amy; /* A-1, A-y. */ - const double A_crossover = 10; /* Hull et al suggest 1.5, but 10 works better */ - const double FOUR_SQRT_MIN = 5.966672584960165394632772e-154; /* =0x1p-509; >= 4 * sqrt(DBL_MIN) */ - const double B_crossover = 0.6417; /* suggested by Hull et al */ - - R = hypot(x, y + 1); /* |z+I| */ - S = hypot(x, y - 1); /* |z-I| */ - - /* A = (|z+I| + |z-I|) / 2 */ - A = (R + S) / 2; - /* - * Mathematically A >= 1. There is a small chance that this will not - * be so because of rounding errors. So we will make certain it is - * so. - */ - if (A < 1) - A = 1; - - if (A < A_crossover) { - /* - * Am1 = fp + fm, where fp = f(x, 1+y), and fm = f(x, 1-y). - * rx = log1p(Am1 + sqrt(Am1*(A+1))) - */ - if (y == 1 && x < DBL_EPSILON * DBL_EPSILON / 128) { - /* - * fp is of order x^2, and fm = x/2. - * A = 1 (inexactly). - */ - *rx = sqrt(x); - } else if (x >= DBL_EPSILON * fabs(y - 1)) { - /* - * Underflow will not occur because - * x >= DBL_EPSILON^2/128 >= FOUR_SQRT_MIN - */ - Am1 = f(x, 1 + y, R) + f(x, 1 - y, S); - *rx = log1p(Am1 + sqrt(Am1 * (A + 1))); - } else if (y < 1) { - /* - * fp = x*x/(1+y)/4, fm = x*x/(1-y)/4, and - * A = 1 (inexactly). - */ - *rx = x / sqrt((1 - y) * (1 + y)); - } else { /* if (y > 1) */ - /* - * A-1 = y-1 (inexactly). 
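// Minimal sketch of the A/B decomposition described in the comments above,
// for readers who want the bare formulas without the crossover, overflow and
// underflow handling that do_hard_work() provides. Assumes x >= 0, y >= 0 and
// moderate magnitudes; naive_casinh is a hypothetical name used only here.
#include <cmath>
#include <complex>

inline std::complex<double> naive_casinh(double x, double y)
{
  const double R = std::hypot(x, y + 1);                 // |z + I|
  const double S = std::hypot(x, y - 1);                 // |z - I|
  const double A = (R + S) / 2;                          // >= 1 mathematically
  const double B = y / A;                                // == (|z+I| - |z-I|) / 2

  const double rx = std::log(A + std::sqrt(A * A - 1));  // Re(casinh) for x, y >= 0
  const double ry = std::asin(B);                        // Im(casinh) for x, y >= 0
  return std::complex<double>(rx, ry);
}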
- */ - *rx = log1p((y - 1) + sqrt((y - 1) * (y + 1))); - } - } else { - *rx = log(A + sqrt(A * A - 1)); - } - - *new_y = y; - - if (y < FOUR_SQRT_MIN) { - /* - * Avoid a possible underflow caused by y/A. For casinh this - * would be legitimate, but will be picked up by invoking atan2 - * later on. For cacos this would not be legitimate. - */ - *B_is_usable = 0; - *sqrt_A2my2 = A * (2 / DBL_EPSILON); - *new_y = y * (2 / DBL_EPSILON); - return; - } - - /* B = (|z+I| - |z-I|) / 2 = y/A */ - *B = y / A; - *B_is_usable = 1; - - if (*B > B_crossover) { - *B_is_usable = 0; - /* - * Amy = fp + fm, where fp = f(x, y+1), and fm = f(x, y-1). - * sqrt_A2my2 = sqrt(Amy*(A+y)) - */ - if (y == 1 && x < DBL_EPSILON / 128) { - /* - * fp is of order x^2, and fm = x/2. - * A = 1 (inexactly). - */ - *sqrt_A2my2 = sqrt(x) * sqrt((A + y) / 2); - } else if (x >= DBL_EPSILON * fabs(y - 1)) { - /* - * Underflow will not occur because - * x >= DBL_EPSILON/128 >= FOUR_SQRT_MIN - * and - * x >= DBL_EPSILON^2 >= FOUR_SQRT_MIN - */ - Amy = f(x, y + 1, R) + f(x, y - 1, S); - *sqrt_A2my2 = sqrt(Amy * (A + y)); - } else if (y > 1) { - /* - * fp = x*x/(y+1)/4, fm = x*x/(y-1)/4, and - * A = y (inexactly). - * - * y < RECIP_EPSILON. So the following - * scaling should avoid any underflow problems. - */ - *sqrt_A2my2 = x * (4 / DBL_EPSILON / DBL_EPSILON) * y / - sqrt((y + 1) * (y - 1)); - *new_y = y * (4 / DBL_EPSILON / DBL_EPSILON); - } else { /* if (y < 1) */ - /* - * fm = 1-y >= DBL_EPSILON, fp is of order x^2, and - * A = 1 (inexactly). - */ - *sqrt_A2my2 = sqrt((1 - y) * (1 + y)); - } - } -} - -/* - * casinh(z) = z + O(z^3) as z -> 0 - * - * casinh(z) = sign(x)*clog(sign(x)*z) + O(1/z^2) as z -> infinity - * The above formula works for the imaginary part as well, because - * Im(casinh(z)) = sign(x)*atan2(sign(x)*y, fabs(x)) + O(y/z^3) - * as z -> infinity, uniformly in y - */ -__host__ __device__ inline -complex casinh(complex z) -{ - double x, y, ax, ay, rx, ry, B, sqrt_A2my2, new_y; - int B_is_usable; - complex w; - const double RECIP_EPSILON = 1.0 / DBL_EPSILON; - const double m_ln2 = 6.9314718055994531e-1; /* 0x162e42fefa39ef.0p-53 */ - x = z.real(); - y = z.imag(); - ax = fabs(x); - ay = fabs(y); - - if (isnan(x) || isnan(y)) { - /* casinh(+-Inf + I*NaN) = +-Inf + I*NaN */ - if (isinf(x)) - return (complex(x, y + y)); - /* casinh(NaN + I*+-Inf) = opt(+-)Inf + I*NaN */ - if (isinf(y)) - return (complex(y, x + x)); - /* casinh(NaN + I*0) = NaN + I*0 */ - if (y == 0) - return (complex(x + x, y)); - /* - * All other cases involving NaN return NaN + I*NaN. - * C99 leaves it optional whether to raise invalid if one of - * the arguments is not NaN, so we opt not to raise it. - */ - return (complex(x + 0.0 + (y + 0.0), x + 0.0 + (y + 0.0))); - } - - if (ax > RECIP_EPSILON || ay > RECIP_EPSILON) { - /* clog...() will raise inexact unless x or y is infinite. */ - if (signbit(x) == 0) - w = clog_for_large_values(z) + m_ln2; - else - w = clog_for_large_values(-z) + m_ln2; - return (complex(copysign(w.real(), x), copysign(w.imag(), y))); - } - - /* Avoid spuriously raising inexact for z = 0. */ - if (x == 0 && y == 0) - return (z); - - /* All remaining cases are inexact. 
*/ - raise_inexact(); - - const double SQRT_6_EPSILON = 3.6500241499888571e-8; /* 0x13988e1409212e.0p-77 */ - if (ax < SQRT_6_EPSILON / 4 && ay < SQRT_6_EPSILON / 4) - return (z); - - do_hard_work(ax, ay, &rx, &B_is_usable, &B, &sqrt_A2my2, &new_y); - if (B_is_usable) - ry = asin(B); - else - ry = atan2(new_y, sqrt_A2my2); - return (complex(copysign(rx, x), copysign(ry, y))); -} - -/* - * casin(z) = reverse(casinh(reverse(z))) - * where reverse(x + I*y) = y + I*x = I*conj(z). - */ -__host__ __device__ inline -complex casin(complex z) -{ - complex w = casinh(complex(z.imag(), z.real())); - - return (complex(w.imag(), w.real())); -} - -/* - * cacos(z) = PI/2 - casin(z) - * but do the computation carefully so cacos(z) is accurate when z is - * close to 1. - * - * cacos(z) = PI/2 - z + O(z^3) as z -> 0 - * - * cacos(z) = -sign(y)*I*clog(z) + O(1/z^2) as z -> infinity - * The above formula works for the real part as well, because - * Re(cacos(z)) = atan2(fabs(y), x) + O(y/z^3) - * as z -> infinity, uniformly in y - */ -__host__ __device__ inline -complex cacos(complex z) -{ - double x, y, ax, ay, rx, ry, B, sqrt_A2mx2, new_x; - int sx, sy; - int B_is_usable; - complex w; - const double pio2_hi = 1.5707963267948966e0; /* 0x1921fb54442d18.0p-52 */ - const volatile double pio2_lo = 6.1232339957367659e-17; /* 0x11a62633145c07.0p-106 */ - const double m_ln2 = 6.9314718055994531e-1; /* 0x162e42fefa39ef.0p-53 */ - - x = z.real(); - y = z.imag(); - sx = signbit(x); - sy = signbit(y); - ax = fabs(x); - ay = fabs(y); - - if (isnan(x) || isnan(y)) { - /* cacos(+-Inf + I*NaN) = NaN + I*opt(-)Inf */ - if (isinf(x)) - return (complex(y + y, -infinity())); - /* cacos(NaN + I*+-Inf) = NaN + I*-+Inf */ - if (isinf(y)) - return (complex(x + x, -y)); - /* cacos(0 + I*NaN) = PI/2 + I*NaN with inexact */ - if (x == 0) - return (complex(pio2_hi + pio2_lo, y + y)); - /* - * All other cases involving NaN return NaN + I*NaN. - * C99 leaves it optional whether to raise invalid if one of - * the arguments is not NaN, so we opt not to raise it. - */ - return (complex(x + 0.0 + (y + 0), x + 0.0 + (y + 0))); - } - - const double RECIP_EPSILON = 1.0 / DBL_EPSILON; - if (ax > RECIP_EPSILON || ay > RECIP_EPSILON) { - /* clog...() will raise inexact unless x or y is infinite. */ - w = clog_for_large_values(z); - rx = fabs(w.imag()); - ry = w.real() + m_ln2; - if (sy == 0) - ry = -ry; - return (complex(rx, ry)); - } - - /* Avoid spuriously raising inexact for z = 1. */ - if (x == 1.0 && y == 0.0) - return (complex(0, -y)); - - /* All remaining cases are inexact. */ - raise_inexact(); - - const double SQRT_6_EPSILON = 3.6500241499888571e-8; /* 0x13988e1409212e.0p-77 */ - if (ax < SQRT_6_EPSILON / 4 && ay < SQRT_6_EPSILON / 4) - return (complex(pio2_hi - (x - pio2_lo), -y)); - - do_hard_work(ay, ax, &ry, &B_is_usable, &B, &sqrt_A2mx2, &new_x); - if (B_is_usable) { - if (sx == 0) - rx = acos(B); - else - rx = acos(-B); - } else { - if (sx == 0) - rx = atan2(sqrt_A2mx2, new_x); - else - rx = atan2(sqrt_A2mx2, -new_x); - } - if (sy == 0) - ry = -ry; - return (complex(rx, ry)); -} - -/* - * cacosh(z) = I*cacos(z) or -I*cacos(z) - * where the sign is chosen so Re(cacosh(z)) >= 0. 
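// Minimal sketch of the "reverse" trick used by casin() above (and by catan()
// further down): swap the real and imaginary parts, apply the hyperbolic
// function, and swap back, i.e. casin(z) = reverse(casinh(reverse(z))) with
// reverse(x + I*y) = y + I*x. The sketch uses std::asinh from <complex> purely
// to demonstrate the identity; asin_via_reverse is a hypothetical name.
#include <complex>

inline std::complex<double> asin_via_reverse(std::complex<double> z)
{
  const std::complex<double> w =
      std::asinh(std::complex<double>(z.imag(), z.real())); // casinh(reverse(z))
  return std::complex<double>(w.imag(), w.real());          // reverse(...)
}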
- */ -__host__ __device__ inline -complex cacosh(complex z) -{ - complex w; - double rx, ry; - - w = cacos(z); - rx = w.real(); - ry = w.imag(); - /* cacosh(NaN + I*NaN) = NaN + I*NaN */ - if (isnan(rx) && isnan(ry)) - return (complex(ry, rx)); - /* cacosh(NaN + I*+-Inf) = +Inf + I*NaN */ - /* cacosh(+-Inf + I*NaN) = +Inf + I*NaN */ - if (isnan(rx)) - return (complex(fabs(ry), rx)); - /* cacosh(0 + I*NaN) = NaN + I*NaN */ - if (isnan(ry)) - return (complex(ry, ry)); - return (complex(fabs(ry), copysign(rx, z.imag()))); -} - -/* - * Optimized version of clog() for |z| finite and larger than ~RECIP_EPSILON. - */ -__host__ __device__ inline -complex clog_for_large_values(complex z) -{ - double x, y; - double ax, ay, t; - const double m_e = 2.7182818284590452e0; /* 0x15bf0a8b145769.0p-51 */ - - x = z.real(); - y = z.imag(); - ax = fabs(x); - ay = fabs(y); - if (ax < ay) { - t = ax; - ax = ay; - ay = t; - } - - /* - * Avoid overflow in hypot() when x and y are both very large. - * Divide x and y by E, and then add 1 to the logarithm. This depends - * on E being larger than sqrt(2). - * Dividing by E causes an insignificant loss of accuracy; however - * this method is still poor since it is uneccessarily slow. - */ - if (ax > DBL_MAX / 2) - return (complex(log(hypot(x / m_e, y / m_e)) + 1, atan2(y, x))); - - /* - * Avoid overflow when x or y is large. Avoid underflow when x or - * y is small. - */ - const double QUARTER_SQRT_MAX = 5.966672584960165394632772e-154; /* = 0x1p509; <= sqrt(DBL_MAX) / 4 */ - const double SQRT_MIN = 1.491668146240041348658193e-154; /* = 0x1p-511; >= sqrt(DBL_MIN) */ - if (ax > QUARTER_SQRT_MAX || ay < SQRT_MIN) - return (complex(log(hypot(x, y)), atan2(y, x))); - - return (complex(log(ax * ax + ay * ay) / 2, atan2(y, x))); -} - -/* - * ================= - * | catanh, catan | - * ================= - */ - -/* - * sum_squares(x,y) = x*x + y*y (or just x*x if y*y would underflow). - * Assumes x*x and y*y will not overflow. - * Assumes x and y are finite. - * Assumes y is non-negative. - * Assumes fabs(x) >= DBL_EPSILON. - */ -__host__ __device__ -inline double sum_squares(double x, double y) -{ - const double SQRT_MIN = 1.491668146240041348658193e-154; /* = 0x1p-511; >= sqrt(DBL_MIN) */ - /* Avoid underflow when y is small. */ - if (y < SQRT_MIN) - return (x * x); - - return (x * x + y * y); -} - -/* - * real_part_reciprocal(x, y) = Re(1/(x+I*y)) = x/(x*x + y*y). - * Assumes x and y are not NaN, and one of x and y is larger than - * RECIP_EPSILON. We avoid unwarranted underflow. It is important to not use - * the code creal(1/z), because the imaginary part may produce an unwanted - * underflow. - * This is only called in a context where inexact is always raised before - * the call, so no effort is made to avoid or force inexact. - */ -__host__ __device__ -inline double real_part_reciprocal(double x, double y) -{ - double scale; - uint32_t hx, hy; - int32_t ix, iy; - - /* - * This code is inspired by the C99 document n1124.pdf, Section G.5.1, - * example 2. - */ - get_high_word(hx, x); - ix = hx & 0x7ff00000; - get_high_word(hy, y); - iy = hy & 0x7ff00000; - //#define BIAS (DBL_MAX_EXP - 1) - const int BIAS = DBL_MAX_EXP - 1; - /* XXX more guard digits are useful iff there is extra precision. 
*/ - //#define CUTOFF (DBL_MANT_DIG / 2 + 1) /* just half or 1 guard digit */ - const int CUTOFF = (DBL_MANT_DIG / 2 + 1); - if (ix - iy >= CUTOFF << 20 || isinf(x)) - return (1 / x); /* +-Inf -> +-0 is special */ - if (iy - ix >= CUTOFF << 20) - return (x / y / y); /* should avoid double div, but hard */ - if (ix <= (BIAS + DBL_MAX_EXP / 2 - CUTOFF) << 20) - return (x / (x * x + y * y)); - scale = 1; - set_high_word(scale, 0x7ff00000 - ix); /* 2**(1-ilogb(x)) */ - x *= scale; - y *= scale; - return (x / (x * x + y * y) * scale); -} - - -/* - * catanh(z) = log((1+z)/(1-z)) / 2 - * = log1p(4*x / |z-1|^2) / 4 - * + I * atan2(2*y, (1-x)*(1+x)-y*y) / 2 - * - * catanh(z) = z + O(z^3) as z -> 0 - * - * catanh(z) = 1/z + sign(y)*I*PI/2 + O(1/z^3) as z -> infinity - * The above formula works for the real part as well, because - * Re(catanh(z)) = x/|z|^2 + O(x/z^4) - * as z -> infinity, uniformly in x - */ -#if THRUST_CPP_DIALECT >= 2011 || THRUST_HOST_COMPILER != THRUST_HOST_COMPILER_MSVC -__host__ __device__ inline -complex catanh(complex z) -{ - double x, y, ax, ay, rx, ry; - const volatile double pio2_lo = 6.1232339957367659e-17; /* 0x11a62633145c07.0p-106 */ - const double pio2_hi = 1.5707963267948966e0;/* 0x1921fb54442d18.0p-52 */ - - - x = z.real(); - y = z.imag(); - ax = fabs(x); - ay = fabs(y); - - /* This helps handle many cases. */ - if (y == 0 && ax <= 1) - return (complex(atanh(x), y)); - - /* To ensure the same accuracy as atan(), and to filter out z = 0. */ - if (x == 0) - return (complex(x, atan(y))); - - if (isnan(x) || isnan(y)) { - /* catanh(+-Inf + I*NaN) = +-0 + I*NaN */ - if (isinf(x)) - return (complex(copysign(0.0, x), y + y)); - /* catanh(NaN + I*+-Inf) = sign(NaN)0 + I*+-PI/2 */ - if (isinf(y)) - return (complex(copysign(0.0, x), - copysign(pio2_hi + pio2_lo, y))); - /* - * All other cases involving NaN return NaN + I*NaN. - * C99 leaves it optional whether to raise invalid if one of - * the arguments is not NaN, so we opt not to raise it. - */ - return (complex(x + 0.0 + (y + 0), x + 0.0 + (y + 0))); - } - - const double RECIP_EPSILON = 1.0 / DBL_EPSILON; - if (ax > RECIP_EPSILON || ay > RECIP_EPSILON) - return (complex(real_part_reciprocal(x, y), - copysign(pio2_hi + pio2_lo, y))); - - const double SQRT_3_EPSILON = 2.5809568279517849e-8; /* 0x1bb67ae8584caa.0p-78 */ - if (ax < SQRT_3_EPSILON / 2 && ay < SQRT_3_EPSILON / 2) { - /* - * z = 0 was filtered out above. All other cases must raise - * inexact, but this is the only only that needs to do it - * explicitly. - */ - raise_inexact(); - return (z); - } - - const double m_ln2 = 6.9314718055994531e-1; /* 0x162e42fefa39ef.0p-53 */ - if (ax == 1 && ay < DBL_EPSILON) - rx = (m_ln2 - log(ay)) / 2; - else - rx = log1p(4 * ax / sum_squares(ax - 1, ay)) / 4; - - if (ax == 1) - ry = atan2(2.0, -ay) / 2; - else if (ay < DBL_EPSILON) - ry = atan2(2 * ay, (1 - ax) * (1 + ax)) / 2; - else - ry = atan2(2 * ay, (1 - ax) * (1 + ax) - ay * ay) / 2; - - return (complex(copysign(rx, x), copysign(ry, y))); -} - -/* - * catan(z) = reverse(catanh(reverse(z))) - * where reverse(x + I*y) = y + I*x = I*conj(z). 
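// Minimal sketch of the formulas catanh() above is built around, with none of
// the special-case, overflow or underflow handling (sum_squares, the ax == 1
// branches, real_part_reciprocal). Assumes z != 1, x >= 0, y >= 0 and moderate
// magnitudes; naive_catanh is a hypothetical name used only here.
#include <cmath>
#include <complex>

inline std::complex<double> naive_catanh(double x, double y)
{
  const double zm1_sq = (x - 1) * (x - 1) + y * y;                   // |z - 1|^2
  const double rx = std::log1p(4 * x / zm1_sq) / 4;                  // Re(catanh)
  const double ry = std::atan2(2 * y, (1 - x) * (1 + x) - y * y) / 2; // Im(catanh)
  return std::complex<double>(rx, ry);
}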
- */ -__host__ __device__ inline -complexcatan(complex z) -{ - complex w = catanh(complex(z.imag(), z.real())); - return (complex(w.imag(), w.real())); -} - -#endif - -} // namespace complex - -} // namespace detail - - -template -__host__ __device__ -inline complex acos(const complex& z){ - const complex ret = thrust::asin(z); - const ValueType pi = ValueType(3.14159265358979323846); - return complex(pi/2 - ret.real(),-ret.imag()); -} - - -template -__host__ __device__ -inline complex asin(const complex& z){ - const complex i(0,1); - return -i*asinh(i*z); -} - -template -__host__ __device__ -inline complex atan(const complex& z){ - const complex i(0,1); - return -i*thrust::atanh(i*z); -} - - -template -__host__ __device__ -inline complex acosh(const complex& z){ - thrust::complex ret((z.real() - z.imag()) * (z.real() + z.imag()) - ValueType(1.0), - ValueType(2.0) * z.real() * z.imag()); - ret = thrust::sqrt(ret); - if (z.real() < ValueType(0.0)){ - ret = -ret; - } - ret += z; - ret = thrust::log(ret); - if (ret.real() < ValueType(0.0)){ - ret = -ret; - } - return ret; -} - -template -__host__ __device__ -inline complex asinh(const complex& z){ - return thrust::log(thrust::sqrt(z*z+ValueType(1))+z); -} - -template -__host__ __device__ -inline complex atanh(const complex& z){ - ValueType imag2 = z.imag() * z.imag(); - ValueType n = ValueType(1.0) + z.real(); - n = imag2 + n * n; - - ValueType d = ValueType(1.0) - z.real(); - d = imag2 + d * d; - complex ret(ValueType(0.25) * (std::log(n) - std::log(d)),0); - - d = ValueType(1.0) - z.real() * z.real() - imag2; - - ret.imag(ValueType(0.5) * std::atan2(ValueType(2.0) * z.imag(), d)); - return ret; -} - -template <> -__host__ __device__ -inline complex acos(const complex& z){ - return detail::complex::cacos(z); -} - -template <> -__host__ __device__ -inline complex asin(const complex& z){ - return detail::complex::casin(z); -} - -#if THRUST_CPP_DIALECT >= 2011 || THRUST_HOST_COMPILER != THRUST_HOST_COMPILER_MSVC -template <> -__host__ __device__ -inline complex atan(const complex& z){ - return detail::complex::catan(z); -} -#endif - -template <> -__host__ __device__ -inline complex acosh(const complex& z){ - return detail::complex::cacosh(z); -} - - -template <> -__host__ __device__ -inline complex asinh(const complex& z){ - return detail::complex::casinh(z); -} - -#if THRUST_CPP_DIALECT >= 2011 || THRUST_HOST_COMPILER != THRUST_HOST_COMPILER_MSVC -template <> -__host__ __device__ -inline complex atanh(const complex& z){ - return detail::complex::catanh(z); -} -#endif - -} // namespace thrust diff --git a/spaces/CVPR/LIVE/thrust/thrust/optional.h b/spaces/CVPR/LIVE/thrust/thrust/optional.h deleted file mode 100644 index 133deab56600d22f831be271888d643786f51011..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/optional.h +++ /dev/null @@ -1,2886 +0,0 @@ -/// -// optional - An implementation of std::optional with extensions -// Written in 2017 by Simon Brand (@TartanLlama) -// -// To the extent possible under law, the author(s) have dedicated all -// copyright and related and neighboring rights to this software to the -// public domain worldwide. This software is distributed without any warranty. -// -// You should have received a copy of the CC0 Public Domain Dedication -// along with this software. If not, see -// . 
-/// - -#pragma once - -#include -#include - -#if THRUST_CPP_DIALECT >= 2011 - -#include -#include - -#define THRUST_OPTIONAL_VERSION_MAJOR 0 -#define THRUST_OPTIONAL_VERSION_MINOR 2 - -#include -#include -#include -#include -#include - -#if (THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_MSVC && _MSC_VER == 1900) -#define THRUST_OPTIONAL_MSVC2015 -#endif - -#if (defined(__GNUC__) && __GNUC__ == 4 && __GNUC_MINOR__ <= 9 && \ - !defined(__clang__)) -#define THRUST_OPTIONAL_GCC49 -#endif - -#if (defined(__GNUC__) && __GNUC__ == 5 && __GNUC_MINOR__ <= 4 && \ - !defined(__clang__)) -#define THRUST_OPTIONAL_GCC54 -#endif - -#if (defined(__GNUC__) && __GNUC__ == 5 && __GNUC_MINOR__ <= 5 && \ - !defined(__clang__)) -#define THRUST_OPTIONAL_GCC55 -#endif - -#if (defined(__GNUC__) && __GNUC__ == 4 && __GNUC_MINOR__ <= 9 && \ - !defined(__clang__)) -// GCC < 5 doesn't support overloading on const&& for member functions -#define THRUST_OPTIONAL_NO_CONSTRR - -// GCC < 5 doesn't support some standard C++11 type traits -#define THRUST_OPTIONAL_IS_TRIVIALLY_COPY_CONSTRUCTIBLE(T) \ - std::has_trivial_copy_constructor::value -#define THRUST_OPTIONAL_IS_TRIVIALLY_COPY_ASSIGNABLE(T) std::has_trivial_copy_assign::value - -// GCC < 5 doesn't provide a way to emulate std::is_trivially_move_*, -// so don't enable any optimizations that rely on them: -#define THRUST_OPTIONAL_IS_TRIVIALLY_MOVE_CONSTRUCTIBLE(T) false -#define THRUST_OPTIONAL_IS_TRIVIALLY_MOVE_ASSIGNABLE(T) false - -// This one will be different for GCC 5.7 if it's ever supported -#define THRUST_OPTIONAL_IS_TRIVIALLY_DESTRUCTIBLE(T) std::is_trivially_destructible::value - -// GCC 5 < v < 8 has a bug in is_trivially_copy_constructible which breaks std::vector -// for non-copyable types -#elif (defined(__GNUC__) && __GNUC__ < 8 && \ - !defined(__clang__)) -#ifndef THRUST_GCC_LESS_8_TRIVIALLY_COPY_CONSTRUCTIBLE_MUTEX -#define THRUST_GCC_LESS_8_TRIVIALLY_COPY_CONSTRUCTIBLE_MUTEX -namespace thrust -{ - namespace detail { - template - struct is_trivially_copy_constructible : std::is_trivially_copy_constructible{}; -#ifdef _GLIBCXX_VECTOR - template - struct is_trivially_copy_constructible> - : std::is_trivially_copy_constructible{}; -#endif - } -} // end namespace thrust -#endif - -#define THRUST_OPTIONAL_IS_TRIVIALLY_COPY_CONSTRUCTIBLE(T) \ - thrust::detail::is_trivially_copy_constructible::value -#define THRUST_OPTIONAL_IS_TRIVIALLY_COPY_ASSIGNABLE(T) \ - std::is_trivially_copy_assignable::value -#define THRUST_OPTIONAL_IS_TRIVIALLY_MOVE_CONSTRUCTIBLE(T) \ - std::is_trivially_move_constructible::value -#define THRUST_OPTIONAL_IS_TRIVIALLY_MOVE_ASSIGNABLE(T) \ - std::is_trivially_move_assignable::value -#define THRUST_OPTIONAL_IS_TRIVIALLY_DESTRUCTIBLE(T) std::is_trivially_destructible::value -#else - -// To support clang + old libstdc++ without type traits, check for equivalent -// clang built-ins and use them if present. See note above -// is_trivially_copyable_impl in -// thrust/type_traits/is_trivially_relocatable.h for more details. 
- -#ifndef __has_feature -#define __has_feature(x) 0 -#endif - -#if defined(__GLIBCXX__) && __has_feature(is_trivially_constructible) -#define THRUST_OPTIONAL_IS_TRIVIALLY_COPY_CONSTRUCTIBLE(T) \ - __is_trivially_constructible(T, T const&) -#else -#define THRUST_OPTIONAL_IS_TRIVIALLY_COPY_CONSTRUCTIBLE(T) \ - std::is_trivially_copy_constructible::value -#endif - -#if defined(__GLIBCXX__) && __has_feature(is_trivially_assignable) -#define THRUST_OPTIONAL_IS_TRIVIALLY_COPY_ASSIGNABLE(T) \ - __is_trivially_assignable(T, T const&) -#else -#define THRUST_OPTIONAL_IS_TRIVIALLY_COPY_ASSIGNABLE(T) \ - std::is_trivially_copy_assignable::value -#endif - -#if defined(__GLIBCXX__) && __has_feature(is_trivially_constructible) -#define THRUST_OPTIONAL_IS_TRIVIALLY_MOVE_CONSTRUCTIBLE(T) \ - __is_trivially_constructible(T, T&&) -#else -#define THRUST_OPTIONAL_IS_TRIVIALLY_MOVE_CONSTRUCTIBLE(T) \ - std::is_trivially_move_constructible::value -#endif - -#if defined(__GLIBCXX__) && __has_feature(is_trivially_assignable) -#define THRUST_OPTIONAL_IS_TRIVIALLY_MOVE_ASSIGNABLE(T) \ - __is_trivially_assignable(T, T&&) -#else -#define THRUST_OPTIONAL_IS_TRIVIALLY_MOVE_ASSIGNABLE(T) \ - std::is_trivially_move_assignable::value -#endif - -#if defined(__GLIBCXX__) && __has_feature(is_trivially_destructible) -#define THRUST_OPTIONAL_IS_TRIVIALLY_DESTRUCTIBLE(T) \ - __is_trivially_destructible(T) -#else -#define THRUST_OPTIONAL_IS_TRIVIALLY_DESTRUCTIBLE(T) \ - std::is_trivially_destructible::value -#endif - -#endif - -#if THRUST_CPP_DIALECT > 2011 -#define THRUST_OPTIONAL_CPP14 -#endif - -// constexpr implies const in C++11, not C++14 -#if (THRUST_CPP_DIALECT == 2011 || defined(THRUST_OPTIONAL_MSVC2015) || \ - defined(THRUST_OPTIONAL_GCC49)) -/// \exclude -#define THRUST_OPTIONAL_CPP11_CONSTEXPR -#else -/// \exclude -#define THRUST_OPTIONAL_CPP11_CONSTEXPR constexpr -#endif - -namespace thrust -{ -#ifndef THRUST_MONOSTATE_INPLACE_MUTEX -#define THRUST_MONOSTATE_INPLACE_MUTEX -/// \brief Used to represent an optional with no data; essentially a bool -class monostate {}; - -/// \brief A tag type to tell optional to construct its value in-place -struct in_place_t { - explicit in_place_t() = default; -}; -/// \brief A tag to tell optional to construct its value in-place -static constexpr in_place_t in_place{}; -#endif - -template class optional; - -/// \exclude -namespace detail { -#ifndef THRUST_TRAITS_MUTEX -#define THRUST_TRAITS_MUTEX -// C++14-style aliases for brevity -template using remove_const_t = typename std::remove_const::type; -template -using remove_reference_t = typename std::remove_reference::type; -template using decay_t = typename std::decay::type; -template -using enable_if_t = typename std::enable_if::type; -template -using conditional_t = typename std::conditional::type; - -// std::conjunction from C++17 -template struct conjunction : std::true_type {}; -template struct conjunction : B {}; -template -struct conjunction - : std::conditional, B>::type {}; - -#if defined(_LIBCPP_VERSION) && THRUST_CPP_DIALECT == 2011 -#define THRUST_OPTIONAL_LIBCXX_MEM_FN_WORKAROUND -#endif - -// In C++11 mode, there's an issue in libc++'s std::mem_fn -// which results in a hard-error when using it in a noexcept expression -// in some cases. This is a check to workaround the common failing case. 
-#ifdef THRUST_OPTIONAL_LIBCXX_MEM_FN_WORKAROUND -template struct is_pointer_to_non_const_member_func : std::false_type{}; -template -struct is_pointer_to_non_const_member_func : std::true_type{}; -template -struct is_pointer_to_non_const_member_func : std::true_type{}; -template -struct is_pointer_to_non_const_member_func : std::true_type{}; -template -struct is_pointer_to_non_const_member_func : std::true_type{}; -template -struct is_pointer_to_non_const_member_func : std::true_type{}; -template -struct is_pointer_to_non_const_member_func : std::true_type{}; - -template struct is_const_or_const_ref : std::false_type{}; -template struct is_const_or_const_ref : std::true_type{}; -template struct is_const_or_const_ref : std::true_type{}; -#endif - -// std::invoke from C++17 -// https://stackoverflow.com/questions/38288042/c11-14-invoke-workaround -__thrust_exec_check_disable__ -template ::value - && is_const_or_const_ref::value)>, -#endif - typename = enable_if_t>::value>, - int = 0> -__host__ __device__ -constexpr auto invoke(Fn &&f, Args &&... args) noexcept( - noexcept(std::mem_fn(f)(std::forward(args)...))) - -> decltype(std::mem_fn(f)(std::forward(args)...)) { - return std::mem_fn(f)(std::forward(args)...); -} - -__thrust_exec_check_disable__ -template >::value>> -__host__ __device__ -constexpr auto invoke(Fn &&f, Args &&... args) noexcept( - noexcept(std::forward(f)(std::forward(args)...))) - -> decltype(std::forward(f)(std::forward(args)...)) { - return std::forward(f)(std::forward(args)...); -} - -// std::invoke_result from C++17 -template struct invoke_result_impl; - -template -struct invoke_result_impl< - F, decltype(detail::invoke(std::declval(), std::declval()...), void()), - Us...> { - using type = decltype(detail::invoke(std::declval(), std::declval()...)); -}; - -template -using invoke_result = invoke_result_impl; - -template -using invoke_result_t = typename invoke_result::type; -#endif - -// std::void_t from C++17 -template struct voider { using type = void; }; -template using void_t = typename voider::type; - -// Trait for checking if a type is a thrust::optional -template struct is_optional_impl : std::false_type {}; -template struct is_optional_impl> : std::true_type {}; -template using is_optional = is_optional_impl>; - -// Change void to thrust::monostate -template -using fixup_void = conditional_t::value, monostate, U>; - -template > -using get_map_return = optional>>; - -// Check if invoking F for some Us returns void -template struct returns_void_impl; -template -struct returns_void_impl>, U...> - : std::is_void> {}; -template -using returns_void = returns_void_impl; - -template -using enable_if_ret_void = enable_if_t::value>; - -template -using disable_if_ret_void = enable_if_t::value>; - -template -using enable_forward_value = - detail::enable_if_t::value && - !std::is_same, in_place_t>::value && - !std::is_same, detail::decay_t>::value>; - -template -using enable_from_other = detail::enable_if_t< - std::is_constructible::value && - !std::is_constructible &>::value && - !std::is_constructible &&>::value && - !std::is_constructible &>::value && - !std::is_constructible &&>::value && - !std::is_convertible &, T>::value && - !std::is_convertible &&, T>::value && - !std::is_convertible &, T>::value && - !std::is_convertible &&, T>::value>; - -template -using enable_assign_forward = detail::enable_if_t< - !std::is_same, detail::decay_t>::value && - !detail::conjunction, - std::is_same>>::value && - std::is_constructible::value && std::is_assignable::value>; - -template 
-using enable_assign_from_other = detail::enable_if_t< - std::is_constructible::value && - std::is_assignable::value && - !std::is_constructible &>::value && - !std::is_constructible &&>::value && - !std::is_constructible &>::value && - !std::is_constructible &&>::value && - !std::is_convertible &, T>::value && - !std::is_convertible &&, T>::value && - !std::is_convertible &, T>::value && - !std::is_convertible &&, T>::value && - !std::is_assignable &>::value && - !std::is_assignable &&>::value && - !std::is_assignable &>::value && - !std::is_assignable &&>::value>; - -#if THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_MSVC -// TODO make a version which works with MSVC -template struct is_swappable : std::true_type {}; - -template struct is_nothrow_swappable : std::true_type {}; -#else -// https://stackoverflow.com/questions/26744589/what-is-a-proper-way-to-implement-is-swappable-to-test-for-the-swappable-concept -namespace swap_adl_tests { -// if swap ADL finds this then it would call std::swap otherwise (same -// signature) -struct tag {}; - -template tag swap(T &, T &); -template tag swap(T (&a)[N], T (&b)[N]); - -// helper functions to test if an unqualified swap is possible, and if it -// becomes std::swap -template std::false_type can_swap(...) noexcept(false); -template (), std::declval()))> -std::true_type can_swap(int) noexcept(noexcept(swap(std::declval(), - std::declval()))); - -template std::false_type uses_std(...); -template -std::is_same(), std::declval())), tag> -uses_std(int); - -template -struct is_std_swap_noexcept - : std::integral_constant::value && - std::is_nothrow_move_assignable::value> {}; - -template -struct is_std_swap_noexcept : is_std_swap_noexcept {}; - -template -struct is_adl_swap_noexcept - : std::integral_constant(0))> {}; -} // namespace swap_adl_tests - -template -struct is_swappable - : std::integral_constant< - bool, - decltype(detail::swap_adl_tests::can_swap(0))::value && - (!decltype(detail::swap_adl_tests::uses_std(0))::value || - (std::is_move_assignable::value && - std::is_move_constructible::value))> {}; - -template -struct is_swappable - : std::integral_constant< - bool, - decltype(detail::swap_adl_tests::can_swap(0))::value && - (!decltype( - detail::swap_adl_tests::uses_std(0))::value || - is_swappable::value)> {}; - -template -struct is_nothrow_swappable - : std::integral_constant< - bool, - is_swappable::value && - ((decltype(detail::swap_adl_tests::uses_std(0))::value - &&detail::swap_adl_tests::is_std_swap_noexcept::value) || - (!decltype(detail::swap_adl_tests::uses_std(0))::value && - detail::swap_adl_tests::is_adl_swap_noexcept::value))> { -}; -#endif - -// The storage base manages the actual storage, and correctly propagates -// trivial destruction from T. This case is for when T is not trivially -// destructible. -template ::value> -struct optional_storage_base { - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional_storage_base() noexcept - : m_dummy(), m_has_value(false) {} - - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional_storage_base(in_place_t, U &&... u) - : m_value(std::forward(u)...), m_has_value(true) {} - - __thrust_exec_check_disable__ - __host__ __device__ - ~optional_storage_base() { - if (m_has_value) { - m_value.~T(); - m_has_value = false; - } - } - - struct dummy {}; - union { - dummy m_dummy; - T m_value; - }; - - bool m_has_value; -}; - -// This case is for when T is trivially destructible. 
-template struct optional_storage_base { - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional_storage_base() noexcept - : m_dummy(), m_has_value(false) {} - - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional_storage_base(in_place_t, U &&... u) - : m_value(std::forward(u)...), m_has_value(true) {} - - // No destructor, so this class is trivially destructible - - struct dummy {}; - union { - dummy m_dummy; - T m_value; - }; - - bool m_has_value = false; -}; - -// This base class provides some handy member functions which can be used in -// further derived classes -template struct optional_operations_base : optional_storage_base { - using optional_storage_base::optional_storage_base; - - __thrust_exec_check_disable__ - __host__ __device__ - void hard_reset() noexcept { - get().~T(); - this->m_has_value = false; - } - - __thrust_exec_check_disable__ - template - __host__ __device__ - void construct(Args &&... args) noexcept { - new (addressof(this->m_value)) T(std::forward(args)...); - this->m_has_value = true; - } - - __thrust_exec_check_disable__ - template - __host__ __device__ - void assign(Opt &&rhs) { - if (this->has_value()) { - if (rhs.has_value()) { - this->m_value = std::forward(rhs).get(); - } else { - this->m_value.~T(); - this->m_has_value = false; - } - } - - if (rhs.has_value()) { - construct(std::forward(rhs).get()); - } - } - - __thrust_exec_check_disable__ - __host__ __device__ - bool has_value() const { return this->m_has_value; } - - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T &get() & { return this->m_value; } - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR const T &get() const & { return this->m_value; } - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T &&get() && { return std::move(this->m_value); } -#ifndef THRUST_OPTIONAL_NO_CONSTRR - __thrust_exec_check_disable__ - __host__ __device__ - constexpr const T &&get() const && { return std::move(this->m_value); } -#endif -}; - -// This class manages conditionally having a trivial copy constructor -// This specialization is for when T is trivially copy constructible -template -struct optional_copy_base : optional_operations_base { - using optional_operations_base::optional_operations_base; -}; - -// This specialization is for when T is not trivially copy constructible -template -struct optional_copy_base : optional_operations_base { - using optional_operations_base::optional_operations_base; - - __thrust_exec_check_disable__ - optional_copy_base() = default; - __thrust_exec_check_disable__ - __host__ __device__ - optional_copy_base(const optional_copy_base &rhs) { - if (rhs.has_value()) { - this->construct(rhs.get()); - } else { - this->m_has_value = false; - } - } - - __thrust_exec_check_disable__ - optional_copy_base(optional_copy_base &&rhs) = default; - __thrust_exec_check_disable__ - optional_copy_base &operator=(const optional_copy_base &rhs) = default; - __thrust_exec_check_disable__ - optional_copy_base &operator=(optional_copy_base &&rhs) = default; -}; - -template -struct optional_move_base : optional_copy_base { - using optional_copy_base::optional_copy_base; -}; -template struct optional_move_base : optional_copy_base { - using optional_copy_base::optional_copy_base; - - __thrust_exec_check_disable__ - optional_move_base() = default; - __thrust_exec_check_disable__ - optional_move_base(const 
optional_move_base &rhs) = default; - - __thrust_exec_check_disable__ - __host__ __device__ - optional_move_base(optional_move_base &&rhs) noexcept( - std::is_nothrow_move_constructible::value) { - if (rhs.has_value()) { - this->construct(std::move(rhs.get())); - } else { - this->m_has_value = false; - } - } - __thrust_exec_check_disable__ - optional_move_base &operator=(const optional_move_base &rhs) = default; - __thrust_exec_check_disable__ - optional_move_base &operator=(optional_move_base &&rhs) = default; -}; - -// This class manages conditionally having a trivial copy assignment operator -template -struct optional_copy_assign_base : optional_move_base { - using optional_move_base::optional_move_base; -}; - -template -struct optional_copy_assign_base : optional_move_base { - using optional_move_base::optional_move_base; - - __thrust_exec_check_disable__ - optional_copy_assign_base() = default; - __thrust_exec_check_disable__ - optional_copy_assign_base(const optional_copy_assign_base &rhs) = default; - - __thrust_exec_check_disable__ - optional_copy_assign_base(optional_copy_assign_base &&rhs) = default; - __thrust_exec_check_disable__ - __host__ __device__ - optional_copy_assign_base &operator=(const optional_copy_assign_base &rhs) { - this->assign(rhs); - return *this; - } - __thrust_exec_check_disable__ - optional_copy_assign_base & - operator=(optional_copy_assign_base &&rhs) = default; -}; - -template -struct optional_move_assign_base : optional_copy_assign_base { - using optional_copy_assign_base::optional_copy_assign_base; -}; - -template -struct optional_move_assign_base : optional_copy_assign_base { - using optional_copy_assign_base::optional_copy_assign_base; - - __thrust_exec_check_disable__ - optional_move_assign_base() = default; - __thrust_exec_check_disable__ - optional_move_assign_base(const optional_move_assign_base &rhs) = default; - - __thrust_exec_check_disable__ - optional_move_assign_base(optional_move_assign_base &&rhs) = default; - - __thrust_exec_check_disable__ - optional_move_assign_base & - operator=(const optional_move_assign_base &rhs) = default; - - __thrust_exec_check_disable__ - __host__ __device__ - optional_move_assign_base & - operator=(optional_move_assign_base &&rhs) noexcept( - std::is_nothrow_move_constructible::value - &&std::is_nothrow_move_assignable::value) { - this->assign(std::move(rhs)); - return *this; - } -}; - -// optional_delete_ctor_base will conditionally delete copy and move -// constructors depending on whether T is copy/move constructible -template ::value, - bool EnableMove = std::is_move_constructible::value> -struct optional_delete_ctor_base { - __thrust_exec_check_disable__ - optional_delete_ctor_base() = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base(const optional_delete_ctor_base &) = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base(optional_delete_ctor_base &&) noexcept = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base & - operator=(const optional_delete_ctor_base &) = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base & - operator=(optional_delete_ctor_base &&) noexcept = default; -}; - -template struct optional_delete_ctor_base { - __thrust_exec_check_disable__ - optional_delete_ctor_base() = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base(const optional_delete_ctor_base &) = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base(optional_delete_ctor_base &&) noexcept = delete; - 
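// Minimal sketch of the pattern behind the optional_delete_ctor_base /
// optional_delete_assign_base machinery in this section: inherit from a base
// whose copy/move constructors are defaulted or deleted per specialization,
// so the derived class's implicitly declared special members are deleted
// exactly when T's are unavailable. In optional<T> the value lives in a union
// inside the storage base, so T's properties do not propagate through an
// ordinary member; the base specializations carry them instead.
// delete_ctor_for and wrapper are hypothetical names used only here.
#include <memory>
#include <type_traits>

template <bool EnableCopy, bool EnableMove>
struct delete_ctor_for {};                            // both operations allowed

template <>
struct delete_ctor_for<true, false> {                 // copyable, not movable
  delete_ctor_for() = default;
  delete_ctor_for(const delete_ctor_for &) = default;
  delete_ctor_for(delete_ctor_for &&) = delete;
  delete_ctor_for &operator=(const delete_ctor_for &) = default;
  delete_ctor_for &operator=(delete_ctor_for &&) = default;
};

template <>
struct delete_ctor_for<false, true> {                 // movable, not copyable
  delete_ctor_for() = default;
  delete_ctor_for(const delete_ctor_for &) = delete;
  delete_ctor_for(delete_ctor_for &&) = default;
  delete_ctor_for &operator=(const delete_ctor_for &) = default;
  delete_ctor_for &operator=(delete_ctor_for &&) = default;
};

template <>
struct delete_ctor_for<false, false> {                // neither allowed
  delete_ctor_for() = default;
  delete_ctor_for(const delete_ctor_for &) = delete;
  delete_ctor_for(delete_ctor_for &&) = delete;
  delete_ctor_for &operator=(const delete_ctor_for &) = default;
  delete_ctor_for &operator=(delete_ctor_for &&) = default;
};

template <class T>
struct wrapper : delete_ctor_for<std::is_copy_constructible<T>::value,
                                 std::is_move_constructible<T>::value> {};

static_assert(!std::is_copy_constructible<wrapper<std::unique_ptr<int>>>::value,
              "copy availability follows T");
static_assert(std::is_move_constructible<wrapper<std::unique_ptr<int>>>::value,
              "move availability follows T");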
__thrust_exec_check_disable__ - optional_delete_ctor_base & - operator=(const optional_delete_ctor_base &) = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base & - operator=(optional_delete_ctor_base &&) noexcept = default; -}; - -template struct optional_delete_ctor_base { - __thrust_exec_check_disable__ - optional_delete_ctor_base() = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base(const optional_delete_ctor_base &) = delete; - __thrust_exec_check_disable__ - optional_delete_ctor_base(optional_delete_ctor_base &&) noexcept = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base & - operator=(const optional_delete_ctor_base &) = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base & - operator=(optional_delete_ctor_base &&) noexcept = default; -}; - -template struct optional_delete_ctor_base { - __thrust_exec_check_disable__ - optional_delete_ctor_base() = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base(const optional_delete_ctor_base &) = delete; - __thrust_exec_check_disable__ - optional_delete_ctor_base(optional_delete_ctor_base &&) noexcept = delete; - __thrust_exec_check_disable__ - optional_delete_ctor_base & - operator=(const optional_delete_ctor_base &) = default; - __thrust_exec_check_disable__ - optional_delete_ctor_base & - operator=(optional_delete_ctor_base &&) noexcept = default; -}; - -// optional_delete_assign_base will conditionally delete copy and move -// constructors depending on whether T is copy/move constructible + assignable -template ::value && - std::is_copy_assignable::value), - bool EnableMove = (std::is_move_constructible::value && - std::is_move_assignable::value)> -struct optional_delete_assign_base { - __thrust_exec_check_disable__ - optional_delete_assign_base() = default; - __thrust_exec_check_disable__ - optional_delete_assign_base(const optional_delete_assign_base &) = default; - __thrust_exec_check_disable__ - optional_delete_assign_base(optional_delete_assign_base &&) noexcept = - default; - __thrust_exec_check_disable__ - optional_delete_assign_base & - operator=(const optional_delete_assign_base &) = default; - __thrust_exec_check_disable__ - optional_delete_assign_base & - operator=(optional_delete_assign_base &&) noexcept = default; -}; - -template struct optional_delete_assign_base { - __thrust_exec_check_disable__ - optional_delete_assign_base() = default; - __thrust_exec_check_disable__ - optional_delete_assign_base(const optional_delete_assign_base &) = default; - __thrust_exec_check_disable__ - optional_delete_assign_base(optional_delete_assign_base &&) noexcept = - default; - __thrust_exec_check_disable__ - optional_delete_assign_base & - operator=(const optional_delete_assign_base &) = default; - __thrust_exec_check_disable__ - optional_delete_assign_base & - operator=(optional_delete_assign_base &&) noexcept = delete; -}; - -template struct optional_delete_assign_base { - __thrust_exec_check_disable__ - optional_delete_assign_base() = default; - __thrust_exec_check_disable__ - optional_delete_assign_base(const optional_delete_assign_base &) = default; - __thrust_exec_check_disable__ - optional_delete_assign_base(optional_delete_assign_base &&) noexcept = - default; - __thrust_exec_check_disable__ - optional_delete_assign_base & - operator=(const optional_delete_assign_base &) = delete; - __thrust_exec_check_disable__ - optional_delete_assign_base & - operator=(optional_delete_assign_base &&) noexcept = default; -}; - -template struct 
optional_delete_assign_base { - __thrust_exec_check_disable__ - optional_delete_assign_base() = default; - __thrust_exec_check_disable__ - optional_delete_assign_base(const optional_delete_assign_base &) = default; - __thrust_exec_check_disable__ - optional_delete_assign_base(optional_delete_assign_base &&) noexcept = - default; - __thrust_exec_check_disable__ - optional_delete_assign_base & - operator=(const optional_delete_assign_base &) = delete; - __thrust_exec_check_disable__ - optional_delete_assign_base & - operator=(optional_delete_assign_base &&) noexcept = delete; -}; - -} // namespace detail - -/// \brief A tag type to represent an empty optional -struct nullopt_t { - struct do_not_use {}; - __host__ __device__ - constexpr explicit nullopt_t(do_not_use, do_not_use) noexcept {} -}; -/// \brief Represents an empty optional -/// \synopsis static constexpr nullopt_t nullopt; -/// -/// *Examples*: -/// ``` -/// thrust::optional a = thrust::nullopt; -/// void foo (thrust::optional); -/// foo(thrust::nullopt); //pass an empty optional -/// ``` -static constexpr nullopt_t nullopt{nullopt_t::do_not_use{}, - nullopt_t::do_not_use{}}; - -class bad_optional_access : public std::exception { -public: - bad_optional_access() = default; - __host__ - const char *what() const noexcept { return "Optional has no value"; } -}; - -/// An optional object is an object that contains the storage for another -/// object and manages the lifetime of this contained object, if any. The -/// contained object may be initialized after the optional object has been -/// initialized, and may be destroyed before the optional object has been -/// destroyed. The initialization state of the contained object is tracked by -/// the optional object. -template -class optional : private detail::optional_move_assign_base, - private detail::optional_delete_ctor_base, - private detail::optional_delete_assign_base { - using base = detail::optional_move_assign_base; - - static_assert(!std::is_same::value, - "instantiation of optional with in_place_t is ill-formed"); - static_assert(!std::is_same, nullopt_t>::value, - "instantiation of optional with nullopt_t is ill-formed"); - -public: -// The different versions for C++14 and 11 are needed because deduced return -// types are not SFINAE-safe. This provides better support for things like -// generic lambdas. C.f. -// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0826r0.html -#if defined(THRUST_OPTIONAL_CPP14) && !defined(THRUST_OPTIONAL_GCC49) && \ - !defined(THRUST_OPTIONAL_GCC54) && !defined(THRUST_OPTIONAL_GCC55) - /// \group and_then - /// Carries out some operation which returns an optional on the stored - /// object if there is one. \requires `std::invoke(std::forward(f), - /// value())` returns a `std::optional` for some `U`. \returns Let `U` be - /// the result of `std::invoke(std::forward(f), value())`. Returns a - /// `std::optional`. The return value is empty if `*this` is empty, - /// otherwise the return value of `std::invoke(std::forward(f), value())` - /// is returned. - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) &; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR auto and_then(F &&f) & { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? 
detail::invoke(std::forward(f), **this) - : result(nullopt); - } - - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR auto and_then(F &&f) && { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : result(nullopt); - } - - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) const &; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr auto and_then(F &&f) const & { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), **this) - : result(nullopt); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) const &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr auto and_then(F &&f) const && { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : result(nullopt); - } -#endif -#else - /// \group and_then - /// Carries out some operation which returns an optional on the stored - /// object if there is one. \requires `std::invoke(std::forward(f), - /// value())` returns a `std::optional` for some `U`. - /// \returns Let `U` be the result of `std::invoke(std::forward(f), - /// value())`. Returns a `std::optional`. The return value is empty if - /// `*this` is empty, otherwise the return value of - /// `std::invoke(std::forward(f), value())` is returned. - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) &; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR detail::invoke_result_t and_then(F &&f) & { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), **this) - : result(nullopt); - } - - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR detail::invoke_result_t and_then(F &&f) && { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : result(nullopt); - } - - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) const &; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr detail::invoke_result_t and_then(F &&f) const & { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? 
detail::invoke(std::forward(f), **this) - : result(nullopt); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) const &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr detail::invoke_result_t and_then(F &&f) const && { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : result(nullopt); - } -#endif -#endif - -#if defined(THRUST_OPTIONAL_CPP14) && !defined(THRUST_OPTIONAL_GCC49) && \ - !defined(THRUST_OPTIONAL_GCC54) && !defined(THRUST_OPTIONAL_GCC55) - /// \brief Carries out some operation on the stored object if there is one. - /// \returns Let `U` be the result of `std::invoke(std::forward(f), - /// value())`. Returns a `std::optional`. The return value is empty if - /// `*this` is empty, otherwise an `optional` is constructed from the - /// return value of `std::invoke(std::forward(f), value())` and is - /// returned. - /// - /// \group map - /// \synopsis template constexpr auto map(F &&f) &; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR auto map(F &&f) & { - return optional_map_impl(*this, std::forward(f)); - } - - /// \group map - /// \synopsis template constexpr auto map(F &&f) &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR auto map(F &&f) && { - return optional_map_impl(std::move(*this), std::forward(f)); - } - - /// \group map - /// \synopsis template constexpr auto map(F &&f) const&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr auto map(F &&f) const & { - return optional_map_impl(*this, std::forward(f)); - } - - /// \group map - /// \synopsis template constexpr auto map(F &&f) const&&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr auto map(F &&f) const && { - return optional_map_impl(std::move(*this), std::forward(f)); - } -#else - /// \brief Carries out some operation on the stored object if there is one. - /// \returns Let `U` be the result of `std::invoke(std::forward(f), - /// value())`. Returns a `std::optional`. The return value is empty if - /// `*this` is empty, otherwise an `optional` is constructed from the - /// return value of `std::invoke(std::forward(f), value())` and is - /// returned. 
- /// - /// \group map - /// \synopsis template auto map(F &&f) &; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR decltype(optional_map_impl(std::declval(), - std::declval())) - map(F &&f) & { - return optional_map_impl(*this, std::forward(f)); - } - - /// \group map - /// \synopsis template auto map(F &&f) &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR decltype(optional_map_impl(std::declval(), - std::declval())) - map(F &&f) && { - return optional_map_impl(std::move(*this), std::forward(f)); - } - - /// \group map - /// \synopsis template auto map(F &&f) const&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr decltype(optional_map_impl(std::declval(), - std::declval())) - map(F &&f) const & { - return optional_map_impl(*this, std::forward(f)); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group map - /// \synopsis template auto map(F &&f) const&&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr decltype(optional_map_impl(std::declval(), - std::declval())) - map(F &&f) const && { - return optional_map_impl(std::move(*this), std::forward(f)); - } -#endif -#endif - - /// \brief Calls `f` if the optional is empty - /// \requires `std::invoke_result_t` must be void or convertible to - /// `optional`. - /// \effects If `*this` has a value, returns `*this`. - /// Otherwise, if `f` returns `void`, calls `std::forward(f)` and returns - /// `std::nullopt`. Otherwise, returns `std::forward(f)()`. - /// - /// \group or_else - /// \synopsis template optional or_else (F &&f) &; - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional THRUST_OPTIONAL_CPP11_CONSTEXPR or_else(F &&f) & { - if (has_value()) - return *this; - - std::forward(f)(); - return nullopt; - } - - /// \exclude - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional THRUST_OPTIONAL_CPP11_CONSTEXPR or_else(F &&f) & { - return has_value() ? *this : std::forward(f)(); - } - - /// \group or_else - /// \synopsis template optional or_else (F &&f) &&; - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional or_else(F &&f) && { - if (has_value()) - return std::move(*this); - - std::forward(f)(); - return nullopt; - } - - /// \exclude - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional THRUST_OPTIONAL_CPP11_CONSTEXPR or_else(F &&f) && { - return has_value() ? std::move(*this) : std::forward(f)(); - } - - /// \group or_else - /// \synopsis template optional or_else (F &&f) const &; - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional or_else(F &&f) const & { - if (has_value()) - return *this; - - std::forward(f)(); - return nullopt; - } - - /// \exclude - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional THRUST_OPTIONAL_CPP11_CONSTEXPR or_else(F &&f) const & { - return has_value() ? *this : std::forward(f)(); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \exclude - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional or_else(F &&f) const && { - if (has_value()) - return std::move(*this); - - std::forward(f)(); - return nullopt; - } - - /// \exclude - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional or_else(F &&f) const && { - return has_value() ? 
std::move(*this) : std::forward(f)(); - } -#endif - - /// \brief Maps the stored value with `f` if there is one, otherwise returns - /// `u`. - /// - /// \details If there is a value stored, then `f` is called with `**this` - /// and the value is returned. Otherwise `u` is returned. - /// - /// \group map_or - __thrust_exec_check_disable__ - template - __host__ __device__ - U map_or(F &&f, U &&u) & { - return has_value() ? detail::invoke(std::forward(f), **this) - : std::forward(u); - } - - /// \group map_or - __thrust_exec_check_disable__ - template - __host__ __device__ - U map_or(F &&f, U &&u) && { - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : std::forward(u); - } - - /// \group map_or - __thrust_exec_check_disable__ - template - __host__ __device__ - U map_or(F &&f, U &&u) const & { - return has_value() ? detail::invoke(std::forward(f), **this) - : std::forward(u); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group map_or - __thrust_exec_check_disable__ - template - __host__ __device__ - U map_or(F &&f, U &&u) const && { - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : std::forward(u); - } -#endif - - /// \brief Maps the stored value with `f` if there is one, otherwise calls - /// `u` and returns the result. - /// - /// \details If there is a value stored, then `f` is - /// called with `**this` and the value is returned. Otherwise - /// `std::forward(u)()` is returned. - /// - /// \group map_or_else - /// \synopsis template \nauto map_or_else(F &&f, U &&u) &; - __thrust_exec_check_disable__ - template - __host__ __device__ - detail::invoke_result_t map_or_else(F &&f, U &&u) & { - return has_value() ? detail::invoke(std::forward(f), **this) - : std::forward(u)(); - } - - /// \group map_or_else - /// \synopsis template \nauto map_or_else(F &&f, U &&u) - /// &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - detail::invoke_result_t map_or_else(F &&f, U &&u) && { - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : std::forward(u)(); - } - - /// \group map_or_else - /// \synopsis template \nauto map_or_else(F &&f, U &&u) - /// const &; - __thrust_exec_check_disable__ - template - __host__ __device__ - detail::invoke_result_t map_or_else(F &&f, U &&u) const & { - return has_value() ? detail::invoke(std::forward(f), **this) - : std::forward(u)(); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group map_or_else - /// \synopsis template \nauto map_or_else(F &&f, U &&u) - /// const &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - detail::invoke_result_t map_or_else(F &&f, U &&u) const && { - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : std::forward(u)(); - } -#endif - - /// \returns `u` if `*this` has a value, otherwise an empty optional. - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr optional::type> conjunction(U &&u) const { - using result = optional>; - return has_value() ? result{u} : result{nullopt}; - } - - /// \returns `rhs` if `*this` is empty, otherwise the current value. - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional disjunction(const optional &rhs) & { - return has_value() ? *this : rhs; - } - - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - constexpr optional disjunction(const optional &rhs) const & { - return has_value() ? 
*this : rhs; - } - - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional disjunction(const optional &rhs) && { - return has_value() ? std::move(*this) : rhs; - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - constexpr optional disjunction(const optional &rhs) const && { - return has_value() ? std::move(*this) : rhs; - } -#endif - - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional disjunction(optional &&rhs) & { - return has_value() ? *this : std::move(rhs); - } - - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - constexpr optional disjunction(optional &&rhs) const & { - return has_value() ? *this : std::move(rhs); - } - - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional disjunction(optional &&rhs) && { - return has_value() ? std::move(*this) : std::move(rhs); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - constexpr optional disjunction(optional &&rhs) const && { - return has_value() ? std::move(*this) : std::move(rhs); - } -#endif - - /// Takes the value out of the optional, leaving it empty - /// \group take - __thrust_exec_check_disable__ - __host__ __device__ - optional take() & { - optional ret = *this; - reset(); - return ret; - } - - /// \group take - __thrust_exec_check_disable__ - __host__ __device__ - optional take() const & { - optional ret = *this; - reset(); - return ret; - } - - /// \group take - __thrust_exec_check_disable__ - __host__ __device__ - optional take() && { - optional ret = std::move(*this); - reset(); - return ret; - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group take - __thrust_exec_check_disable__ - __host__ __device__ - optional take() const && { - optional ret = std::move(*this); - reset(); - return ret; - } -#endif - - using value_type = T; - - /// Constructs an optional that does not contain a value. - /// \group ctor_empty - __thrust_exec_check_disable__ - constexpr optional() noexcept = default; - - /// \group ctor_empty - __thrust_exec_check_disable__ - __host__ __device__ - constexpr optional(nullopt_t) noexcept {} - - /// Copy constructor - /// - /// If `rhs` contains a value, the stored value is direct-initialized with - /// it. Otherwise, the constructed optional is empty. - __thrust_exec_check_disable__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional(const optional &rhs) = default; - - /// Move constructor - /// - /// If `rhs` contains a value, the stored value is direct-initialized with - /// it. Otherwise, the constructed optional is empty. - __thrust_exec_check_disable__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional(optional &&rhs) = default; - - /// Constructs the stored value in-place using the given arguments. - /// \group in_place - /// \synopsis template constexpr explicit optional(in_place_t, Args&&... args); - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr explicit optional( - detail::enable_if_t::value, in_place_t>, - Args &&... args) - : base(in_place, std::forward(args)...) {} - - /// \group in_place - /// \synopsis template \nconstexpr explicit optional(in_place_t, std::initializer_list&, Args&&... 
args); - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR explicit optional( - detail::enable_if_t &, - Args &&...>::value, - in_place_t>, - std::initializer_list il, Args &&... args) { - this->construct(il, std::forward(args)...); - } - - /// Constructs the stored value with `u`. - /// \synopsis template constexpr optional(U &&u); - __thrust_exec_check_disable__ - template < - class U = T, - detail::enable_if_t::value> * = nullptr, - detail::enable_forward_value * = nullptr> - __host__ __device__ - constexpr optional(U &&u) : base(in_place, std::forward(u)) {} - - /// \exclude - __thrust_exec_check_disable__ - template < - class U = T, - detail::enable_if_t::value> * = nullptr, - detail::enable_forward_value * = nullptr> - __host__ __device__ - constexpr explicit optional(U &&u) : base(in_place, std::forward(u)) {} - - /// Converting copy constructor. - /// \synopsis template optional(const optional &rhs); - __thrust_exec_check_disable__ - template < - class U, detail::enable_from_other * = nullptr, - detail::enable_if_t::value> * = nullptr> - __host__ __device__ - optional(const optional &rhs) { - this->construct(*rhs); - } - - /// \exclude - __thrust_exec_check_disable__ - template * = nullptr, - detail::enable_if_t::value> * = - nullptr> - __host__ __device__ - explicit optional(const optional &rhs) { - this->construct(*rhs); - } - - /// Converting move constructor. - /// \synopsis template optional(optional &&rhs); - __thrust_exec_check_disable__ - template < - class U, detail::enable_from_other * = nullptr, - detail::enable_if_t::value> * = nullptr> - __host__ __device__ - optional(optional &&rhs) { - this->construct(std::move(*rhs)); - } - - /// \exclude - __thrust_exec_check_disable__ - template < - class U, detail::enable_from_other * = nullptr, - detail::enable_if_t::value> * = nullptr> - __host__ __device__ - explicit optional(optional &&rhs) { - this->construct(std::move(*rhs)); - } - - /// Destroys the stored value if there is one. - __thrust_exec_check_disable__ - ~optional() = default; - - /// Assignment to empty. - /// - /// Destroys the current value if there is one. - __thrust_exec_check_disable__ - __host__ __device__ - optional &operator=(nullopt_t) noexcept { - if (has_value()) { - this->m_value.~T(); - this->m_has_value = false; - } - - return *this; - } - - /// Copy assignment. - /// - /// Copies the value from `rhs` if there is one. Otherwise resets the stored - /// value in `*this`. - __thrust_exec_check_disable__ - optional &operator=(const optional &rhs) = default; - - /// Move assignment. - /// - /// Moves the value from `rhs` if there is one. Otherwise resets the stored - /// value in `*this`. - __thrust_exec_check_disable__ - optional &operator=(optional &&rhs) = default; - - /// Assigns the stored value from `u`, destroying the old value if there was - /// one. - /// \synopsis optional &operator=(U &&u); - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional &operator=(U &&u) { - if (has_value()) { - this->m_value = std::forward(u); - } else { - this->construct(std::forward(u)); - } - - return *this; - } - - /// Converting copy assignment operator. - /// - /// Copies the value from `rhs` if there is one. Otherwise resets the stored - /// value in `*this`. 
- /// \synopsis optional &operator=(const optional & rhs); - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional &operator=(const optional &rhs) { - if (has_value()) { - if (rhs.has_value()) { - this->m_value = *rhs; - } else { - this->hard_reset(); - } - } - - if (rhs.has_value()) { - this->construct(*rhs); - } - - return *this; - } - - // TODO check exception guarantee - /// Converting move assignment operator. - /// - /// Moves the value from `rhs` if there is one. Otherwise resets the stored - /// value in `*this`. - /// \synopsis optional &operator=(optional && rhs); - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional &operator=(optional &&rhs) { - if (has_value()) { - if (rhs.has_value()) { - this->m_value = std::move(*rhs); - } else { - this->hard_reset(); - } - } - - if (rhs.has_value()) { - this->construct(std::move(*rhs)); - } - - return *this; - } - - /// Constructs the value in-place, destroying the current one if there is - /// one. - /// \group emplace - __thrust_exec_check_disable__ - template - __host__ __device__ - T &emplace(Args &&... args) { - static_assert(std::is_constructible::value, - "T must be constructible with Args"); - - *this = nullopt; - this->construct(std::forward(args)...); - return value(); - } - - /// \group emplace - /// \synopsis template \nT& emplace(std::initializer_list il, Args &&... args); - __thrust_exec_check_disable__ - template - __host__ __device__ - detail::enable_if_t< - std::is_constructible &, Args &&...>::value, - T &> - emplace(std::initializer_list il, Args &&... args) { - *this = nullopt; - this->construct(il, std::forward(args)...); - return value(); - } - - /// Swaps this optional with the other. - /// - /// If neither optionals have a value, nothing happens. - /// If both have a value, the values are swapped. - /// If one has a value, it is moved to the other and the movee is left - /// valueless. 
- __thrust_exec_check_disable__ - __host__ __device__ - void - swap(optional &rhs) noexcept(std::is_nothrow_move_constructible::value - &&detail::is_nothrow_swappable::value) { - if (has_value()) { - if (rhs.has_value()) { - using thrust::swap; - swap(**this, *rhs); - } else { - new (addressof(rhs.m_value)) T(std::move(this->m_value)); - this->m_value.T::~T(); - } - } else if (rhs.has_value()) { - new (addressof(this->m_value)) T(std::move(rhs.m_value)); - rhs.m_value.T::~T(); - } - } - - /// \returns a pointer to the stored value - /// \requires a value is stored - /// \group pointer - /// \synopsis constexpr const T *operator->() const; - __thrust_exec_check_disable__ - __host__ __device__ - constexpr const T *operator->() const { - return addressof(this->m_value); - } - - /// \group pointer - /// \synopsis constexpr T *operator->(); - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T *operator->() { - return addressof(this->m_value); - } - - /// \returns the stored value - /// \requires a value is stored - /// \group deref - /// \synopsis constexpr T &operator*(); - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T &operator*() & { return this->m_value; } - - /// \group deref - /// \synopsis constexpr const T &operator*() const; - __thrust_exec_check_disable__ - __host__ __device__ - constexpr const T &operator*() const & { return this->m_value; } - - /// \exclude - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T &&operator*() && { - return std::move(this->m_value); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \exclude - __thrust_exec_check_disable__ - __host__ __device__ - constexpr const T &&operator*() const && { return std::move(this->m_value); } -#endif - - /// \returns whether or not the optional has a value - /// \group has_value - __thrust_exec_check_disable__ - __host__ __device__ - constexpr bool has_value() const noexcept { return this->m_has_value; } - - /// \group has_value - __thrust_exec_check_disable__ - __host__ __device__ - constexpr explicit operator bool() const noexcept { - return this->m_has_value; - } - - /// \returns the contained value if there is one, otherwise throws - /// [bad_optional_access] - /// \group value - /// \synopsis constexpr T &value(); - __host__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T &value() & { - if (has_value()) - return this->m_value; - throw bad_optional_access(); - } - /// \group value - /// \synopsis constexpr const T &value() const; - __host__ - THRUST_OPTIONAL_CPP11_CONSTEXPR const T &value() const & { - if (has_value()) - return this->m_value; - throw bad_optional_access(); - } - /// \exclude - __host__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T &&value() && { - if (has_value()) - return std::move(this->m_value); - throw bad_optional_access(); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \exclude - __host__ - THRUST_OPTIONAL_CPP11_CONSTEXPR const T &&value() const && { - if (has_value()) - return std::move(this->m_value); - throw bad_optional_access(); - } -#endif - - /// \returns the stored value if there is one, otherwise returns `u` - /// \group value_or - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr T value_or(U &&u) const & { - static_assert(std::is_copy_constructible::value && - std::is_convertible::value, - "T must be copy constructible and convertible from U"); - return has_value() ? 
**this : static_cast(std::forward(u)); - } - - /// \group value_or - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T value_or(U &&u) && { - static_assert(std::is_move_constructible::value && - std::is_convertible::value, - "T must be move constructible and convertible from U"); - return has_value() ? **this : static_cast(std::forward(u)); - } - - /// Destroys the stored value if one exists, making the optional empty - __thrust_exec_check_disable__ - __host__ __device__ - void reset() noexcept { - if (has_value()) { - this->m_value.~T(); - this->m_has_value = false; - } - } -}; - -/// \group relop -/// \brief Compares two optional objects -/// \details If both optionals contain a value, they are compared with `T`s -/// relational operators. Otherwise `lhs` and `rhs` are equal only if they are -/// both empty, and `lhs` is less than `rhs` only if `rhs` is empty and `lhs` -/// is not. -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator==(const optional &lhs, - const optional &rhs) { - return lhs.has_value() == rhs.has_value() && - (!lhs.has_value() || *lhs == *rhs); -} -/// \group relop -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator!=(const optional &lhs, - const optional &rhs) { - return lhs.has_value() != rhs.has_value() || - (lhs.has_value() && *lhs != *rhs); -} -/// \group relop -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator<(const optional &lhs, - const optional &rhs) { - return rhs.has_value() && (!lhs.has_value() || *lhs < *rhs); -} -/// \group relop -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator>(const optional &lhs, - const optional &rhs) { - return lhs.has_value() && (!rhs.has_value() || *lhs > *rhs); -} -/// \group relop -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator<=(const optional &lhs, - const optional &rhs) { - return !lhs.has_value() || (rhs.has_value() && *lhs <= *rhs); -} -/// \group relop -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator>=(const optional &lhs, - const optional &rhs) { - return !rhs.has_value() || (lhs.has_value() && *lhs >= *rhs); -} - -/// \group relop_nullopt -/// \brief Compares an optional to a `nullopt` -/// \details Equivalent to comparing the optional to an empty optional -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator==(const optional &lhs, nullopt_t) noexcept { - return !lhs.has_value(); -} -/// \group relop_nullopt -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator==(nullopt_t, const optional &rhs) noexcept { - return !rhs.has_value(); -} -/// \group relop_nullopt -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator!=(const optional &lhs, nullopt_t) noexcept { - return lhs.has_value(); -} -/// \group relop_nullopt -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator!=(nullopt_t, const optional &rhs) noexcept { - return rhs.has_value(); -} -/// \group relop_nullopt -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator<(const optional &, nullopt_t) noexcept { - return false; -} -/// \group relop_nullopt -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator<(nullopt_t, const optional 
&rhs) noexcept { - return rhs.has_value(); -} -/// \group relop_nullopt -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator<=(const optional &lhs, nullopt_t) noexcept { - return !lhs.has_value(); -} -/// \group relop_nullopt -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator<=(nullopt_t, const optional &) noexcept { - return true; -} -/// \group relop_nullopt -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator>(const optional &lhs, nullopt_t) noexcept { - return lhs.has_value(); -} -/// \group relop_nullopt -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator>(nullopt_t, const optional &) noexcept { - return false; -} -/// \group relop_nullopt -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator>=(const optional &, nullopt_t) noexcept { - return true; -} -/// \group relop_nullopt -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator>=(nullopt_t, const optional &rhs) noexcept { - return !rhs.has_value(); -} - -/// \group relop_t -/// \brief Compares the optional with a value. -/// \details If the optional has a value, it is compared with the other value -/// using `T`s relational operators. Otherwise, the optional is considered -/// less than the value. -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator==(const optional &lhs, const U &rhs) { - return lhs.has_value() ? *lhs == rhs : false; -} -/// \group relop_t -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator==(const U &lhs, const optional &rhs) { - return rhs.has_value() ? lhs == *rhs : false; -} -/// \group relop_t -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator!=(const optional &lhs, const U &rhs) { - return lhs.has_value() ? *lhs != rhs : true; -} -/// \group relop_t -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator!=(const U &lhs, const optional &rhs) { - return rhs.has_value() ? lhs != *rhs : true; -} -/// \group relop_t -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator<(const optional &lhs, const U &rhs) { - return lhs.has_value() ? *lhs < rhs : true; -} -/// \group relop_t -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator<(const U &lhs, const optional &rhs) { - return rhs.has_value() ? lhs < *rhs : false; -} -/// \group relop_t -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator<=(const optional &lhs, const U &rhs) { - return lhs.has_value() ? *lhs <= rhs : true; -} -/// \group relop_t -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator<=(const U &lhs, const optional &rhs) { - return rhs.has_value() ? lhs <= *rhs : false; -} -/// \group relop_t -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator>(const optional &lhs, const U &rhs) { - return lhs.has_value() ? *lhs > rhs : false; -} -/// \group relop_t -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator>(const U &lhs, const optional &rhs) { - return rhs.has_value() ? 
lhs > *rhs : true; -} -/// \group relop_t -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator>=(const optional &lhs, const U &rhs) { - return lhs.has_value() ? *lhs >= rhs : false; -} -/// \group relop_t -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr bool operator>=(const U &lhs, const optional &rhs) { - return rhs.has_value() ? lhs >= *rhs : true; -} - -/// \synopsis template \nvoid swap(optional &lhs, optional &rhs); -__thrust_exec_check_disable__ -template ::value> * = nullptr, - detail::enable_if_t::value> * = nullptr> -__host__ __device__ -void swap(optional &lhs, - optional &rhs) noexcept(noexcept(lhs.swap(rhs))) { - return lhs.swap(rhs); -} - -namespace detail { -struct i_am_secret {}; -} // namespace detail - -__thrust_exec_check_disable__ -template ::value, - detail::decay_t, T>> -__host__ __device__ -inline constexpr optional make_optional(U &&v) { - return optional(std::forward(v)); -} - -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr optional make_optional(Args &&... args) { - return optional(in_place, std::forward(args)...); -} -__thrust_exec_check_disable__ -template -__host__ __device__ -inline constexpr optional make_optional(std::initializer_list il, - Args &&... args) { - return optional(in_place, il, std::forward(args)...); -} - -#if THRUST_CPP_DIALECT >= 2017 -template optional(T)->optional; -#endif - -/// \exclude -namespace detail { -#ifdef THRUST_OPTIONAL_CPP14 -__thrust_exec_check_disable__ -template (), - *std::declval())), - detail::enable_if_t::value> * = nullptr> -__host__ __device__ -constexpr auto optional_map_impl(Opt &&opt, F &&f) { - return opt.has_value() - ? detail::invoke(std::forward(f), *std::forward(opt)) - : optional(nullopt); -} - -__thrust_exec_check_disable__ -template (), - *std::declval())), - detail::enable_if_t::value> * = nullptr> -__host__ __device__ -auto optional_map_impl(Opt &&opt, F &&f) { - if (opt.has_value()) { - detail::invoke(std::forward(f), *std::forward(opt)); - return make_optional(monostate{}); - } - - return optional(nullopt); -} -#else -__thrust_exec_check_disable__ -template (), - *std::declval())), - detail::enable_if_t::value> * = nullptr> -__host__ __device__ -constexpr auto optional_map_impl(Opt &&opt, F &&f) -> optional { - return opt.has_value() - ? detail::invoke(std::forward(f), *std::forward(opt)) - : optional(nullopt); -} - -__thrust_exec_check_disable__ -template (), - *std::declval())), - detail::enable_if_t::value> * = nullptr> -__host__ __device__ -auto optional_map_impl(Opt &&opt, F &&f) -> optional { - if (opt.has_value()) { - detail::invoke(std::forward(f), *std::forward(opt)); - return monostate{}; - } - - return nullopt; -} -#endif -} // namespace detail - -/// Specialization for when `T` is a reference. `optional` acts similarly -/// to a `T*`, but provides more operations and shows intent more clearly. -/// -/// *Examples*: -/// -/// ``` -/// int i = 42; -/// thrust::optional o = i; -/// *o == 42; //true -/// i = 12; -/// *o = 12; //true -/// &*o == &i; //true -/// ``` -/// -/// Assignment has rebind semantics rather than assign-through semantics: -/// -/// ``` -/// int j = 8; -/// o = j; -/// -/// &*o == &j; //true -/// ``` -template class optional { -public: -// The different versions for C++14 and 11 are needed because deduced return -// types are not SFINAE-safe. This provides better support for things like -// generic lambdas. C.f. 
-// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0826r0.html -#if defined(THRUST_OPTIONAL_CPP14) && !defined(THRUST_OPTIONAL_GCC49) && \ - !defined(THRUST_OPTIONAL_GCC54) && !defined(THRUST_OPTIONAL_GCC55) - /// \group and_then - /// Carries out some operation which returns an optional on the stored - /// object if there is one. \requires `std::invoke(std::forward(f), - /// value())` returns a `std::optional` for some `U`. \returns Let `U` be - /// the result of `std::invoke(std::forward(f), value())`. Returns a - /// `std::optional`. The return value is empty if `*this` is empty, - /// otherwise the return value of `std::invoke(std::forward(f), value())` - /// is returned. - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) &; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR auto and_then(F &&f) & { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), **this) - : result(nullopt); - } - - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR auto and_then(F &&f) && { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), **this) - : result(nullopt); - } - - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) const &; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr auto and_then(F &&f) const & { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), **this) - : result(nullopt); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) const &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr auto and_then(F &&f) const && { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), **this) - : result(nullopt); - } -#endif -#else - /// \group and_then - /// Carries out some operation which returns an optional on the stored - /// object if there is one. \requires `std::invoke(std::forward(f), - /// value())` returns a `std::optional` for some `U`. \returns Let `U` be - /// the result of `std::invoke(std::forward(f), value())`. Returns a - /// `std::optional`. The return value is empty if `*this` is empty, - /// otherwise the return value of `std::invoke(std::forward(f), value())` - /// is returned. - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) &; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR detail::invoke_result_t and_then(F &&f) & { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? 
detail::invoke(std::forward(f), **this) - : result(nullopt); - } - - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR detail::invoke_result_t and_then(F &&f) && { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), **this) - : result(nullopt); - } - - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) const &; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr detail::invoke_result_t and_then(F &&f) const & { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), **this) - : result(nullopt); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group and_then - /// \synopsis template \nconstexpr auto and_then(F &&f) const &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr detail::invoke_result_t and_then(F &&f) const && { - using result = detail::invoke_result_t; - static_assert(detail::is_optional::value, - "F must return an optional"); - - return has_value() ? detail::invoke(std::forward(f), **this) - : result(nullopt); - } -#endif -#endif - -#if defined(THRUST_OPTIONAL_CPP14) && !defined(THRUST_OPTIONAL_GCC49) && \ - !defined(THRUST_OPTIONAL_GCC54) && !defined(THRUST_OPTIONAL_GCC55) - /// \brief Carries out some operation on the stored object if there is one. - /// \returns Let `U` be the result of `std::invoke(std::forward(f), - /// value())`. Returns a `std::optional`. The return value is empty if - /// `*this` is empty, otherwise an `optional` is constructed from the - /// return value of `std::invoke(std::forward(f), value())` and is - /// returned. - /// - /// \group map - /// \synopsis template constexpr auto map(F &&f) &; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR auto map(F &&f) & { - return detail::optional_map_impl(*this, std::forward(f)); - } - - /// \group map - /// \synopsis template constexpr auto map(F &&f) &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR auto map(F &&f) && { - return detail::optional_map_impl(std::move(*this), std::forward(f)); - } - - /// \group map - /// \synopsis template constexpr auto map(F &&f) const&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr auto map(F &&f) const & { - return detail::optional_map_impl(*this, std::forward(f)); - } - - /// \group map - /// \synopsis template constexpr auto map(F &&f) const&&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr auto map(F &&f) const && { - return detail::optional_map_impl(std::move(*this), std::forward(f)); - } -#else - /// \brief Carries out some operation on the stored object if there is one. - /// \returns Let `U` be the result of `std::invoke(std::forward(f), - /// value())`. Returns a `std::optional`. The return value is empty if - /// `*this` is empty, otherwise an `optional` is constructed from the - /// return value of `std::invoke(std::forward(f), value())` and is - /// returned. 
- /// - /// \group map - /// \synopsis template auto map(F &&f) &; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR decltype(detail::optional_map_impl(std::declval(), - std::declval())) - map(F &&f) & { - return detail::optional_map_impl(*this, std::forward(f)); - } - - /// \group map - /// \synopsis template auto map(F &&f) &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR decltype(detail::optional_map_impl(std::declval(), - std::declval())) - map(F &&f) && { - return detail::optional_map_impl(std::move(*this), std::forward(f)); - } - - /// \group map - /// \synopsis template auto map(F &&f) const&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr decltype(detail::optional_map_impl(std::declval(), - std::declval())) - map(F &&f) const & { - return detail::optional_map_impl(*this, std::forward(f)); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group map - /// \synopsis template auto map(F &&f) const&&; - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr decltype(detail::optional_map_impl(std::declval(), - std::declval())) - map(F &&f) const && { - return detail::optional_map_impl(std::move(*this), std::forward(f)); - } -#endif -#endif - - /// \brief Calls `f` if the optional is empty - /// \requires `std::invoke_result_t` must be void or convertible to - /// `optional`. \effects If `*this` has a value, returns `*this`. - /// Otherwise, if `f` returns `void`, calls `std::forward(f)` and returns - /// `std::nullopt`. Otherwise, returns `std::forward(f)()`. - /// - /// \group or_else - /// \synopsis template optional or_else (F &&f) &; - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional - THRUST_OPTIONAL_CPP11_CONSTEXPR or_else(F &&f) & { - if (has_value()) - return *this; - - std::forward(f)(); - return nullopt; - } - - /// \exclude - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional - THRUST_OPTIONAL_CPP11_CONSTEXPR or_else(F &&f) & { - return has_value() ? *this : std::forward(f)(); - } - - /// \group or_else - /// \synopsis template optional or_else (F &&f) &&; - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional or_else(F &&f) && { - if (has_value()) - return std::move(*this); - - std::forward(f)(); - return nullopt; - } - - /// \exclude - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional THRUST_OPTIONAL_CPP11_CONSTEXPR or_else(F &&f) && { - return has_value() ? std::move(*this) : std::forward(f)(); - } - - /// \group or_else - /// \synopsis template optional or_else (F &&f) const &; - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional or_else(F &&f) const & { - if (has_value()) - return *this; - - std::forward(f)(); - return nullopt; - } - - /// \exclude - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional THRUST_OPTIONAL_CPP11_CONSTEXPR or_else(F &&f) const & { - return has_value() ? 
*this : std::forward(f)(); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \exclude - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional or_else(F &&f) const && { - if (has_value()) - return std::move(*this); - - std::forward(f)(); - return nullopt; - } - - /// \exclude - __thrust_exec_check_disable__ - template * = nullptr> - __host__ __device__ - optional or_else(F &&f) const && { - return has_value() ? std::move(*this) : std::forward(f)(); - } -#endif - - /// \brief Maps the stored value with `f` if there is one, otherwise returns - /// `u`. - /// - /// \details If there is a value stored, then `f` is called with `**this` - /// and the value is returned. Otherwise `u` is returned. - /// - /// \group map_or - __thrust_exec_check_disable__ - template - __host__ __device__ - U map_or(F &&f, U &&u) & { - return has_value() ? detail::invoke(std::forward(f), **this) - : std::forward(u); - } - - /// \group map_or - __thrust_exec_check_disable__ - template - __host__ __device__ - U map_or(F &&f, U &&u) && { - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : std::forward(u); - } - - /// \group map_or - __thrust_exec_check_disable__ - template - __host__ __device__ - U map_or(F &&f, U &&u) const & { - return has_value() ? detail::invoke(std::forward(f), **this) - : std::forward(u); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group map_or - __thrust_exec_check_disable__ - template - __host__ __device__ - U map_or(F &&f, U &&u) const && { - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : std::forward(u); - } -#endif - - /// \brief Maps the stored value with `f` if there is one, otherwise calls - /// `u` and returns the result. - /// - /// \details If there is a value stored, then `f` is - /// called with `**this` and the value is returned. Otherwise - /// `std::forward(u)()` is returned. - /// - /// \group map_or_else - /// \synopsis template \nauto map_or_else(F &&f, U &&u) &; - __thrust_exec_check_disable__ - template - __host__ __device__ - detail::invoke_result_t map_or_else(F &&f, U &&u) & { - return has_value() ? detail::invoke(std::forward(f), **this) - : std::forward(u)(); - } - - /// \group map_or_else - /// \synopsis template \nauto map_or_else(F &&f, U &&u) - /// &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - detail::invoke_result_t map_or_else(F &&f, U &&u) && { - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : std::forward(u)(); - } - - /// \group map_or_else - /// \synopsis template \nauto map_or_else(F &&f, U &&u) - /// const &; - __thrust_exec_check_disable__ - template - __host__ __device__ - detail::invoke_result_t map_or_else(F &&f, U &&u) const & { - return has_value() ? detail::invoke(std::forward(f), **this) - : std::forward(u)(); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group map_or_else - /// \synopsis template \nauto map_or_else(F &&f, U &&u) - /// const &&; - __thrust_exec_check_disable__ - template - __host__ __device__ - detail::invoke_result_t map_or_else(F &&f, U &&u) const && { - return has_value() ? detail::invoke(std::forward(f), std::move(**this)) - : std::forward(u)(); - } -#endif - - /// \returns `u` if `*this` has a value, otherwise an empty optional. - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr optional::type> conjunction(U &&u) const { - using result = optional>; - return has_value() ? 
result{u} : result{nullopt}; - } - - /// \returns `rhs` if `*this` is empty, otherwise the current value. - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional disjunction(const optional &rhs) & { - return has_value() ? *this : rhs; - } - - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - constexpr optional disjunction(const optional &rhs) const & { - return has_value() ? *this : rhs; - } - - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional disjunction(const optional &rhs) && { - return has_value() ? std::move(*this) : rhs; - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - constexpr optional disjunction(const optional &rhs) const && { - return has_value() ? std::move(*this) : rhs; - } -#endif - - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional disjunction(optional &&rhs) & { - return has_value() ? *this : std::move(rhs); - } - - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - constexpr optional disjunction(optional &&rhs) const & { - return has_value() ? *this : std::move(rhs); - } - - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional disjunction(optional &&rhs) && { - return has_value() ? std::move(*this) : std::move(rhs); - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group disjunction - __thrust_exec_check_disable__ - __host__ __device__ - constexpr optional disjunction(optional &&rhs) const && { - return has_value() ? std::move(*this) : std::move(rhs); - } -#endif - - /// Takes the value out of the optional, leaving it empty - /// \group take - __thrust_exec_check_disable__ - __host__ __device__ - optional take() & { - optional ret = *this; - reset(); - return ret; - } - - /// \group take - __thrust_exec_check_disable__ - __host__ __device__ - optional take() const & { - optional ret = *this; - reset(); - return ret; - } - - /// \group take - __thrust_exec_check_disable__ - __host__ __device__ - optional take() && { - optional ret = std::move(*this); - reset(); - return ret; - } - -#ifndef THRUST_OPTIONAL_NO_CONSTRR - /// \group take - __thrust_exec_check_disable__ - __host__ __device__ - optional take() const && { - optional ret = std::move(*this); - reset(); - return ret; - } -#endif - - using value_type = T &; - - /// Constructs an optional that does not contain a value. - /// \group ctor_empty - __thrust_exec_check_disable__ - __host__ __device__ - constexpr optional() noexcept : m_value(nullptr) {} - - /// \group ctor_empty - __thrust_exec_check_disable__ - __host__ __device__ - constexpr optional(nullopt_t) noexcept : m_value(nullptr) {} - - /// Copy constructor - /// - /// If `rhs` contains a value, the stored value is direct-initialized with - /// it. Otherwise, the constructed optional is empty. - __thrust_exec_check_disable__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional(const optional &rhs) noexcept = default; - - /// Move constructor - /// - /// If `rhs` contains a value, the stored value is direct-initialized with - /// it. Otherwise, the constructed optional is empty. - __thrust_exec_check_disable__ - THRUST_OPTIONAL_CPP11_CONSTEXPR optional(optional &&rhs) = default; - - /// Constructs the stored value with `u`. 
- /// \synopsis template constexpr optional(U &&u); - __thrust_exec_check_disable__ - template >::value> - * = nullptr> - __host__ __device__ - constexpr optional(U &&u) : m_value(addressof(u)) { - static_assert(std::is_lvalue_reference::value, "U must be an lvalue"); - } - - /// \exclude - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr explicit optional(const optional &rhs) : optional(*rhs) {} - - /// No-op - __thrust_exec_check_disable__ - ~optional() = default; - - /// Assignment to empty. - /// - /// Destroys the current value if there is one. - __thrust_exec_check_disable__ - __host__ __device__ - optional &operator=(nullopt_t) noexcept { - m_value = nullptr; - return *this; - } - - /// Copy assignment. - /// - /// Rebinds this optional to the referee of `rhs` if there is one. Otherwise - /// resets the stored value in `*this`. - __thrust_exec_check_disable__ - optional &operator=(const optional &rhs) = default; - - /// Rebinds this optional to `u`. - /// - /// \requires `U` must be an lvalue reference. - /// \synopsis optional &operator=(U &&u); - __thrust_exec_check_disable__ - template >::value> - * = nullptr> - __host__ __device__ - optional &operator=(U &&u) { - static_assert(std::is_lvalue_reference::value, "U must be an lvalue"); - m_value = addressof(u); - return *this; - } - - /// Converting copy assignment operator. - /// - /// Rebinds this optional to the referee of `rhs` if there is one. Otherwise - /// resets the stored value in `*this`. - __thrust_exec_check_disable__ - template - __host__ __device__ - optional &operator=(const optional &rhs) { - m_value = addressof(rhs.value()); - return *this; - } - - /// Constructs the value in-place, destroying the current one if there is - /// one. - /// - /// \group emplace - __thrust_exec_check_disable__ - template - __host__ __device__ - T &emplace(Args &&... args) noexcept { - static_assert(std::is_constructible::value, - "T must be constructible with Args"); - - *this = nullopt; - this->construct(std::forward(args)...); - } - - /// Swaps this optional with the other. - /// - /// If neither optionals have a value, nothing happens. - /// If both have a value, the values are swapped. - /// If one has a value, it is moved to the other and the movee is left - /// valueless. 
- __thrust_exec_check_disable__ - __host__ __device__ - void swap(optional &rhs) noexcept { std::swap(m_value, rhs.m_value); } - - /// \returns a pointer to the stored value - /// \requires a value is stored - /// \group pointer - /// \synopsis constexpr const T *operator->() const; - __thrust_exec_check_disable__ - __host__ __device__ - constexpr const T *operator->() const { return m_value; } - - /// \group pointer - /// \synopsis constexpr T *operator->(); - __thrust_exec_check_disable__ - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T *operator->() { return m_value; } - - /// \returns the stored value - /// \requires a value is stored - /// \group deref - /// \synopsis constexpr T &operator*(); - __thrust_exec_check_disable__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T &operator*() { return *m_value; } - - /// \group deref - /// \synopsis constexpr const T &operator*() const; - __thrust_exec_check_disable__ - __host__ __device__ - constexpr const T &operator*() const { return *m_value; } - - /// \returns whether or not the optional has a value - /// \group has_value - __thrust_exec_check_disable__ - __host__ __device__ - constexpr bool has_value() const noexcept { return m_value != nullptr; } - - /// \group has_value - __thrust_exec_check_disable__ - __host__ __device__ - constexpr explicit operator bool() const noexcept { - return m_value != nullptr; - } - - /// \returns the contained value if there is one, otherwise throws - /// [bad_optional_access] - /// \group value - /// synopsis constexpr T &value(); - __host__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T &value() { - if (has_value()) - return *m_value; - throw bad_optional_access(); - } - /// \group value - /// \synopsis constexpr const T &value() const; - __host__ - THRUST_OPTIONAL_CPP11_CONSTEXPR const T &value() const { - if (has_value()) - return *m_value; - throw bad_optional_access(); - } - - /// \returns the stored value if there is one, otherwise returns `u` - /// \group value_or - __thrust_exec_check_disable__ - template - __host__ __device__ - constexpr T value_or(U &&u) const & { - static_assert(std::is_copy_constructible::value && - std::is_convertible::value, - "T must be copy constructible and convertible from U"); - return has_value() ? **this : static_cast(std::forward(u)); - } - - /// \group value_or - __thrust_exec_check_disable__ - template - __host__ __device__ - THRUST_OPTIONAL_CPP11_CONSTEXPR T value_or(U &&u) && { - static_assert(std::is_move_constructible::value && - std::is_convertible::value, - "T must be move constructible and convertible from U"); - return has_value() ? 
**this : static_cast(std::forward(u)); - } - - /// Destroys the stored value if one exists, making the optional empty - __thrust_exec_check_disable__ - void reset() noexcept { m_value = nullptr; } - -private: - T *m_value; -}; - -} // end namespace thrust - -namespace std { -// TODO SFINAE -template struct hash> { - __thrust_exec_check_disable__ - __host__ __device__ - ::std::size_t operator()(const thrust::optional &o) const { - if (!o.has_value()) - return 0; - - return std::hash>()(*o); - } -}; -} // namespace std - -#endif // THRUST_CPP_DIALECT >= 2011 - diff --git a/spaces/CVPR/LIVE/thrust/thrust/random.h b/spaces/CVPR/LIVE/thrust/thrust/random.h deleted file mode 100644 index c0e9e2282414b6e891808337eef41d016abbbe7e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/random.h +++ /dev/null @@ -1,120 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file random.h - * \brief Pseudo-random number generators. - */ - -#pragma once - -#include -#include - -// RNGs -#include -#include -#include -#include -#include - -// distributions -#include -#include -#include - -namespace thrust -{ - - -/*! \addtogroup random Random Number Generation - * \{ - */ - - -/*! \namespace thrust::random - * \brief \p thrust::random is the namespace which contains random number engine class templates, - * random number engine adaptor class templates, engines with predefined parameters, - * and random number distribution class templates. They are provided in a separate namespace - * for import convenience but are also aliased in the top-level \p thrust namespace for - * easy access. - */ -namespace random -{ - -/*! \addtogroup predefined_random Random Number Engines with Predefined Parameters - * \ingroup random - * \{ - */ - -/*! \typedef ranlux24 - * \brief A random number engine with predefined parameters which implements the - * RANLUX level-3 random number generation algorithm. - * \note The 10000th consecutive invocation of a default-constructed object of type \p ranlux24 - * shall produce the value \c 9901578 . - */ -typedef discard_block_engine ranlux24; - - -/*! \typedef ranlux48 - * \brief A random number engine with predefined parameters which implements the - * RANLUX level-4 random number generation algorithm. - * \note The 10000th consecutive invocation of a default-constructed object of type \p ranlux48 - * shall produce the value \c 88229545517833 . - */ -typedef discard_block_engine ranlux48; - - -/*! \typedef taus88 - * \brief A random number engine with predefined parameters which implements - * L'Ecuyer's 1996 three-component Tausworthe random number generator. - * - * \note The 10000th consecutive invocation of a default-constructed object of type \p taus88 - * shall produce the value \c 3535848941 . - */ -typedef xor_combine_engine< - linear_feedback_shift_engine, - 0, - xor_combine_engine< - linear_feedback_shift_engine, 0, - linear_feedback_shift_engine, 0 - >, - 0 -> taus88; - -/*! 
\typedef default_random_engine - * \brief An implementation-defined "default" random number engine. - * \note \p default_random_engine is currently an alias for \p minstd_rand, and may change - * in a future version. - */ -typedef minstd_rand default_random_engine; - -/*! \} // end predefined_random - */ - -} // end random - - -/*! \} // end random - */ - -// import names into thrust:: -using random::ranlux24; -using random::ranlux48; -using random::taus88; -using random::default_random_engine; - -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/make_unsigned_special.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/make_unsigned_special.h deleted file mode 100644 index 683647cbede60d62a4160efe58e9e62ba53c9d12..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/make_unsigned_special.h +++ /dev/null @@ -1,42 +0,0 @@ -/* - * Copyright 2019 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -namespace thrust -{ -namespace cuda_cub { - -namespace detail { - - template - struct make_unsigned_special; - - template<> - struct make_unsigned_special { typedef unsigned int type; }; - - // this is special, because CUDA's atomicAdd doesn't have an overload - // for unsigned long, for some godforsaken reason - template<> - struct make_unsigned_special { typedef unsigned long long type; }; - - template<> - struct make_unsigned_special { typedef unsigned long long type; }; - -} -} -} // end namespace thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/transform_reduce.h b/spaces/CVPR/LIVE/thrust/thrust/transform_reduce.h deleted file mode 100644 index 32e172d1e1a818b791ec1e567b35dc4aba358d18..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/transform_reduce.h +++ /dev/null @@ -1,198 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file transform_reduce.h - * \brief Fused transform / reduction - */ - -#pragma once - -#include -#include - -namespace thrust -{ - - -/*! \addtogroup reductions - * \{ - * \addtogroup transformed_reductions Transformed Reductions - * \ingroup reductions - * \{ - */ - - -/*! \p transform_reduce fuses the \p transform and \p reduce operations. - * \p transform_reduce is equivalent to performing a transformation defined by - * \p unary_op into a temporary sequence and then performing \p reduce on the - * transformed sequence. 
In most cases, fusing these two operations together is - * more efficient, since fewer memory reads and writes are required. - * - * \p transform_reduce performs a reduction on the transformation of the - * sequence [first, last) according to \p unary_op. Specifically, - * \p unary_op is applied to each element of the sequence and then the result - * is reduced to a single value with \p binary_op using the initial value - * \p init. Note that the transformation \p unary_op is not applied to - * the initial value \p init. The order of reduction is not specified, - * so \p binary_op must be both commutative and associative. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the sequence. - * \param last The end of the sequence. - * \param unary_op The function to apply to each element of the input sequence. - * \param init The result is initialized to this value. - * \param binary_op The reduction operation. - * \return The result of the transformed reduction. - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam InputIterator is a model of Input Iterator, - * and \p InputIterator's \c value_type is convertible to \p UnaryFunction's \c argument_type. - * \tparam UnaryFunction is a model of Unary Function, - * and \p UnaryFunction's \c result_type is convertible to \c OutputType. - * \tparam OutputType is a model of Assignable, - * and is convertible to \p BinaryFunction's \c first_argument_type and \c second_argument_type. - * \tparam BinaryFunction is a model of Binary Function, - * and \p BinaryFunction's \c result_type is convertible to \p OutputType. - * - * The following code snippet demonstrates how to use \p transform_reduce - * to compute the maximum value of the absolute value of the elements - * of a range using the \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * - * template - * struct absolute_value : public unary_function - * { - * __host__ __device__ T operator()(const T &x) const - * { - * return x < T(0) ? -x : x; - * } - * }; - * - * ... - * - * int data[6] = {-1, 0, -2, -2, 1, -3}; - * int result = thrust::transform_reduce(thrust::host, - * data, data + 6, - * absolute_value(), - * 0, - * thrust::maximum()); - * // result == 3 - * \endcode - * - * \see \c transform - * \see \c reduce - */ -template -__host__ __device__ - OutputType transform_reduce(const thrust::detail::execution_policy_base &exec, - InputIterator first, - InputIterator last, - UnaryFunction unary_op, - OutputType init, - BinaryFunction binary_op); - - -/*! \p transform_reduce fuses the \p transform and \p reduce operations. - * \p transform_reduce is equivalent to performing a transformation defined by - * \p unary_op into a temporary sequence and then performing \p reduce on the - * transformed sequence. In most cases, fusing these two operations together is - * more efficient, since fewer memory reads and writes are required. - * - * \p transform_reduce performs a reduction on the transformation of the - * sequence [first, last) according to \p unary_op. Specifically, - * \p unary_op is applied to each element of the sequence and then the result - * is reduced to a single value with \p binary_op using the initial value - * \p init. Note that the transformation \p unary_op is not applied to - * the initial value \p init. 
The order of reduction is not specified, - * so \p binary_op must be both commutative and associative. - * - * \param first The beginning of the sequence. - * \param last The end of the sequence. - * \param unary_op The function to apply to each element of the input sequence. - * \param init The result is initialized to this value. - * \param binary_op The reduction operation. - * \return The result of the transformed reduction. - * - * \tparam InputIterator is a model of Input Iterator, - * and \p InputIterator's \c value_type is convertible to \p UnaryFunction's \c argument_type. - * \tparam UnaryFunction is a model of Unary Function, - * and \p UnaryFunction's \c result_type is convertible to \c OutputType. - * \tparam OutputType is a model of Assignable, - * and is convertible to \p BinaryFunction's \c first_argument_type and \c second_argument_type. - * \tparam BinaryFunction is a model of Binary Function, - * and \p BinaryFunction's \c result_type is convertible to \p OutputType. - * - * The following code snippet demonstrates how to use \p transform_reduce - * to compute the maximum value of the absolute value of the elements - * of a range. - * - * \code - * #include - * #include - * - * template - * struct absolute_value : public unary_function - * { - * __host__ __device__ T operator()(const T &x) const - * { - * return x < T(0) ? -x : x; - * } - * }; - * - * ... - * - * int data[6] = {-1, 0, -2, -2, 1, -3}; - * int result = thrust::transform_reduce(data, data + 6, - * absolute_value(), - * 0, - * thrust::maximum()); - * // result == 3 - * \endcode - * - * \see \c transform - * \see \c reduce - */ -template - OutputType transform_reduce(InputIterator first, - InputIterator last, - UnaryFunction unary_op, - OutputType init, - BinaryFunction binary_op); - - -/*! \} // end transformed_reductions - * \} // end reductions - */ - - -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/mask_head.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/mask_head.py deleted file mode 100644 index 5ac5c4b9aaa34653d6c50e512a5a4300da450c7f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/mask_head.py +++ /dev/null @@ -1,292 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from typing import List -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ConvTranspose2d, ShapeSpec, cat, get_norm -from detectron2.structures import Instances -from detectron2.utils.events import get_event_storage -from detectron2.utils.registry import Registry - -__all__ = [ - "BaseMaskRCNNHead", - "MaskRCNNConvUpsampleHead", - "build_mask_head", - "ROI_MASK_HEAD_REGISTRY", -] - - -ROI_MASK_HEAD_REGISTRY = Registry("ROI_MASK_HEAD") -ROI_MASK_HEAD_REGISTRY.__doc__ = """ -Registry for mask heads, which predicts instance masks given -per-region features. - -The registered object will be called with `obj(cfg, input_shape)`. -""" - - -@torch.jit.unused -def mask_rcnn_loss(pred_mask_logits: torch.Tensor, instances: List[Instances], vis_period: int = 0): - """ - Compute the mask prediction loss defined in the Mask R-CNN paper. 
- - Args: - pred_mask_logits (Tensor): A tensor of shape (B, C, Hmask, Wmask) or (B, 1, Hmask, Wmask) - for class-specific or class-agnostic, where B is the total number of predicted masks - in all images, C is the number of foreground classes, and Hmask, Wmask are the height - and width of the mask predictions. The values are logits. - instances (list[Instances]): A list of N Instances, where N is the number of images - in the batch. These instances are in 1:1 - correspondence with the pred_mask_logits. The ground-truth labels (class, box, mask, - ...) associated with each instance are stored in fields. - vis_period (int): the period (in steps) to dump visualization. - - Returns: - mask_loss (Tensor): A scalar tensor containing the loss. - """ - cls_agnostic_mask = pred_mask_logits.size(1) == 1 - total_num_masks = pred_mask_logits.size(0) - mask_side_len = pred_mask_logits.size(2) - assert pred_mask_logits.size(2) == pred_mask_logits.size(3), "Mask prediction must be square!" - - gt_classes = [] - gt_masks = [] - for instances_per_image in instances: - if len(instances_per_image) == 0: - continue - if not cls_agnostic_mask: - gt_classes_per_image = instances_per_image.gt_classes.to(dtype=torch.int64) - gt_classes.append(gt_classes_per_image) - - gt_masks_per_image = instances_per_image.gt_masks.crop_and_resize( - instances_per_image.proposal_boxes.tensor, mask_side_len - ).to(device=pred_mask_logits.device) - # A tensor of shape (N, M, M), N=#instances in the image; M=mask_side_len - gt_masks.append(gt_masks_per_image) - - if len(gt_masks) == 0: - return pred_mask_logits.sum() * 0 - - gt_masks = cat(gt_masks, dim=0) - - if cls_agnostic_mask: - pred_mask_logits = pred_mask_logits[:, 0] - else: - indices = torch.arange(total_num_masks) - gt_classes = cat(gt_classes, dim=0) - pred_mask_logits = pred_mask_logits[indices, gt_classes] - - if gt_masks.dtype == torch.bool: - gt_masks_bool = gt_masks - else: - # Here we allow gt_masks to be float as well (depend on the implementation of rasterize()) - gt_masks_bool = gt_masks > 0.5 - gt_masks = gt_masks.to(dtype=torch.float32) - - # Log the training accuracy (using gt classes and 0.5 threshold) - mask_incorrect = (pred_mask_logits > 0.0) != gt_masks_bool - mask_accuracy = 1 - (mask_incorrect.sum().item() / max(mask_incorrect.numel(), 1.0)) - num_positive = gt_masks_bool.sum().item() - false_positive = (mask_incorrect & ~gt_masks_bool).sum().item() / max( - gt_masks_bool.numel() - num_positive, 1.0 - ) - false_negative = (mask_incorrect & gt_masks_bool).sum().item() / max(num_positive, 1.0) - - storage = get_event_storage() - storage.put_scalar("mask_rcnn/accuracy", mask_accuracy) - storage.put_scalar("mask_rcnn/false_positive", false_positive) - storage.put_scalar("mask_rcnn/false_negative", false_negative) - if vis_period > 0 and storage.iter % vis_period == 0: - pred_masks = pred_mask_logits.sigmoid() - vis_masks = torch.cat([pred_masks, gt_masks], axis=2) - name = "Left: mask prediction; Right: mask GT" - for idx, vis_mask in enumerate(vis_masks): - vis_mask = torch.stack([vis_mask] * 3, axis=0) - storage.put_image(name + f" ({idx})", vis_mask) - - mask_loss = F.binary_cross_entropy_with_logits(pred_mask_logits, gt_masks, reduction="mean") - return mask_loss - - -def mask_rcnn_inference(pred_mask_logits: torch.Tensor, pred_instances: List[Instances]): - """ - Convert pred_mask_logits to estimated foreground probability masks while also - extracting only the masks for the predicted classes in pred_instances. 
For each - predicted box, the mask of the same class is attached to the instance by adding a - new "pred_masks" field to pred_instances. - - Args: - pred_mask_logits (Tensor): A tensor of shape (B, C, Hmask, Wmask) or (B, 1, Hmask, Wmask) - for class-specific or class-agnostic, where B is the total number of predicted masks - in all images, C is the number of foreground classes, and Hmask, Wmask are the height - and width of the mask predictions. The values are logits. - pred_instances (list[Instances]): A list of N Instances, where N is the number of images - in the batch. Each Instances must have field "pred_classes". - - Returns: - None. pred_instances will contain an extra "pred_masks" field storing a mask of size (Hmask, - Wmask) for predicted class. Note that the masks are returned as a soft (non-quantized) - masks the resolution predicted by the network; post-processing steps, such as resizing - the predicted masks to the original image resolution and/or binarizing them, is left - to the caller. - """ - cls_agnostic_mask = pred_mask_logits.size(1) == 1 - - if cls_agnostic_mask: - mask_probs_pred = pred_mask_logits.sigmoid() - else: - # Select masks corresponding to the predicted classes - num_masks = pred_mask_logits.shape[0] - class_pred = cat([i.pred_classes for i in pred_instances]) - indices = torch.arange(num_masks, device=class_pred.device) - mask_probs_pred = pred_mask_logits[indices, class_pred][:, None].sigmoid() - # mask_probs_pred.shape: (B, 1, Hmask, Wmask) - - num_boxes_per_image = [len(i) for i in pred_instances] - mask_probs_pred = mask_probs_pred.split(num_boxes_per_image, dim=0) - - for prob, instances in zip(mask_probs_pred, pred_instances): - instances.pred_masks = prob # (1, Hmask, Wmask) - - -class BaseMaskRCNNHead(nn.Module): - """ - Implement the basic Mask R-CNN losses and inference logic described in :paper:`Mask R-CNN` - """ - - @configurable - def __init__(self, *, loss_weight: float = 1.0, vis_period: int = 0): - """ - NOTE: this interface is experimental. - - Args: - loss_weight (float): multiplier of the loss - vis_period (int): visualization period - """ - super().__init__() - self.vis_period = vis_period - self.loss_weight = loss_weight - - @classmethod - def from_config(cls, cfg, input_shape): - return {"vis_period": cfg.VIS_PERIOD} - - def forward(self, x, instances: List[Instances]): - """ - Args: - x: input region feature(s) provided by :class:`ROIHeads`. - instances (list[Instances]): contains the boxes & labels corresponding - to the input features. - Exact format is up to its caller to decide. - Typically, this is the foreground instances in training, with - "proposal_boxes" field and other gt annotations. - In inference, it contains boxes that are already predicted. - - Returns: - A dict of losses in training. The predicted "instances" in inference. - """ - x = self.layers(x) - if self.training: - return {"loss_mask": mask_rcnn_loss(x, instances, self.vis_period) * self.loss_weight} - else: - mask_rcnn_inference(x, instances) - return instances - - def layers(self, x): - """ - Neural network layers that makes predictions from input features. - """ - raise NotImplementedError - - -# To get torchscript support, we make the head a subclass of `nn.Sequential`. -# Therefore, to add new layers in this head class, please make sure they are -# added in the order they will be used in forward(). 
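-# (Illustrative sketch, not part of the original detectron2 file: both
-# mask_rcnn_loss and mask_rcnn_inference above pick one mask per box out of
-# the class-specific logits of shape (B, C, Hmask, Wmask) via advanced
-# indexing with one class index per box. The tensor names below are
-# hypothetical.)
-#
-#   import torch
-#   logits = torch.randn(3, 80, 28, 28)           # B=3 predicted boxes, C=80 classes
-#   classes = torch.tensor([17, 0, 56])            # one class index per box
-#   per_box = logits[torch.arange(3), classes]     # -> shape (3, 28, 28)
-#   probs = per_box.sigmoid()                      # foreground probability masks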
-@ROI_MASK_HEAD_REGISTRY.register() -class MaskRCNNConvUpsampleHead(BaseMaskRCNNHead, nn.Sequential): - """ - A mask head with several conv layers, plus an upsample layer (with `ConvTranspose2d`). - Predictions are made with a final 1x1 conv layer. - """ - - @configurable - def __init__(self, input_shape: ShapeSpec, *, num_classes, conv_dims, conv_norm="", **kwargs): - """ - NOTE: this interface is experimental. - - Args: - input_shape (ShapeSpec): shape of the input feature - num_classes (int): the number of foreground classes (i.e. background is not - included). 1 if using class agnostic prediction. - conv_dims (list[int]): a list of N>0 integers representing the output dimensions - of N-1 conv layers and the last upsample layer. - conv_norm (str or callable): normalization for the conv layers. - See :func:`detectron2.layers.get_norm` for supported types. - """ - super().__init__(**kwargs) - assert len(conv_dims) >= 1, "conv_dims have to be non-empty!" - - self.conv_norm_relus = [] - - cur_channels = input_shape.channels - for k, conv_dim in enumerate(conv_dims[:-1]): - conv = Conv2d( - cur_channels, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=not conv_norm, - norm=get_norm(conv_norm, conv_dim), - activation=nn.ReLU(), - ) - self.add_module("mask_fcn{}".format(k + 1), conv) - self.conv_norm_relus.append(conv) - cur_channels = conv_dim - - self.deconv = ConvTranspose2d( - cur_channels, conv_dims[-1], kernel_size=2, stride=2, padding=0 - ) - self.add_module("deconv_relu", nn.ReLU()) - cur_channels = conv_dims[-1] - - self.predictor = Conv2d(cur_channels, num_classes, kernel_size=1, stride=1, padding=0) - - for layer in self.conv_norm_relus + [self.deconv]: - weight_init.c2_msra_fill(layer) - # use normal distribution initialization for mask prediction layer - nn.init.normal_(self.predictor.weight, std=0.001) - if self.predictor.bias is not None: - nn.init.constant_(self.predictor.bias, 0) - - @classmethod - def from_config(cls, cfg, input_shape): - ret = super().from_config(cfg, input_shape) - conv_dim = cfg.MODEL.ROI_MASK_HEAD.CONV_DIM - num_conv = cfg.MODEL.ROI_MASK_HEAD.NUM_CONV - ret.update( - conv_dims=[conv_dim] * (num_conv + 1), # +1 for ConvTranspose - conv_norm=cfg.MODEL.ROI_MASK_HEAD.NORM, - input_shape=input_shape, - ) - if cfg.MODEL.ROI_MASK_HEAD.CLS_AGNOSTIC_MASK: - ret["num_classes"] = 1 - else: - ret["num_classes"] = cfg.MODEL.ROI_HEADS.NUM_CLASSES - return ret - - def layers(self, x): - for layer in self: - x = layer(x) - return x - - -def build_mask_head(cfg, input_shape): - """ - Build a mask head defined by `cfg.MODEL.ROI_MASK_HEAD.NAME`. 
- """ - name = cfg.MODEL.ROI_MASK_HEAD.NAME - return ROI_MASK_HEAD_REGISTRY.get(name)(cfg, input_shape) diff --git a/spaces/CrabApple/prompthero-openjourney-v2/README.md b/spaces/CrabApple/prompthero-openjourney-v2/README.md deleted file mode 100644 index f9785b6d2e4dadd39e83ee9a1c8bba6b81c88026..0000000000000000000000000000000000000000 --- a/spaces/CrabApple/prompthero-openjourney-v2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Prompthero Openjourney V2 -emoji: 🔥 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Cropinky/hana_hanak_houses/realesrgan/models/realesrgan_model.py b/spaces/Cropinky/hana_hanak_houses/realesrgan/models/realesrgan_model.py deleted file mode 100644 index c298a09c42433177f90001a0a31d029576072ccd..0000000000000000000000000000000000000000 --- a/spaces/Cropinky/hana_hanak_houses/realesrgan/models/realesrgan_model.py +++ /dev/null @@ -1,258 +0,0 @@ -import numpy as np -import random -import torch -from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt -from basicsr.data.transforms import paired_random_crop -from basicsr.models.srgan_model import SRGANModel -from basicsr.utils import DiffJPEG, USMSharp -from basicsr.utils.img_process_util import filter2D -from basicsr.utils.registry import MODEL_REGISTRY -from collections import OrderedDict -from torch.nn import functional as F - - -@MODEL_REGISTRY.register() -class RealESRGANModel(SRGANModel): - """RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - It mainly performs: - 1. randomly synthesize LQ images in GPU tensors - 2. optimize the networks with GAN training. - """ - - def __init__(self, opt): - super(RealESRGANModel, self).__init__(opt) - self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts - self.usm_sharpener = USMSharp().cuda() # do usm sharpening - self.queue_size = opt.get('queue_size', 180) - - @torch.no_grad() - def _dequeue_and_enqueue(self): - """It is the training pair pool for increasing the diversity in a batch. - - Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a - batch could not have different resize scaling factors. Therefore, we employ this training pair pool - to increase the degradation diversity in a batch. 
- """ - # initialize - b, c, h, w = self.lq.size() - if not hasattr(self, 'queue_lr'): - assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}' - self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda() - _, c, h, w = self.gt.size() - self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda() - self.queue_ptr = 0 - if self.queue_ptr == self.queue_size: # the pool is full - # do dequeue and enqueue - # shuffle - idx = torch.randperm(self.queue_size) - self.queue_lr = self.queue_lr[idx] - self.queue_gt = self.queue_gt[idx] - # get first b samples - lq_dequeue = self.queue_lr[0:b, :, :, :].clone() - gt_dequeue = self.queue_gt[0:b, :, :, :].clone() - # update the queue - self.queue_lr[0:b, :, :, :] = self.lq.clone() - self.queue_gt[0:b, :, :, :] = self.gt.clone() - - self.lq = lq_dequeue - self.gt = gt_dequeue - else: - # only do enqueue - self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone() - self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone() - self.queue_ptr = self.queue_ptr + b - - @torch.no_grad() - def feed_data(self, data): - """Accept data from dataloader, and then add two-order degradations to obtain LQ images. - """ - if self.is_train and self.opt.get('high_order_degradation', True): - # training data synthesis - self.gt = data['gt'].to(self.device) - self.gt_usm = self.usm_sharpener(self.gt) - - self.kernel1 = data['kernel1'].to(self.device) - self.kernel2 = data['kernel2'].to(self.device) - self.sinc_kernel = data['sinc_kernel'].to(self.device) - - ori_h, ori_w = self.gt.size()[2:4] - - # ----------------------- The first degradation process ----------------------- # - # blur - out = filter2D(self.gt_usm, self.kernel1) - # random resize - updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0] - if updown_type == 'up': - scale = np.random.uniform(1, self.opt['resize_range'][1]) - elif updown_type == 'down': - scale = np.random.uniform(self.opt['resize_range'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, scale_factor=scale, mode=mode) - # add noise - gray_noise_prob = self.opt['gray_noise_prob'] - if np.random.uniform() < self.opt['gaussian_noise_prob']: - out = random_add_gaussian_noise_pt( - out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.opt['poisson_scale_range'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range']) - out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts - out = self.jpeger(out, quality=jpeg_p) - - # ----------------------- The second degradation process ----------------------- # - # blur - if np.random.uniform() < self.opt['second_blur_prob']: - out = filter2D(out, self.kernel2) - # random resize - updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0] - if updown_type == 'up': - scale = np.random.uniform(1, self.opt['resize_range2'][1]) - elif updown_type == 'down': - scale = np.random.uniform(self.opt['resize_range2'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate( - out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode) - # add noise - gray_noise_prob = self.opt['gray_noise_prob2'] - 
if np.random.uniform() < self.opt['gaussian_noise_prob2']: - out = random_add_gaussian_noise_pt( - out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.opt['poisson_scale_range2'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - - # JPEG compression + the final sinc filter - # We also need to resize images to desired sizes. We group [resize back + sinc filter] together - # as one operation. - # We consider two orders: - # 1. [resize back + sinc filter] + JPEG compression - # 2. JPEG compression + [resize back + sinc filter] - # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines. - if np.random.uniform() < 0.5: - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode) - out = filter2D(out, self.sinc_kernel) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = self.jpeger(out, quality=jpeg_p) - else: - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = self.jpeger(out, quality=jpeg_p) - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode) - out = filter2D(out, self.sinc_kernel) - - # clamp and round - self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255. - - # random crop - gt_size = self.opt['gt_size'] - (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size, - self.opt['scale']) - - # training pair pool - self._dequeue_and_enqueue() - # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue - self.gt_usm = self.usm_sharpener(self.gt) - self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract - else: - # for paired training or validation - self.lq = data['lq'].to(self.device) - if 'gt' in data: - self.gt = data['gt'].to(self.device) - self.gt_usm = self.usm_sharpener(self.gt) - - def nondist_validation(self, dataloader, current_iter, tb_logger, save_img): - # do not use the synthetic process during validation - self.is_train = False - super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img) - self.is_train = True - - def optimize_parameters(self, current_iter): - # usm sharpening - l1_gt = self.gt_usm - percep_gt = self.gt_usm - gan_gt = self.gt_usm - if self.opt['l1_gt_usm'] is False: - l1_gt = self.gt - if self.opt['percep_gt_usm'] is False: - percep_gt = self.gt - if self.opt['gan_gt_usm'] is False: - gan_gt = self.gt - - # optimize net_g - for p in self.net_d.parameters(): - p.requires_grad = False - - self.optimizer_g.zero_grad() - self.output = self.net_g(self.lq) - - l_g_total = 0 - loss_dict = OrderedDict() - if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters): - # pixel loss - if self.cri_pix: - l_g_pix = self.cri_pix(self.output, l1_gt) - l_g_total += l_g_pix - loss_dict['l_g_pix'] = l_g_pix - # perceptual loss - if self.cri_perceptual: - l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt) - if l_g_percep is not None: - l_g_total += l_g_percep - loss_dict['l_g_percep'] = 
l_g_percep - if l_g_style is not None: - l_g_total += l_g_style - loss_dict['l_g_style'] = l_g_style - # gan loss - fake_g_pred = self.net_d(self.output) - l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False) - l_g_total += l_g_gan - loss_dict['l_g_gan'] = l_g_gan - - l_g_total.backward() - self.optimizer_g.step() - - # optimize net_d - for p in self.net_d.parameters(): - p.requires_grad = True - - self.optimizer_d.zero_grad() - # real - real_d_pred = self.net_d(gan_gt) - l_d_real = self.cri_gan(real_d_pred, True, is_disc=True) - loss_dict['l_d_real'] = l_d_real - loss_dict['out_d_real'] = torch.mean(real_d_pred.detach()) - l_d_real.backward() - # fake - fake_d_pred = self.net_d(self.output.detach().clone()) # clone for pt1.9 - l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True) - loss_dict['l_d_fake'] = l_d_fake - loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach()) - l_d_fake.backward() - self.optimizer_d.step() - - if self.ema_decay > 0: - self.model_ema(decay=self.ema_decay) - - self.log_dict = self.reduce_loss_dict(loss_dict) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/v5/api.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/v5/api.py deleted file mode 100644 index ed449bcab3fe7b2679f1ffaadc97402f43381869..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/vegalite/v5/api.py +++ /dev/null @@ -1,3434 +0,0 @@ -import warnings - -import hashlib -import io -import json -import jsonschema -import pandas as pd -from toolz.curried import pipe as _pipe -import itertools -import sys -from typing import cast - -# Have to rename it here as else it overlaps with schema.core.Type -from typing import Type as TypingType - -from .schema import core, channels, mixins, Undefined, SCHEMA_URL - -from .data import data_transformers -from ... import utils, expr -from .display import renderers, VEGALITE_VERSION, VEGAEMBED_VERSION, VEGA_VERSION -from .theme import themes - -if sys.version_info >= (3, 11): - from typing import Self -else: - from typing_extensions import Self - - -# ------------------------------------------------------------------------ -# Data Utilities -def _dataset_name(values): - """Generate a unique hash of the data - - Parameters - ---------- - values : list or dict - A list/dict representation of data values. - - Returns - ------- - name : string - A unique name generated from the hash of the values. 
- """ - if isinstance(values, core.InlineDataset): - values = values.to_dict() - if values == [{}]: - return "empty" - values_json = json.dumps(values, sort_keys=True) - hsh = hashlib.md5(values_json.encode()).hexdigest() - return "data-" + hsh - - -def _consolidate_data(data, context): - """If data is specified inline, then move it to context['datasets'] - - This function will modify context in-place, and return a new version of data - """ - values = Undefined - kwds = {} - - if isinstance(data, core.InlineData): - if data.name is Undefined and data.values is not Undefined: - if isinstance(data.values, core.InlineDataset): - values = data.to_dict()["values"] - else: - values = data.values - kwds = {"format": data.format} - - elif isinstance(data, dict): - if "name" not in data and "values" in data: - values = data["values"] - kwds = {k: v for k, v in data.items() if k != "values"} - - if values is not Undefined: - name = _dataset_name(values) - data = core.NamedData(name=name, **kwds) - context.setdefault("datasets", {})[name] = values - - return data - - -def _prepare_data(data, context=None): - """Convert input data to data for use within schema - - Parameters - ---------- - data : - The input dataset in the form of a DataFrame, dictionary, altair data - object, or other type that is recognized by the data transformers. - context : dict (optional) - The to_dict context in which the data is being prepared. This is used - to keep track of information that needs to be passed up and down the - recursive serialization routine, such as global named datasets. - """ - if data is Undefined: - return data - - # convert dataframes or objects with __geo_interface__ to dict - elif isinstance(data, pd.DataFrame) or hasattr(data, "__geo_interface__"): - data = _pipe(data, data_transformers.get()) - - # convert string input to a URLData - elif isinstance(data, str): - data = core.UrlData(data) - - elif hasattr(data, "__dataframe__"): - data = _pipe(data, data_transformers.get()) - - # consolidate inline data to top-level datasets - if context is not None and data_transformers.consolidate_datasets: - data = _consolidate_data(data, context) - - # if data is still not a recognized type, then return - if not isinstance(data, (dict, core.Data)): - warnings.warn("data of type {} not recognized".format(type(data)), stacklevel=1) - - return data - - -# ------------------------------------------------------------------------ -# Aliases & specializations -Bin = core.BinParams -Impute = core.ImputeParams -Title = core.TitleParams - - -class LookupData(core.LookupData): - @utils.use_signature(core.LookupData) - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def to_dict(self, *args, **kwargs): - """Convert the chart to a dictionary suitable for JSON export.""" - copy = self.copy(deep=False) - copy.data = _prepare_data(copy.data, kwargs.get("context")) - return super(LookupData, copy).to_dict(*args, **kwargs) - - -class FacetMapping(core.FacetMapping): - _class_is_valid_at_instantiation = False - - @utils.use_signature(core.FacetMapping) - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def to_dict(self, *args, **kwargs): - copy = self.copy(deep=False) - context = kwargs.get("context", {}) - data = context.get("data", None) - if isinstance(self.row, str): - copy.row = core.FacetFieldDef(**utils.parse_shorthand(self.row, data)) - if isinstance(self.column, str): - copy.column = core.FacetFieldDef(**utils.parse_shorthand(self.column, data)) - return 
super(FacetMapping, copy).to_dict(*args, **kwargs) - - -# ------------------------------------------------------------------------ -# Encoding will contain channel objects that aren't valid at instantiation -core.FacetedEncoding._class_is_valid_at_instantiation = False - -# ------------------------------------------------------------------------ -# These are parameters that are valid at the top level, but are not valid -# for specs that are within a composite chart -# (layer, hconcat, vconcat, facet, repeat) -TOPLEVEL_ONLY_KEYS = {"background", "config", "autosize", "padding", "$schema"} - - -def _get_channels_mapping(): - mapping = {} - for attr in dir(channels): - cls = getattr(channels, attr) - if isinstance(cls, type) and issubclass(cls, core.SchemaBase): - mapping[cls] = attr.replace("Value", "").lower() - return mapping - - -# ------------------------------------------------------------------------- -# Tools for working with parameters -class Parameter(expr.core.OperatorMixin, object): - """A Parameter object""" - - _counter = 0 - - @classmethod - def _get_name(cls): - cls._counter += 1 - return f"param_{cls._counter}" - - def __init__(self, name): - if name is None: - name = self._get_name() - self.name = name - - @utils.deprecation.deprecated( - message="'ref' is deprecated. No need to call '.ref()' anymore." - ) - def ref(self): - "'ref' is deprecated. No need to call '.ref()' anymore." - return self.to_dict() - - def to_dict(self): - if self.param_type == "variable": - return {"expr": self.name} - elif self.param_type == "selection": - return { - "param": self.name.to_dict() - if hasattr(self.name, "to_dict") - else self.name - } - - def __invert__(self): - if self.param_type == "selection": - return SelectionPredicateComposition({"not": {"param": self.name}}) - else: - return expr.core.OperatorMixin.__invert__(self) - - def __and__(self, other): - if self.param_type == "selection": - if isinstance(other, Parameter): - other = {"param": other.name} - return SelectionPredicateComposition({"and": [{"param": self.name}, other]}) - else: - return expr.core.OperatorMixin.__and__(self, other) - - def __or__(self, other): - if self.param_type == "selection": - if isinstance(other, Parameter): - other = {"param": other.name} - return SelectionPredicateComposition({"or": [{"param": self.name}, other]}) - else: - return expr.core.OperatorMixin.__or__(self, other) - - def __repr__(self): - return "Parameter({0!r}, {1})".format(self.name, self.param) - - def _to_expr(self): - return self.name - - def _from_expr(self, expr): - return ParameterExpression(expr=expr) - - def __getattr__(self, field_name): - if field_name.startswith("__") and field_name.endswith("__"): - raise AttributeError(field_name) - _attrexpr = expr.core.GetAttrExpression(self.name, field_name) - # If self is a SelectionParameter and field_name is in its - # fields or encodings list, then we want to return an expression. - if check_fields_and_encodings(self, field_name): - return SelectionExpression(_attrexpr) - return expr.core.GetAttrExpression(self.name, field_name) - - # TODO: Are there any special cases to consider for __getitem__? - # This was copied from v4. - def __getitem__(self, field_name): - return expr.core.GetItemExpression(self.name, field_name) - - -# Enables use of ~, &, | with compositions of selection objects. 
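-# (Illustrative sketch, not part of the original Altair source: once selection
-# parameters exist, the operators enabled here let them be combined into
-# predicates for filters or conditions. Column names are hypothetical.)
-#
-#   import altair as alt
-#   brush = alt.selection_interval(encodings=["x"])
-#   click = alt.selection_point(fields=["Origin"])
-#   keep = brush & ~click          # a SelectionPredicateComposition
-#   # usable in e.g. chart.transform_filter(keep) or alt.condition(keep, ..., ...)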
-class SelectionPredicateComposition(core.PredicateComposition): - def __invert__(self): - return SelectionPredicateComposition({"not": self.to_dict()}) - - def __and__(self, other): - return SelectionPredicateComposition({"and": [self.to_dict(), other.to_dict()]}) - - def __or__(self, other): - return SelectionPredicateComposition({"or": [self.to_dict(), other.to_dict()]}) - - -class ParameterExpression(expr.core.OperatorMixin, object): - def __init__(self, expr): - self.expr = expr - - def to_dict(self): - return {"expr": repr(self.expr)} - - def _to_expr(self): - return repr(self.expr) - - def _from_expr(self, expr): - return ParameterExpression(expr=expr) - - -class SelectionExpression(expr.core.OperatorMixin, object): - def __init__(self, expr): - self.expr = expr - - def to_dict(self): - return {"expr": repr(self.expr)} - - def _to_expr(self): - return repr(self.expr) - - def _from_expr(self, expr): - return SelectionExpression(expr=expr) - - -def check_fields_and_encodings(parameter, field_name): - for prop in ["fields", "encodings"]: - try: - if field_name in getattr(parameter.param.select, prop): - return True - except (AttributeError, TypeError): - pass - - return False - - -# ------------------------------------------------------------------------ -# Top-Level Functions - - -def value(value, **kwargs): - """Specify a value for use in an encoding""" - return dict(value=value, **kwargs) - - -def param( - name=None, - value=Undefined, - bind=Undefined, - empty=Undefined, - expr=Undefined, - **kwds, -): - """Create a named parameter. See https://altair-viz.github.io/user_guide/interactions.html for examples. Although both variable parameters and selection parameters can be created using this 'param' function, to create a selection parameter, it is recommended to use either 'selection_point' or 'selection_interval' instead. - - Parameters - ---------- - name : string (optional) - The name of the parameter. If not specified, a unique name will be - created. - value : any (optional) - The default value of the parameter. If not specified, the parameter - will be created without a default value. - bind : :class:`Binding` (optional) - Binds the parameter to an external input element such as a slider, - selection list or radio button group. - empty : boolean (optional) - For selection parameters, the predicate of empty selections returns - True by default. Override this behavior, by setting this property - 'empty=False'. - expr : :class:`Expr` (optional) - An expression for the value of the parameter. This expression may - include other parameters, in which case the parameter will - automatically update in response to upstream parameter changes. - **kwds : - additional keywords will be used to construct a parameter. If 'select' - is among the keywords, then a selection parameter will be created. - Otherwise, a variable parameter will be created. - - Returns - ------- - parameter: Parameter - The parameter object that can be used in chart creation. 
- """ - parameter = Parameter(name) - - if empty is not Undefined: - parameter.empty = empty - if parameter.empty == "none": - warnings.warn( - """The value of 'empty' should be True or False.""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - parameter.empty = False - elif parameter.empty == "all": - warnings.warn( - """The value of 'empty' should be True or False.""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - parameter.empty = True - elif (parameter.empty is False) or (parameter.empty is True): - pass - else: - raise ValueError("The value of 'empty' should be True or False.") - - if "init" in kwds: - warnings.warn( - """Use 'value' instead of 'init'.""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - if value is Undefined: - kwds["value"] = kwds.pop("init") - else: - # If both 'value' and 'init' are set, we ignore 'init'. - kwds.pop("init") - - if "select" not in kwds: - parameter.param = core.VariableParameter( - name=parameter.name, bind=bind, value=value, expr=expr, **kwds - ) - parameter.param_type = "variable" - elif "views" in kwds: - parameter.param = core.TopLevelSelectionParameter( - name=parameter.name, bind=bind, value=value, expr=expr, **kwds - ) - parameter.param_type = "selection" - else: - parameter.param = core.SelectionParameter( - name=parameter.name, bind=bind, value=value, expr=expr, **kwds - ) - parameter.param_type = "selection" - - return parameter - - -def _selection(type=Undefined, **kwds): - # We separate out the parameter keywords from the selection keywords - param_kwds = {} - - for kwd in {"name", "bind", "value", "empty", "init", "views"}: - if kwd in kwds: - param_kwds[kwd] = kwds.pop(kwd) - - if type == "interval": - select = core.IntervalSelectionConfig(type=type, **kwds) - elif type == "point": - select = core.PointSelectionConfig(type=type, **kwds) - elif type in ["single", "multi"]: - select = core.PointSelectionConfig(type="point", **kwds) - warnings.warn( - """The types 'single' and 'multi' are now - combined and should be specified using "selection_point()".""", - utils.AltairDeprecationWarning, - stacklevel=1, - ) - else: - raise ValueError("""'type' must be 'point' or 'interval'""") - - return param(select=select, **param_kwds) - - -@utils.deprecation.deprecated( - message="""'selection' is deprecated. - Use 'selection_point()' or 'selection_interval()' instead; these functions also include more helpful docstrings.""" -) -def selection(type=Undefined, **kwds): - """ - Users are recommended to use either 'selection_point' or 'selection_interval' instead, depending on the type of parameter they want to create. - - Create a selection parameter. - - Parameters - ---------- - type : enum('point', 'interval') (required) - Determines the default event processing and data query for the - selection. Vega-Lite currently supports two selection types: - * "point" - to select multiple discrete data values; the first - value is selected on click and additional values toggled on - shift-click. - * "interval" - to select a continuous range of data values on - drag. - **kwds : - additional keywords to control the selection. - """ - - return _selection(type=type, **kwds) - - -def selection_interval( - name=None, - value=Undefined, - bind=Undefined, - empty=Undefined, - expr=Undefined, - encodings=Undefined, - on=Undefined, - clear=Undefined, - resolve=Undefined, - mark=Undefined, - translate=Undefined, - zoom=Undefined, - **kwds, -): - """Create an interval selection parameter. 
Selection parameters define data queries that are driven by direct manipulation from user input (e.g., mouse clicks or drags). Interval selection parameters are used to select a continuous range of data values on drag, whereas point selection parameters (`selection_point`) are used to select multiple discrete data values.) - - Parameters - ---------- - name : string (optional) - The name of the parameter. If not specified, a unique name will be - created. - value : any (optional) - The default value of the parameter. If not specified, the parameter - will be created without a default value. - bind : :class:`Binding` (optional) - Binds the parameter to an external input element such as a slider, - selection list or radio button group. - empty : boolean (optional) - For selection parameters, the predicate of empty selections returns - True by default. Override this behavior, by setting this property - 'empty=False'. - expr : :class:`Expr` (optional) - An expression for the value of the parameter. This expression may - include other parameters, in which case the parameter will - automatically update in response to upstream parameter changes. - encodings : List[str] (optional) - A list of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - on : string (optional) - A Vega event stream (object or selector) that triggers the selection. - For interval selections, the event stream must specify a start and end. - clear : string or boolean (optional) - Clears the selection, emptying it of all values. This property can - be an Event Stream or False to disable clear. Default is 'dblclick'. - resolve : enum('global', 'union', 'intersect') (optional) - With layered and multi-view displays, a strategy that determines - how selections' data queries are resolved when applied in a filter - transform, conditional encoding rule, or scale domain. - One of: - - * 'global': only one brush exists for the entire SPLOM. When the - user begins to drag, any previous brushes are cleared, and a - new one is constructed. - * 'union': each cell contains its own brush, and points are - highlighted if they lie within any of these individual brushes. - * 'intersect': each cell contains its own brush, and points are - highlighted only if they fall within all of these individual - brushes. - - The default is 'global'. - mark : :class:`Mark` (optional) - An interval selection also adds a rectangle mark to depict the - extents of the interval. The mark property can be used to - customize the appearance of the mark. - translate : string or boolean (optional) - When truthy, allows a user to interactively move an interval - selection back-and-forth. Can be True, False (to disable panning), - or a Vega event stream definition which must include a start and - end event to trigger continuous panning. Discrete panning (e.g., - pressing the left/right arrow keys) will be supported in future - versions. - The default value is True, which corresponds to - [mousedown, window:mouseup] > window:mousemove! - This default allows users to click and drag within an interval - selection to reposition it. - zoom : string or boolean (optional) - When truthy, allows a user to interactively resize an interval - selection. Can be True, False (to disable zooming), or a Vega - event stream definition. Currently, only wheel events are supported, - but custom event streams can still be used to specify filters, - debouncing, and throttling. 
Future versions will expand the set of - events that can trigger this transformation. - The default value is True, which corresponds to wheel!. This - default allows users to use the mouse wheel to resize an interval - selection. - **kwds : - Additional keywords to control the selection. - - Returns - ------- - parameter: Parameter - The parameter object that can be used in chart creation. - """ - return _selection( - type="interval", - name=name, - value=value, - bind=bind, - empty=empty, - expr=expr, - encodings=encodings, - on=on, - clear=clear, - resolve=resolve, - mark=mark, - translate=translate, - zoom=zoom, - **kwds, - ) - - -def selection_point( - name=None, - value=Undefined, - bind=Undefined, - empty=Undefined, - expr=Undefined, - encodings=Undefined, - fields=Undefined, - on=Undefined, - clear=Undefined, - resolve=Undefined, - toggle=Undefined, - nearest=Undefined, - **kwds, -): - """Create a point selection parameter. Selection parameters define data queries that are driven by direct manipulation from user input (e.g., mouse clicks or drags). Point selection parameters are used to select multiple discrete data values; the first value is selected on click and additional values toggled on shift-click. To select a continuous range of data values on drag interval selection parameters (`selection_interval`) can be used instead. - - Parameters - ---------- - name : string (optional) - The name of the parameter. If not specified, a unique name will be - created. - value : any (optional) - The default value of the parameter. If not specified, the parameter - will be created without a default value. - bind : :class:`Binding` (optional) - Binds the parameter to an external input element such as a slider, - selection list or radio button group. - empty : boolean (optional) - For selection parameters, the predicate of empty selections returns - True by default. Override this behavior, by setting this property - 'empty=False'. - expr : :class:`Expr` (optional) - An expression for the value of the parameter. This expression may - include other parameters, in which case the parameter will - automatically update in response to upstream parameter changes. - encodings : List[str] (optional) - A list of encoding channels. The corresponding data field values - must match for a data tuple to fall within the selection. - fields : List[str] (optional) - A list of field names whose values must match for a data tuple to - fall within the selection. - on : string (optional) - A Vega event stream (object or selector) that triggers the selection. - For interval selections, the event stream must specify a start and end. - clear : string or boolean (optional) - Clears the selection, emptying it of all values. This property can - be an Event Stream or False to disable clear. Default is 'dblclick'. - resolve : enum('global', 'union', 'intersect') (optional) - With layered and multi-view displays, a strategy that determines - how selections' data queries are resolved when applied in a filter - transform, conditional encoding rule, or scale domain. - One of: - - * 'global': only one brush exists for the entire SPLOM. When the - user begins to drag, any previous brushes are cleared, and a - new one is constructed. - * 'union': each cell contains its own brush, and points are - highlighted if they lie within any of these individual brushes. - * 'intersect': each cell contains its own brush, and points are - highlighted only if they fall within all of these individual - brushes. - - The default is 'global'. 
- toggle : string or boolean (optional) - Controls whether data values should be toggled (inserted or - removed from a point selection) or only ever inserted into - point selections. - One of: - - * True (default): the toggle behavior, which corresponds to - "event.shiftKey". As a result, data values are toggled - when the user interacts with the shift-key pressed. - * False: disables toggling behaviour; the selection will - only ever contain a single data value corresponding - to the most recent interaction. - * A Vega expression which is re-evaluated as the user interacts. - If the expression evaluates to True, the data value is - toggled into or out of the point selection. If the expression - evaluates to False, the point selection is first cleared, and - the data value is then inserted. For example, setting the - value to the Vega expression True will toggle data values - without the user pressing the shift-key. - - nearest : boolean (optional) - When true, an invisible voronoi diagram is computed to accelerate - discrete selection. The data value nearest the mouse cursor is - added to the selection. The default is False, which means that - data values must be interacted with directly (e.g., clicked on) - to be added to the selection. - **kwds : - Additional keywords to control the selection. - - Returns - ------- - parameter: Parameter - The parameter object that can be used in chart creation. - """ - return _selection( - type="point", - name=name, - value=value, - bind=bind, - empty=empty, - expr=expr, - encodings=encodings, - fields=fields, - on=on, - clear=clear, - resolve=resolve, - toggle=toggle, - nearest=nearest, - **kwds, - ) - - -@utils.deprecation.deprecated( - message="'selection_multi' is deprecated. Use 'selection_point'" -) -@utils.use_signature(core.PointSelectionConfig) -def selection_multi(**kwargs): - """'selection_multi' is deprecated. Use 'selection_point'""" - return _selection(type="point", **kwargs) - - -@utils.deprecation.deprecated( - message="'selection_single' is deprecated. Use 'selection_point'" -) -@utils.use_signature(core.PointSelectionConfig) -def selection_single(**kwargs): - """'selection_single' is deprecated. Use 'selection_point'""" - return _selection(type="point", **kwargs) - - -@utils.use_signature(core.Binding) -def binding(input, **kwargs): - """A generic binding""" - return core.Binding(input=input, **kwargs) - - -@utils.use_signature(core.BindCheckbox) -def binding_checkbox(**kwargs): - """A checkbox binding""" - return core.BindCheckbox(input="checkbox", **kwargs) - - -@utils.use_signature(core.BindRadioSelect) -def binding_radio(**kwargs): - """A radio button binding""" - return core.BindRadioSelect(input="radio", **kwargs) - - -@utils.use_signature(core.BindRadioSelect) -def binding_select(**kwargs): - """A select binding""" - return core.BindRadioSelect(input="select", **kwargs) - - -@utils.use_signature(core.BindRange) -def binding_range(**kwargs): - """A range binding""" - return core.BindRange(input="range", **kwargs) - - -# TODO: update the docstring -def condition(predicate, if_true, if_false, **kwargs): - """A conditional attribute or encoding - - Parameters - ---------- - predicate: Selection, PredicateComposition, expr.Expression, dict, or string - the selection predicate or test predicate for the condition. - if a string is passed, it will be treated as a test operand. 
- if_true: - the spec or object to use if the selection predicate is true - if_false: - the spec or object to use if the selection predicate is false - **kwargs: - additional keyword args are added to the resulting dict - - Returns - ------- - spec: dict or VegaLiteSchema - the spec that describes the condition - """ - test_predicates = (str, expr.Expression, core.PredicateComposition) - - if isinstance(predicate, Parameter): - if predicate.param_type == "selection" or predicate.param.expr is Undefined: - condition = {"param": predicate.name} - if "empty" in kwargs: - condition["empty"] = kwargs.pop("empty") - elif isinstance(predicate.empty, bool): - condition["empty"] = predicate.empty - else: - condition = {"test": predicate.param.expr} - elif isinstance(predicate, test_predicates): - condition = {"test": predicate} - elif isinstance(predicate, dict): - condition = predicate - else: - raise NotImplementedError( - "condition predicate of type {}" "".format(type(predicate)) - ) - - if isinstance(if_true, core.SchemaBase): - # convert to dict for now; the from_dict call below will wrap this - # dict in the appropriate schema - if_true = if_true.to_dict() - elif isinstance(if_true, str): - if isinstance(if_false, str): - raise ValueError( - "A field cannot be used for both the `if_true` and `if_false` values of a condition. One of them has to specify a `value` or `datum` definition." - ) - else: - if_true = utils.parse_shorthand(if_true) - if_true.update(kwargs) - condition.update(if_true) - - if isinstance(if_false, core.SchemaBase): - # For the selection, the channel definitions all allow selections - # already. So use this SchemaBase wrapper if possible. - selection = if_false.copy() - selection.condition = condition - elif isinstance(if_false, str): - selection = {"condition": condition, "shorthand": if_false} - selection.update(kwargs) - else: - selection = dict(condition=condition, **if_false) - - return selection - - -# -------------------------------------------------------------------- -# Top-level objects - - -class TopLevelMixin(mixins.ConfigMethodMixin): - """Mixin for top-level chart objects such as Chart, LayeredChart, etc.""" - - _class_is_valid_at_instantiation = False - - def to_dict(self, *args, **kwargs) -> dict: - """Convert the chart to a dictionary suitable for JSON export""" - # We make use of three context markers: - # - 'data' points to the data that should be referenced for column type - # inference. - # - 'top_level' is a boolean flag that is assumed to be true; if it's - # true then a "$schema" arg is added to the dict. - # - 'datasets' is a dict of named datasets that should be inserted - # in the top-level object - - # note: not a deep copy because we want datasets and data arguments to - # be passed by reference - context = kwargs.get("context", {}).copy() - context.setdefault("datasets", {}) - is_top_level = context.get("top_level", True) - - # TopLevelMixin instance does not necessarily have copy defined but due to how - # Altair is set up this should hold. Too complex to type hint right now - copy = self.copy(deep=False) # type: ignore[attr-defined] - original_data = getattr(copy, "data", Undefined) - copy.data = _prepare_data(original_data, context) - - if original_data is not Undefined: - context["data"] = original_data - - # remaining to_dict calls are not at top level - context["top_level"] = False - kwargs["context"] = context - - # TopLevelMixin instance does not necessarily have to_dict defined - # but due to how Altair is set up this should hold. 
- # Too complex to type hint right now - dct = super(TopLevelMixin, copy).to_dict(*args, **kwargs) # type: ignore[misc] - - # TODO: following entries are added after validation. Should they be validated? - if is_top_level: - # since this is top-level we add $schema if it's missing - if "$schema" not in dct: - dct["$schema"] = SCHEMA_URL - - # apply theme from theme registry - the_theme = themes.get() - # Use assert to tell type checkers that it is not None. Holds true - # as there is always a default theme set when importing Altair - assert the_theme is not None - dct = utils.update_nested(the_theme(), dct, copy=True) - - # update datasets - if context["datasets"]: - dct.setdefault("datasets", {}).update(context["datasets"]) - - return dct - - def to_html( - self, - base_url="https://cdn.jsdelivr.net/npm", - output_div="vis", - embed_options=None, - json_kwds=None, - fullhtml=True, - requirejs=False, - ) -> str: - return utils.spec_to_html( - self.to_dict(), - mode="vega-lite", - vegalite_version=VEGALITE_VERSION, - vegaembed_version=VEGAEMBED_VERSION, - vega_version=VEGA_VERSION, - base_url=base_url, - output_div=output_div, - embed_options=embed_options, - json_kwds=json_kwds, - fullhtml=fullhtml, - requirejs=requirejs, - ) - - def save( - self, - fp, - format=None, - override_data_transformer=True, - scale_factor=1.0, - vegalite_version=VEGALITE_VERSION, - vega_version=VEGA_VERSION, - vegaembed_version=VEGAEMBED_VERSION, - **kwargs, - ): - """Save a chart to file in a variety of formats - - Supported formats are json, html, png, svg, pdf; the last three require - the altair_saver package to be installed. - - Parameters - ---------- - fp : string filename or file-like object - file in which to write the chart. - format : string (optional) - the format to write: one of ['json', 'html', 'png', 'svg', 'pdf']. - If not specified, the format will be determined from the filename. - override_data_transformer : `boolean` (optional) - If True (default), then the save action will be done with - the MaxRowsError disabled. If False, then do not change the data - transformer. - scale_factor : float - For svg or png formats, scale the image by this factor when saving. - This can be used to control the size or resolution of the output. - Default is 1.0 - **kwargs : - Additional keyword arguments are passed to the output method - associated with the specified format. - - """ - from ...utils.save import save - - kwds = dict( - chart=self, - fp=fp, - format=format, - scale_factor=scale_factor, - vegalite_version=vegalite_version, - vega_version=vega_version, - vegaembed_version=vegaembed_version, - **kwargs, - ) - - # By default we override the data transformer. This makes it so - # that save() will succeed even for large datasets that would - # normally trigger a MaxRowsError - if override_data_transformer: - with data_transformers.disable_max_rows(): - result = save(**kwds) - else: - result = save(**kwds) - return result - - # Fallback for when rendering fails; the full repr is too long to be - # useful in nearly all cases. 
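-    # (Illustrative usage sketch, not part of the original Altair source: the
-    #  save() method above infers the output format from the file extension;
-    #  html and json need no extra packages. DataFrame and file names are
-    #  hypothetical.)
-    #
-    #   import altair as alt, pandas as pd
-    #   df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
-    #   chart = alt.Chart(df).mark_point().encode(x="a", y="b")
-    #   chart.save("chart.html")
-    #   chart.save("chart.json")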
- def __repr__(self): - return "alt.{}(...)".format(self.__class__.__name__) - - # Layering and stacking - def __add__(self, other): - if not isinstance(other, TopLevelMixin): - raise ValueError("Only Chart objects can be layered.") - return layer(self, other) - - def __and__(self, other): - if not isinstance(other, TopLevelMixin): - raise ValueError("Only Chart objects can be concatenated.") - return vconcat(self, other) - - def __or__(self, other): - if not isinstance(other, TopLevelMixin): - raise ValueError("Only Chart objects can be concatenated.") - return hconcat(self, other) - - def repeat( - self, - repeat=Undefined, - row=Undefined, - column=Undefined, - layer=Undefined, - columns=Undefined, - **kwargs, - ) -> "RepeatChart": - """Return a RepeatChart built from the chart - - Fields within the chart can be set to correspond to the row or - column using `alt.repeat('row')` and `alt.repeat('column')`. - - Parameters - ---------- - repeat : list - a list of data column names to be repeated. This cannot be - used along with the ``row``, ``column`` or ``layer`` argument. - row : list - a list of data column names to be mapped to the row facet - column : list - a list of data column names to be mapped to the column facet - layer : list - a list of data column names to be layered. This cannot be - used along with the ``row``, ``column`` or ``repeat`` argument. - columns : int - the maximum number of columns before wrapping. Only referenced - if ``repeat`` is specified. - **kwargs : - additional keywords passed to RepeatChart. - - Returns - ------- - chart : RepeatChart - a repeated chart. - """ - repeat_specified = repeat is not Undefined - rowcol_specified = row is not Undefined or column is not Undefined - layer_specified = layer is not Undefined - - if repeat_specified and rowcol_specified: - raise ValueError( - "repeat argument cannot be combined with row/column argument." - ) - elif repeat_specified and layer_specified: - raise ValueError("repeat argument cannot be combined with layer argument.") - - if repeat_specified: - repeat = repeat - elif layer_specified: - repeat = core.LayerRepeatMapping(layer=layer, row=row, column=column) - else: - repeat = core.RepeatMapping(row=row, column=column) - - return RepeatChart(spec=self, repeat=repeat, columns=columns, **kwargs) - - def properties(self, **kwargs) -> Self: - """Set top-level properties of the Chart. - - Argument names and types are the same as class initialization. - """ - # ignore type as copy comes from another class for subclasses of TopLevelMixin - copy = self.copy(deep=False) # type: ignore[attr-defined] - for key, val in kwargs.items(): - if key == "selection" and isinstance(val, Parameter): - # TODO: Can this be removed - # For backward compatibility with old selection interface. - setattr(copy, key, {val.name: val.selection}) - else: - # Don't validate data, because it hasn't been processed. 
- if key != "data": - # ignore type as validate_property comes from SchemaBase, - # not from TopLevelMixin - self.validate_property(key, val) # type: ignore[attr-defined] - setattr(copy, key, val) - return copy - - def project( - self, - type=Undefined, - center=Undefined, - clipAngle=Undefined, - clipExtent=Undefined, - coefficient=Undefined, - distance=Undefined, - fraction=Undefined, - lobes=Undefined, - parallel=Undefined, - precision=Undefined, - radius=Undefined, - ratio=Undefined, - reflectX=Undefined, - reflectY=Undefined, - rotate=Undefined, - scale=Undefined, - spacing=Undefined, - tilt=Undefined, - translate=Undefined, - **kwds, - ) -> Self: - """Add a geographic projection to the chart. - - This is generally used either with ``mark_geoshape`` or with the - ``latitude``/``longitude`` encodings. - - Available projection types are - ['albers', 'albersUsa', 'azimuthalEqualArea', 'azimuthalEquidistant', - 'conicConformal', 'conicEqualArea', 'conicEquidistant', 'equalEarth', 'equirectangular', - 'gnomonic', 'identity', 'mercator', 'orthographic', 'stereographic', 'transverseMercator'] - - Parameters - ---------- - type : ProjectionType - The cartographic projection to use. This value is case-insensitive, for example - `"albers"` and `"Albers"` indicate the same projection type. You can find all valid - projection types [in the - documentation](https://vega.github.io/vega-lite/docs/projection.html#projection-types). - - **Default value:** `equalEarth` - center : List(float) - Sets the projection’s center to the specified center, a two-element array of - longitude and latitude in degrees. - - **Default value:** `[0, 0]` - clipAngle : float - Sets the projection’s clipping circle radius to the specified angle in degrees. If - `null`, switches to [antimeridian](http://bl.ocks.org/mbostock/3788999) cutting - rather than small-circle clipping. - clipExtent : List(List(float)) - Sets the projection’s viewport clip extent to the specified bounds in pixels. The - extent bounds are specified as an array `[[x0, y0], [x1, y1]]`, where `x0` is the - left-side of the viewport, `y0` is the top, `x1` is the right and `y1` is the - bottom. If `null`, no viewport clipping is performed. - coefficient : float - - distance : float - - fraction : float - - lobes : float - - parallel : float - - precision : Mapping(required=[length]) - Sets the threshold for the projection’s [adaptive - resampling](http://bl.ocks.org/mbostock/3795544) to the specified value in pixels. - This value corresponds to the [Douglas–Peucker - distance](http://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm). - If precision is not specified, returns the projection’s current resampling - precision which defaults to `√0.5 ≅ 0.70710…`. - radius : float - - ratio : float - - reflectX : boolean - - reflectY : boolean - - rotate : List(float) - Sets the projection’s three-axis rotation to the specified angles, which must be a - two- or three-element array of numbers [`lambda`, `phi`, `gamma`] specifying the - rotation angles in degrees about each spherical axis. (These correspond to yaw, - pitch and roll.) - - **Default value:** `[0, 0, 0]` - scale : float - Sets the projection's scale (zoom) value, overriding automatic fitting. - - spacing : float - - tilt : float - - translate : List(float) - Sets the projection's translation (pan) value, overriding automatic fitting. 
- - """ - projection = core.Projection( - center=center, - clipAngle=clipAngle, - clipExtent=clipExtent, - coefficient=coefficient, - distance=distance, - fraction=fraction, - lobes=lobes, - parallel=parallel, - precision=precision, - radius=radius, - ratio=ratio, - reflectX=reflectX, - reflectY=reflectY, - rotate=rotate, - scale=scale, - spacing=spacing, - tilt=tilt, - translate=translate, - type=type, - **kwds, - ) - return self.properties(projection=projection) - - def _add_transform(self, *transforms): - """Copy the chart and add specified transforms to chart.transform""" - copy = self.copy(deep=["transform"]) - if copy.transform is Undefined: - copy.transform = [] - copy.transform.extend(transforms) - return copy - - def transform_aggregate( - self, aggregate=Undefined, groupby=Undefined, **kwds - ) -> Self: - """ - Add an :class:`AggregateTransform` to the schema. - - Parameters - ---------- - aggregate : List(:class:`AggregatedFieldDef`) - Array of objects that define fields to aggregate. - groupby : List(string) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - **kwds : - additional keywords are converted to aggregates using standard - shorthand parsing. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - The aggregate transform allows you to specify transforms directly using - the same shorthand syntax as used in encodings: - - >>> import altair as alt - >>> chart1 = alt.Chart().transform_aggregate( - ... mean_acc='mean(Acceleration)', - ... groupby=['Origin'] - ... ) - >>> print(chart1.transform[0].to_json()) # doctest: +NORMALIZE_WHITESPACE - { - "aggregate": [ - { - "as": "mean_acc", - "field": "Acceleration", - "op": "mean" - } - ], - "groupby": [ - "Origin" - ] - } - - It also supports including AggregatedFieldDef instances or dicts directly, - so you can create the above transform like this: - - >>> chart2 = alt.Chart().transform_aggregate( - ... [alt.AggregatedFieldDef(field='Acceleration', op='mean', - ... **{'as': 'mean_acc'})], - ... groupby=['Origin'] - ... ) - >>> chart2.transform == chart1.transform - True - - See Also - -------- - alt.AggregateTransform : underlying transform object - - """ - if aggregate is Undefined: - aggregate = [] - for key, val in kwds.items(): - parsed = utils.parse_shorthand(val) - dct = { - "as": key, - "field": parsed.get("field", Undefined), - "op": parsed.get("aggregate", Undefined), - } - aggregate.append(core.AggregatedFieldDef(**dct)) - return self._add_transform( - core.AggregateTransform(aggregate=aggregate, groupby=groupby) - ) - - def transform_bin(self, as_=Undefined, field=Undefined, bin=True, **kwargs) -> Self: - """ - Add a :class:`BinTransform` to the schema. - - Parameters - ---------- - as_ : anyOf(string, List(string)) - The output fields at which to write the start and end bin values. - bin : anyOf(boolean, :class:`BinParams`) - An object indicating bin properties, or simply ``true`` for using default bin - parameters. - field : string - The data field to bin. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> chart = alt.Chart().transform_bin("x_binned", "x") - >>> chart.transform[0] - BinTransform({ - as: 'x_binned', - bin: True, - field: 'x' - }) - - >>> chart = alt.Chart().transform_bin("x_binned", "x", - ... 
bin=alt.Bin(maxbins=10)) - >>> chart.transform[0] - BinTransform({ - as: 'x_binned', - bin: BinParams({ - maxbins: 10 - }), - field: 'x' - }) - - See Also - -------- - alt.BinTransform : underlying transform object - - """ - if as_ is not Undefined: - if "as" in kwargs: - raise ValueError( - "transform_bin: both 'as_' and 'as' passed as arguments." - ) - kwargs["as"] = as_ - kwargs["bin"] = bin - kwargs["field"] = field - return self._add_transform(core.BinTransform(**kwargs)) - - def transform_calculate(self, as_=Undefined, calculate=Undefined, **kwargs) -> Self: - """ - Add a :class:`CalculateTransform` to the schema. - - Parameters - ---------- - as_ : string - The field for storing the computed formula value. - calculate : string or alt.expr expression - A `expression `__ - string. Use the variable ``datum`` to refer to the current data object. - **kwargs - transforms can also be passed by keyword argument; see Examples - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> from altair import datum, expr - - >>> chart = alt.Chart().transform_calculate(y = 2 * expr.sin(datum.x)) - >>> chart.transform[0] - CalculateTransform({ - as: 'y', - calculate: (2 * sin(datum.x)) - }) - - It's also possible to pass the ``CalculateTransform`` arguments directly: - - >>> kwds = {'as': 'y', 'calculate': '2 * sin(datum.x)'} - >>> chart = alt.Chart().transform_calculate(**kwds) - >>> chart.transform[0] - CalculateTransform({ - as: 'y', - calculate: '2 * sin(datum.x)' - }) - - As the first form is easier to write and understand, that is the - recommended method. - - See Also - -------- - alt.CalculateTransform : underlying transform object - """ - if as_ is Undefined: - as_ = kwargs.pop("as", Undefined) - elif "as" in kwargs: - raise ValueError( - "transform_calculate: both 'as_' and 'as' passed as arguments." - ) - if as_ is not Undefined or calculate is not Undefined: - dct = {"as": as_, "calculate": calculate} - self = self._add_transform(core.CalculateTransform(**dct)) - for as_, calculate in kwargs.items(): - dct = {"as": as_, "calculate": calculate} - self = self._add_transform(core.CalculateTransform(**dct)) - return self - - def transform_density( - self, - density, - as_=Undefined, - bandwidth=Undefined, - counts=Undefined, - cumulative=Undefined, - extent=Undefined, - groupby=Undefined, - maxsteps=Undefined, - minsteps=Undefined, - steps=Undefined, - ) -> Self: - """Add a :class:`DensityTransform` to the spec. - - Parameters - ---------- - density : str - The data field for which to perform density estimation. - as_ : [str, str] - The output fields for the sample value and corresponding density estimate. - **Default value:** ``["value", "density"]`` - bandwidth : float - The bandwidth (standard deviation) of the Gaussian kernel. If unspecified or set to - zero, the bandwidth value is automatically estimated from the input data using - Scott’s rule. - counts : boolean - A boolean flag indicating if the output values should be probability estimates - (false) or smoothed counts (true). - **Default value:** ``false`` - cumulative : boolean - A boolean flag indicating whether to produce density estimates (false) or cumulative - density estimates (true). - **Default value:** ``false`` - extent : List([float, float]) - A [min, max] domain from which to sample the distribution. If unspecified, the - extent will be determined by the observed minimum and maximum values of the density - value field. 
- groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - maxsteps : float - The maximum number of samples to take along the extent domain for plotting the - density. **Default value:** ``200`` - minsteps : float - The minimum number of samples to take along the extent domain for plotting the - density. **Default value:** ``25`` - steps : float - The exact number of samples to take along the extent domain for plotting the - density. If specified, overrides both minsteps and maxsteps to set an exact number - of uniform samples. Potentially useful in conjunction with a fixed extent to ensure - consistent sample points for stacked densities. - """ - return self._add_transform( - core.DensityTransform( - density=density, - bandwidth=bandwidth, - counts=counts, - cumulative=cumulative, - extent=extent, - groupby=groupby, - maxsteps=maxsteps, - minsteps=minsteps, - steps=steps, - **{"as": as_}, - ) - ) - - def transform_impute( - self, - impute, - key, - frame=Undefined, - groupby=Undefined, - keyvals=Undefined, - method=Undefined, - value=Undefined, - ) -> Self: - """ - Add an :class:`ImputeTransform` to the schema. - - Parameters - ---------- - impute : string - The data field for which the missing values should be imputed. - key : string - A key field that uniquely identifies data objects within a group. - Missing key values (those occurring in the data but not in the current group) will - be imputed. - frame : List(anyOf(None, float)) - A frame specification as a two-element array used to control the window over which - the specified method is applied. The array entries should either be a number - indicating the offset from the current data object, or null to indicate unbounded - rows preceding or following the current data object. For example, the value ``[-5, - 5]`` indicates that the window should include five objects preceding and five - objects following the current object. - **Default value:** : ``[null, null]`` indicating that the window includes all - objects. - groupby : List(string) - An optional array of fields by which to group the values. - Imputation will then be performed on a per-group basis. - keyvals : anyOf(List(Mapping(required=[])), :class:`ImputeSequence`) - Defines the key values that should be considered for imputation. - An array of key values or an object defining a `number sequence - `__. - If provided, this will be used in addition to the key values observed within the - input data. If not provided, the values will be derived from all unique values of - the ``key`` field. For ``impute`` in ``encoding``, the key field is the x-field if - the y-field is imputed, or vice versa. - If there is no impute grouping, this property *must* be specified. - method : :class:`ImputeMethod` - The imputation method to use for the field value of imputed data objects. - One of ``value``, ``mean``, ``median``, ``max`` or ``min``. - **Default value:** ``"value"`` - value : Mapping(required=[]) - The field value to use when the imputation ``method`` is ``"value"``. 
- - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.ImputeTransform : underlying transform object - """ - return self._add_transform( - core.ImputeTransform( - impute=impute, - key=key, - frame=frame, - groupby=groupby, - keyvals=keyvals, - method=method, - value=value, - ) - ) - - def transform_joinaggregate( - self, joinaggregate=Undefined, groupby=Undefined, **kwargs - ) -> Self: - """ - Add a :class:`JoinAggregateTransform` to the schema. - - Parameters - ---------- - joinaggregate : List(:class:`JoinAggregateFieldDef`) - The definition of the fields in the join aggregate, and what calculations to use. - groupby : List(string) - The data fields for partitioning the data objects into separate groups. If - unspecified, all data points will be in a single group. - **kwargs - joinaggregates can also be passed by keyword argument; see Examples. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> chart = alt.Chart().transform_joinaggregate(x='sum(y)') - >>> chart.transform[0] - JoinAggregateTransform({ - joinaggregate: [JoinAggregateFieldDef({ - as: 'x', - field: 'y', - op: 'sum' - })] - }) - - See Also - -------- - alt.JoinAggregateTransform : underlying transform object - """ - if joinaggregate is Undefined: - joinaggregate = [] - for key, val in kwargs.items(): - parsed = utils.parse_shorthand(val) - dct = { - "as": key, - "field": parsed.get("field", Undefined), - "op": parsed.get("aggregate", Undefined), - } - joinaggregate.append(core.JoinAggregateFieldDef(**dct)) - return self._add_transform( - core.JoinAggregateTransform(joinaggregate=joinaggregate, groupby=groupby) - ) - - # TODO: Update docstring - def transform_filter(self, filter, **kwargs) -> Self: - """ - Add a :class:`FilterTransform` to the schema. - - Parameters - ---------- - filter : a filter expression or :class:`PredicateComposition` - The `filter` property must be one of the predicate definitions: - (1) a string or alt.expr expression - (2) a range predicate - (3) a selection predicate - (4) a logical operand combining (1)-(3) - (5) a Selection object - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.FilterTransform : underlying transform object - - """ - if isinstance(filter, Parameter): - new_filter = {"param": filter.name} - if "empty" in kwargs: - new_filter["empty"] = kwargs.pop("empty") - elif isinstance(filter.empty, bool): - new_filter["empty"] = filter.empty - filter = new_filter - return self._add_transform(core.FilterTransform(filter=filter, **kwargs)) - - def transform_flatten(self, flatten, as_=Undefined) -> Self: - """Add a :class:`FlattenTransform` to the schema. - - Parameters - ---------- - flatten : List(string) - An array of one or more data fields containing arrays to flatten. - If multiple fields are specified, their array values should have a parallel - structure, ideally with the same length. - If the lengths of parallel arrays do not match, - the longest array will be used with ``null`` values added for missing entries. - as : List(string) - The output field names for extracted array values. 
- **Default value:** The field name of the corresponding array field - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.FlattenTransform : underlying transform object - """ - return self._add_transform( - core.FlattenTransform(flatten=flatten, **{"as": as_}) - ) - - def transform_fold(self, fold, as_=Undefined) -> Self: - """Add a :class:`FoldTransform` to the spec. - - Parameters - ---------- - fold : List(string) - An array of data fields indicating the properties to fold. - as : [string, string] - The output field names for the key and value properties produced by the fold - transform. Default: ``["key", "value"]`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_pivot : pivot transform - opposite of fold. - alt.FoldTransform : underlying transform object - """ - return self._add_transform(core.FoldTransform(fold=fold, **{"as": as_})) - - def transform_loess( - self, - on, - loess, - as_=Undefined, - bandwidth=Undefined, - groupby=Undefined, - ) -> Self: - """Add a :class:`LoessTransform` to the spec. - - Parameters - ---------- - on : str - The data field of the independent variable to use a predictor. - loess : str - The data field of the dependent variable to smooth. - as_ : [str, str] - The output field names for the smoothed points generated by the loess transform. - **Default value:** The field names of the input x and y values. - bandwidth : float - A bandwidth parameter in the range ``[0, 1]`` that determines the amount of - smoothing. **Default value:** ``0.3`` - groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_regression: regression transform - alt.LoessTransform : underlying transform object - """ - return self._add_transform( - core.LoessTransform( - loess=loess, on=on, bandwidth=bandwidth, groupby=groupby, **{"as": as_} - ) - ) - - def transform_lookup( - self, - lookup=Undefined, - from_=Undefined, - as_=Undefined, - default=Undefined, - **kwargs, - ) -> Self: - """Add a :class:`DataLookupTransform` or :class:`SelectionLookupTransform` to the chart - - Parameters - ---------- - lookup : string - Key in primary data source. - from_ : anyOf(:class:`LookupData`, :class:`LookupSelection`) - Secondary data reference. - as_ : anyOf(string, List(string)) - The output fields on which to store the looked up data values. - - For data lookups, this property may be left blank if ``from_.fields`` - has been specified (those field names will be used); if ``from_.fields`` - has not been specified, ``as_`` must be a string. - - For selection lookups, this property is optional: if unspecified, - looked up values will be stored under a property named for the selection; - and if specified, it must correspond to ``from_.fields``. - default : string - The default value to use if lookup fails. **Default value:** ``null`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.DataLookupTransform : underlying transform object - alt.SelectionLookupTransform : underlying transform object - """ - if as_ is not Undefined: - if "as" in kwargs: - raise ValueError( - "transform_lookup: both 'as_' and 'as' passed as arguments." 
- ) - kwargs["as"] = as_ - if from_ is not Undefined: - if "from" in kwargs: - raise ValueError( - "transform_lookup: both 'from_' and 'from' passed as arguments." - ) - kwargs["from"] = from_ - kwargs["lookup"] = lookup - kwargs["default"] = default - return self._add_transform(core.LookupTransform(**kwargs)) - - def transform_pivot( - self, - pivot, - value, - groupby=Undefined, - limit=Undefined, - op=Undefined, - ) -> Self: - """Add a :class:`PivotTransform` to the chart. - - Parameters - ---------- - pivot : str - The data field to pivot on. The unique values of this field become new field names - in the output stream. - value : str - The data field to populate pivoted fields. The aggregate values of this field become - the values of the new pivoted fields. - groupby : List(str) - The optional data fields to group by. If not specified, a single group containing - all data objects will be used. - limit : float - An optional parameter indicating the maximum number of pivoted fields to generate. - The default ( ``0`` ) applies no limit. The pivoted ``pivot`` names are sorted in - ascending order prior to enforcing the limit. - **Default value:** ``0`` - op : string - The aggregation operation to apply to grouped ``value`` field values. - **Default value:** ``sum`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_fold : fold transform - opposite of pivot. - alt.PivotTransform : underlying transform object - """ - return self._add_transform( - core.PivotTransform( - pivot=pivot, value=value, groupby=groupby, limit=limit, op=op - ) - ) - - def transform_quantile( - self, - quantile, - as_=Undefined, - groupby=Undefined, - probs=Undefined, - step=Undefined, - ) -> Self: - """Add a :class:`QuantileTransform` to the chart - - Parameters - ---------- - quantile : str - The data field for which to perform quantile estimation. - as : [str, str] - The output field names for the probability and quantile values. - groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - probs : List(float) - An array of probabilities in the range (0, 1) for which to compute quantile values. - If not specified, the *step* parameter will be used. - step : float - A probability step size (default 0.01) for sampling quantile values. All values from - one-half the step size up to 1 (exclusive) will be sampled. This parameter is only - used if the *probs* parameter is not provided. **Default value:** ``["prob", "value"]`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.QuantileTransform : underlying transform object - """ - return self._add_transform( - core.QuantileTransform( - quantile=quantile, - groupby=groupby, - probs=probs, - step=step, - **{"as": as_}, - ) - ) - - def transform_regression( - self, - on, - regression, - as_=Undefined, - extent=Undefined, - groupby=Undefined, - method=Undefined, - order=Undefined, - params=Undefined, - ) -> Self: - """Add a :class:`RegressionTransform` to the chart. - - Parameters - ---------- - on : str - The data field of the independent variable to use a predictor. - regression : str - The data field of the dependent variable to predict. - as_ : [str, str] - The output field names for the smoothed points generated by the regression - transform. **Default value:** The field names of the input x and y values. 
- extent : [float, float] - A [min, max] domain over the independent (x) field for the starting and ending - points of the generated trend line. - groupby : List(str) - The data fields to group by. If not specified, a single group containing all data - objects will be used. - method : enum('linear', 'log', 'exp', 'pow', 'quad', 'poly') - The functional form of the regression model. One of ``"linear"``, ``"log"``, - ``"exp"``, ``"pow"``, ``"quad"``, or ``"poly"``. **Default value:** ``"linear"`` - order : float - The polynomial order (number of coefficients) for the 'poly' method. - **Default value:** ``3`` - params : boolean - A boolean flag indicating if the transform should return the regression model - parameters (one object per group), rather than trend line points. - The resulting objects include a ``coef`` array of fitted coefficient values - (starting with the intercept term and then including terms of increasing order) - and an ``rSquared`` value (indicating the total variance explained by the model). - **Default value:** ``false`` - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - Chart.transform_loess : LOESS transform - alt.RegressionTransform : underlying transform object - """ - return self._add_transform( - core.RegressionTransform( - regression=regression, - on=on, - extent=extent, - groupby=groupby, - method=method, - order=order, - params=params, - **{"as": as_}, - ) - ) - - def transform_sample(self, sample=1000) -> Self: - """ - Add a :class:`SampleTransform` to the schema. - - Parameters - ---------- - sample : float - The maximum number of data objects to include in the sample. Default: 1000. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.SampleTransform : underlying transform object - """ - return self._add_transform(core.SampleTransform(sample)) - - def transform_stack( - self, as_, stack, groupby, offset=Undefined, sort=Undefined - ) -> Self: - """ - Add a :class:`StackTransform` to the schema. - - Parameters - ---------- - as_ : anyOf(string, List(string)) - Output field names. This can be either a string or an array of strings with - two elements denoting the name for the fields for stack start and stack end - respectively. - If a single string(eg."val") is provided, the end field will be "val_end". - stack : string - The field which is stacked. - groupby : List(string) - The data fields to group by. - offset : enum('zero', 'center', 'normalize') - Mode for stacking marks. Default: 'zero'. - sort : List(:class:`SortField`) - Field that determines the order of leaves in the stacked charts. - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - See Also - -------- - alt.StackTransform : underlying transform object - """ - return self._add_transform( - core.StackTransform( - stack=stack, groupby=groupby, offset=offset, sort=sort, **{"as": as_} - ) - ) - - def transform_timeunit( - self, - as_=Undefined, - field=Undefined, - timeUnit=Undefined, - **kwargs, - ) -> Self: - """ - Add a :class:`TimeUnitTransform` to the schema. - - Parameters - ---------- - as_ : string - The output field to write the timeUnit value. - field : string - The data field to apply time unit. - timeUnit : :class:`TimeUnit` - The timeUnit. 
- **kwargs - transforms can also be passed by keyword argument; see Examples - - Returns - ------- - self : Chart object - returns chart to allow for chaining - - Examples - -------- - >>> import altair as alt - >>> from altair import datum, expr - - >>> chart = alt.Chart().transform_timeunit(month='month(date)') - >>> chart.transform[0] - TimeUnitTransform({ - as: 'month', - field: 'date', - timeUnit: 'month' - }) - - It's also possible to pass the ``TimeUnitTransform`` arguments directly; - this is most useful in cases where the desired field name is not a - valid python identifier: - - >>> kwds = {'as': 'month', 'timeUnit': 'month', 'field': 'The Month'} - >>> chart = alt.Chart().transform_timeunit(**kwds) - >>> chart.transform[0] - TimeUnitTransform({ - as: 'month', - field: 'The Month', - timeUnit: 'month' - }) - - As the first form is easier to write and understand, that is the - recommended method. - - See Also - -------- - alt.TimeUnitTransform : underlying transform object - - """ - if as_ is Undefined: - as_ = kwargs.pop("as", Undefined) - else: - if "as" in kwargs: - raise ValueError( - "transform_timeunit: both 'as_' and 'as' passed as arguments." - ) - if as_ is not Undefined: - dct = {"as": as_, "timeUnit": timeUnit, "field": field} - self = self._add_transform(core.TimeUnitTransform(**dct)) - for as_, shorthand in kwargs.items(): - dct = utils.parse_shorthand( - shorthand, - parse_timeunits=True, - parse_aggregates=False, - parse_types=False, - ) - dct.pop("type", None) - dct["as"] = as_ - if "timeUnit" not in dct: - raise ValueError("'{}' must include a valid timeUnit".format(shorthand)) - self = self._add_transform(core.TimeUnitTransform(**dct)) - return self - - def transform_window( - self, - window=Undefined, - frame=Undefined, - groupby=Undefined, - ignorePeers=Undefined, - sort=Undefined, - **kwargs, - ) -> Self: - """Add a :class:`WindowTransform` to the schema - - Parameters - ---------- - window : List(:class:`WindowFieldDef`) - The definition of the fields in the window, and what calculations to use. - frame : List(anyOf(None, float)) - A frame specification as a two-element array indicating how the sliding window - should proceed. The array entries should either be a number indicating the offset - from the current data object, or null to indicate unbounded rows preceding or - following the current data object. The default value is ``[null, 0]``, indicating - that the sliding window includes the current object and all preceding objects. The - value ``[-5, 5]`` indicates that the window should include five objects preceding - and five objects following the current object. Finally, ``[null, null]`` indicates - that the window frame should always include all data objects. The only operators - affected are the aggregation operations and the ``first_value``, ``last_value``, and - ``nth_value`` window operations. The other window operations are not affected by - this. - - **Default value:** : ``[null, 0]`` (includes the current object and all preceding - objects) - groupby : List(string) - The data fields for partitioning the data objects into separate windows. If - unspecified, all data points will be in a single group. - ignorePeers : boolean - Indicates if the sliding window frame should ignore peer values. (Peer values are - those considered identical by the sort criteria). The default is false, causing the - window frame to expand to include all peer values. If set to true, the window frame - will be defined by offset values only. 
This setting only affects those operations - that depend on the window frame, namely aggregation operations and the first_value, - last_value, and nth_value window operations. - - **Default value:** ``false`` - sort : List(:class:`SortField`) - A sort field definition for sorting data objects within a window. If two data - objects are considered equal by the comparator, they are considered “peer” values of - equal rank. If sort is not specified, the order is undefined: data objects are - processed in the order they are observed and none are considered peers (the - ignorePeers parameter is ignored and treated as if set to ``true`` ). - **kwargs - transforms can also be passed by keyword argument; see Examples - - Examples - -------- - A cumulative line chart - - >>> import altair as alt - >>> import numpy as np - >>> import pandas as pd - >>> data = pd.DataFrame({'x': np.arange(100), - ... 'y': np.random.randn(100)}) - >>> chart = alt.Chart(data).mark_line().encode( - ... x='x:Q', - ... y='ycuml:Q' - ... ).transform_window( - ... ycuml='sum(y)' - ... ) - >>> chart.transform[0] - WindowTransform({ - window: [WindowFieldDef({ - as: 'ycuml', - field: 'y', - op: 'sum' - })] - }) - - """ - if kwargs: - if window is Undefined: - window = [] - for as_, shorthand in kwargs.items(): - kwds = {"as": as_} - kwds.update( - utils.parse_shorthand( - shorthand, - parse_aggregates=False, - parse_window_ops=True, - parse_timeunits=False, - parse_types=False, - ) - ) - window.append(core.WindowFieldDef(**kwds)) - - return self._add_transform( - core.WindowTransform( - window=window, - frame=frame, - groupby=groupby, - ignorePeers=ignorePeers, - sort=sort, - ) - ) - - # Display-related methods - - def _repr_mimebundle_(self, include=None, exclude=None): - """Return a MIME bundle for display in Jupyter frontends.""" - # Catch errors explicitly to get around issues in Jupyter frontend - # see https://github.com/ipython/ipython/issues/11038 - try: - dct = self.to_dict() - except Exception: - utils.display_traceback(in_ipython=True) - return {} - else: - return renderers.get()(dct) - - def display(self, renderer=Undefined, theme=Undefined, actions=Undefined, **kwargs): - """Display chart in Jupyter notebook or JupyterLab - - Parameters are passed as options to vega-embed within supported frontends. - See https://github.com/vega/vega-embed#options for details. - - Parameters - ---------- - renderer : string ('canvas' or 'svg') - The renderer to use - theme : string - The Vega theme name to use; see https://github.com/vega/vega-themes - actions : bool or dict - Specify whether action links ("Open In Vega Editor", etc.) are - included in the view. - **kwargs : - Additional parameters are also passed to vega-embed as options. - - """ - from IPython.display import display - - if renderer is not Undefined: - kwargs["renderer"] = renderer - if theme is not Undefined: - kwargs["theme"] = theme - if actions is not Undefined: - kwargs["actions"] = actions - - if kwargs: - options = renderers.options.copy() - options["embed_options"] = options.get("embed_options", {}).copy() - options["embed_options"].update(kwargs) - with renderers.enable(**options): - display(self) - else: - display(self) - - @utils.deprecation.deprecated(message="'serve' is deprecated. Use 'show' instead.") - def serve( - self, - ip="127.0.0.1", - port=8888, - n_retries=50, - files=None, - jupyter_warning=True, - open_browser=True, - http_server=None, - **kwargs, - ): - """ - 'serve' is deprecated. Use 'show' instead. 
- - Open a browser window and display a rendering of the chart - - Parameters - ---------- - html : string - HTML to serve - ip : string (default = '127.0.0.1') - ip address at which the HTML will be served. - port : int (default = 8888) - the port at which to serve the HTML - n_retries : int (default = 50) - the number of nearby ports to search if the specified port - is already in use. - files : dictionary (optional) - dictionary of extra content to serve - jupyter_warning : bool (optional) - if True (default), then print a warning if this is used - within the Jupyter notebook - open_browser : bool (optional) - if True (default), then open a web browser to the given HTML - http_server : class (optional) - optionally specify an HTTPServer class to use for showing the - figure. The default is Python's basic HTTPServer. - **kwargs : - additional keyword arguments passed to the save() method - - """ - from ...utils.server import serve - - html = io.StringIO() - self.save(html, format="html", **kwargs) - html.seek(0) - - serve( - html.read(), - ip=ip, - port=port, - n_retries=n_retries, - files=files, - jupyter_warning=jupyter_warning, - open_browser=open_browser, - http_server=http_server, - ) - - def show(self, embed_opt=None, open_browser=None): - """Show the chart in an external browser window. - - This requires a recent version of the altair_viewer package. - - Parameters - ---------- - embed_opt : dict (optional) - The Vega embed options that control the dispay of the chart. - open_browser : bool (optional) - Specify whether a browser window should be opened. If not specified, - a browser window will be opened only if the server is not already - connected to a browser. - """ - try: - import altair_viewer # type: ignore - except ImportError as err: - raise ValueError( - "'show' method requires the altair_viewer package. " - "See http://github.com/altair-viz/altair_viewer" - ) from err - altair_viewer.show(self, embed_opt=embed_opt, open_browser=open_browser) - - @utils.use_signature(core.Resolve) - def _set_resolve(self, **kwargs): - """Copy the chart and update the resolve property with kwargs""" - if not hasattr(self, "resolve"): - raise ValueError( - "{} object has no attribute " "'resolve'".format(self.__class__) - ) - copy = self.copy(deep=["resolve"]) - if copy.resolve is Undefined: - copy.resolve = core.Resolve() - for key, val in kwargs.items(): - copy.resolve[key] = val - return copy - - @utils.use_signature(core.AxisResolveMap) - def resolve_axis(self, *args, **kwargs) -> Self: - return self._set_resolve(axis=core.AxisResolveMap(*args, **kwargs)) - - @utils.use_signature(core.LegendResolveMap) - def resolve_legend(self, *args, **kwargs) -> Self: - return self._set_resolve(legend=core.LegendResolveMap(*args, **kwargs)) - - @utils.use_signature(core.ScaleResolveMap) - def resolve_scale(self, *args, **kwargs) -> Self: - return self._set_resolve(scale=core.ScaleResolveMap(*args, **kwargs)) - - -class _EncodingMixin: - @utils.use_signature(core.FacetedEncoding) - def encode(self, *args, **kwargs) -> Self: - # Convert args to kwargs based on their types. 
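        # For example, encode(alt.X("x:Q"), alt.Y("y:Q"), color="c:N") becomes
        # encode(x=alt.X("x:Q"), y=alt.Y("y:Q"), color="c:N").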
- kwargs = utils.infer_encoding_types(args, kwargs, channels) - - # get a copy of the dict representation of the previous encoding - # ignore type as copy method comes from SchemaBase - copy = self.copy(deep=["encoding"]) # type: ignore[attr-defined] - encoding = copy._get("encoding", {}) - if isinstance(encoding, core.VegaLiteSchema): - encoding = {k: v for k, v in encoding._kwds.items() if v is not Undefined} - - # update with the new encodings, and apply them to the copy - encoding.update(kwargs) - copy.encoding = core.FacetedEncoding(**encoding) - return copy - - def facet( - self, - facet=Undefined, - row=Undefined, - column=Undefined, - data=Undefined, - columns=Undefined, - **kwargs, - ) -> "FacetChart": - """Create a facet chart from the current chart. - - Faceted charts require data to be specified at the top level; if data - is not specified, the data from the current chart will be used at the - top level. - - Parameters - ---------- - facet : string or alt.Facet (optional) - The data column to use as an encoding for a wrapped facet. - If specified, then neither row nor column may be specified. - column : string or alt.Column (optional) - The data column to use as an encoding for a column facet. - May be combined with row argument, but not with facet argument. - row : string or alt.Column (optional) - The data column to use as an encoding for a row facet. - May be combined with column argument, but not with facet argument. - data : string or dataframe (optional) - The dataset to use for faceting. If not supplied, then data must - be specified in the top-level chart that calls this method. - columns : integer - the maximum number of columns for a wrapped facet. - - Returns - ------- - self : - for chaining - """ - facet_specified = facet is not Undefined - rowcol_specified = row is not Undefined or column is not Undefined - - if facet_specified and rowcol_specified: - raise ValueError( - "facet argument cannot be combined with row/column argument." - ) - - # Remove "ignore" statement once Undefined is no longer typed as Any - if data is Undefined: # type: ignore - # Remove "ignore" statement once Undefined is no longer typed as Any - if self.data is Undefined: # type: ignore - raise ValueError( - "Facet charts require data to be specified at the top level." - ) - # ignore type as copy comes from another class - self = self.copy(deep=False) # type: ignore[attr-defined] - # Remove "ignore" statement once Undefined is no longer typed as Any - data, self.data = self.data, Undefined # type: ignore - - if facet_specified: - if isinstance(facet, str): - facet = channels.Facet(facet) - else: - facet = FacetMapping(row=row, column=column) - - return FacetChart(spec=self, facet=facet, data=data, columns=columns, **kwargs) - - -class Chart( - TopLevelMixin, _EncodingMixin, mixins.MarkMethodMixin, core.TopLevelUnitSpec -): - """Create a basic Altair/Vega-Lite chart. - - Although it is possible to set all Chart properties as constructor attributes, - it is more idiomatic to use methods such as ``mark_point()``, ``encode()``, - ``transform_filter()``, ``properties()``, etc. See Altair's documentation - for details and examples: http://altair-viz.github.io/. - - Parameters - ---------- - data : Data - An object describing the data source - mark : AnyMark - A string describing the mark type (one of `"bar"`, `"circle"`, `"square"`, `"tick"`, - `"line"`, * `"area"`, `"point"`, `"rule"`, `"geoshape"`, and `"text"`) or a - MarkDef object. 
- encoding : FacetedEncoding - A key-value mapping between encoding channels and definition of fields. - autosize : anyOf(AutosizeType, AutoSizeParams) - Sets how the visualization size should be determined. If a string, should be one of - `"pad"`, `"fit"` or `"none"`. Object values can additionally specify parameters for - content sizing and automatic resizing. `"fit"` is only supported for single and - layered views that don't use `rangeStep`. Default value: `pad` - background : string - CSS color property to use as the background of visualization. - - **Default value:** none (transparent) - config : Config - Vega-Lite configuration object. This property can only be defined at the top-level - of a specification. - description : string - Description of this mark for commenting purpose. - height : float - The height of a visualization. - name : string - Name of the visualization for later reference. - padding : Padding - The default visualization padding, in pixels, from the edge of the visualization - canvas to the data rectangle. If a number, specifies padding for all sides. If an - object, the value should have the format `{"left": 5, "top": 5, "right": 5, - "bottom": 5}` to specify padding for each side of the visualization. Default - value: `5` - projection : Projection - An object defining properties of geographic projection. Works with `"geoshape"` - marks and `"point"` or `"line"` marks that have a channel (one or more of `"X"`, - `"X2"`, `"Y"`, `"Y2"`) with type `"latitude"`, or `"longitude"`. - selection : Mapping(required=[]) - A key-value mapping between selection names and definitions. - title : anyOf(string, TitleParams) - Title for the plot. - transform : List(Transform) - An array of data transformations such as filter and new field calculation. - width : float - The width of a visualization. - """ - - def __init__( - self, - data=Undefined, - encoding=Undefined, - mark=Undefined, - width=Undefined, - height=Undefined, - **kwargs, - ): - super(Chart, self).__init__( - data=data, - encoding=encoding, - mark=mark, - width=width, - height=height, - **kwargs, - ) - - _counter = 0 - - @classmethod - def _get_name(cls): - cls._counter += 1 - return f"view_{cls._counter}" - - @classmethod - def from_dict(cls, dct, validate=True) -> "Chart": # type: ignore[override] # Not the same signature as SchemaBase.from_dict. Would ideally be aligned in the future - """Construct class from a dictionary representation - - Parameters - ---------- - dct : dictionary - The dict from which to construct the class - validate : boolean - If True (default), then validate the input against the schema. - - Returns - ------- - obj : Chart object - The wrapped schema - - Raises - ------ - jsonschema.ValidationError : - if validate=True and dct does not conform to the schema - """ - for class_ in TopLevelMixin.__subclasses__(): - if class_ is Chart: - class_ = cast(TypingType[TopLevelMixin], super(Chart, cls)) - try: - # TopLevelMixin classes don't necessarily have from_dict defined - # but all classes which are used here have due to how Altair is - # designed. Too complex to type check right now. 
- return class_.from_dict(dct, validate=validate) # type: ignore[attr-defined] - except jsonschema.ValidationError: - pass - - # As a last resort, try using the Root vegalite object - return core.Root.from_dict(dct, validate) - - def to_dict(self, *args, **kwargs) -> dict: - """Convert the chart to a dictionary suitable for JSON export.""" - context = kwargs.get("context", {}) - if self.data is Undefined and "data" not in context: - # No data specified here or in parent: inject empty data - # for easier specification of datum encodings. - copy = self.copy(deep=False) - copy.data = core.InlineData(values=[{}]) - return super(Chart, copy).to_dict(*args, **kwargs) - return super().to_dict(*args, **kwargs) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params: - return self - copy = self.copy(deep=["params"]) - if copy.params is Undefined: - copy.params = [] - - for s in params: - copy.params.append(s.param) - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *params) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*params) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - -def _check_if_valid_subspec(spec, classname): - """Check if the spec is a valid sub-spec. - - If it is not, then raise a ValueError - """ - err = ( - 'Objects with "{0}" attribute cannot be used within {1}. ' - "Consider defining the {0} attribute in the {1} object instead." - ) - - if not isinstance(spec, (core.SchemaBase, dict)): - raise ValueError("Only chart objects can be used in {0}.".format(classname)) - for attr in TOPLEVEL_ONLY_KEYS: - if isinstance(spec, core.SchemaBase): - val = getattr(spec, attr, Undefined) - else: - val = spec.get(attr, Undefined) - if val is not Undefined: - raise ValueError(err.format(attr, classname)) - - -def _check_if_can_be_layered(spec): - """Check if the spec can be layered.""" - - def _get(spec, attr): - if isinstance(spec, core.SchemaBase): - return spec._get(attr) - else: - return spec.get(attr, Undefined) - - encoding = _get(spec, "encoding") - if encoding is not Undefined: - for channel in ["row", "column", "facet"]: - if _get(encoding, channel) is not Undefined: - raise ValueError( - "Faceted charts cannot be layered. Instead, layer the charts before faceting." - ) - if isinstance(spec, (Chart, LayerChart)): - return - - if not isinstance(spec, (core.SchemaBase, dict)): - raise ValueError("Only chart objects can be layered.") - if _get(spec, "facet") is not Undefined: - raise ValueError( - "Faceted charts cannot be layered. Instead, layer the charts before faceting." - ) - if isinstance(spec, FacetChart) or _get(spec, "facet") is not Undefined: - raise ValueError( - "Faceted charts cannot be layered. 
Instead, layer the charts before faceting." - ) - if isinstance(spec, RepeatChart) or _get(spec, "repeat") is not Undefined: - raise ValueError( - "Repeat charts cannot be layered. Instead, layer the charts before repeating." - ) - if isinstance(spec, ConcatChart) or _get(spec, "concat") is not Undefined: - raise ValueError( - "Concatenated charts cannot be layered. Instead, layer the charts before concatenating." - ) - if isinstance(spec, HConcatChart) or _get(spec, "hconcat") is not Undefined: - raise ValueError( - "Concatenated charts cannot be layered. Instead, layer the charts before concatenating." - ) - if isinstance(spec, VConcatChart) or _get(spec, "vconcat") is not Undefined: - raise ValueError( - "Concatenated charts cannot be layered. Instead, layer the charts before concatenating." - ) - - -class RepeatChart(TopLevelMixin, core.TopLevelRepeatSpec): - """A chart repeated across rows and columns with small changes""" - - # Because TopLevelRepeatSpec is defined as a union as of Vega-Lite schema 4.9, - # we set the arguments explicitly here. - # TODO: Should we instead use tools/schemapi/codegen._get_args? - @utils.use_signature(core.TopLevelRepeatSpec) - def __init__( - self, - repeat=Undefined, - spec=Undefined, - align=Undefined, - autosize=Undefined, - background=Undefined, - bounds=Undefined, - center=Undefined, - columns=Undefined, - config=Undefined, - data=Undefined, - datasets=Undefined, - description=Undefined, - name=Undefined, - padding=Undefined, - params=Undefined, - resolve=Undefined, - spacing=Undefined, - title=Undefined, - transform=Undefined, - usermeta=Undefined, - **kwds, - ): - _check_if_valid_subspec(spec, "RepeatChart") - _spec_as_list = [spec] - params, _spec_as_list = _combine_subchart_params(params, _spec_as_list) - spec = _spec_as_list[0] - if isinstance(spec, (Chart, LayerChart)): - params = _repeat_names(params, repeat, spec) - super(RepeatChart, self).__init__( - repeat=repeat, - spec=spec, - align=align, - autosize=autosize, - background=background, - bounds=bounds, - center=center, - columns=columns, - config=config, - data=data, - datasets=datasets, - description=description, - name=name, - padding=padding, - params=params, - resolve=resolve, - spacing=spacing, - title=title, - transform=transform, - usermeta=usermeta, - **kwds, - ) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - copy = self.copy(deep=False) - copy.spec = copy.spec.interactive(name=name, bind_x=bind_x, bind_y=bind_y) - return copy - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or self.spec is Undefined: - return self - copy = self.copy() - copy.spec = copy.spec.add_params(*params) - return copy.copy() - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. 
Use 'add_params' instead.""" - return self.add_params(*selections) - - -def repeat(repeater="repeat"): - """Tie a channel to the row or column within a repeated chart - - The output of this should be passed to the ``field`` attribute of - a channel. - - Parameters - ---------- - repeater : {'row'|'column'|'repeat'|'layer'} - The repeater to tie the field to. Default is 'repeat'. - - Returns - ------- - repeat : RepeatRef object - """ - if repeater not in ["row", "column", "repeat", "layer"]: - raise ValueError("repeater must be one of ['row', 'column', 'repeat', 'layer']") - return core.RepeatRef(repeat=repeater) - - -class ConcatChart(TopLevelMixin, core.TopLevelConcatSpec): - """A chart with horizontally-concatenated facets""" - - @utils.use_signature(core.TopLevelConcatSpec) - def __init__(self, data=Undefined, concat=(), columns=Undefined, **kwargs): - # TODO: move common data to top level? - for spec in concat: - _check_if_valid_subspec(spec, "ConcatChart") - super(ConcatChart, self).__init__( - data=data, concat=list(concat), columns=columns, **kwargs - ) - self.data, self.concat = _combine_subchart_data(self.data, self.concat) - self.params, self.concat = _combine_subchart_params(self.params, self.concat) - - def __ior__(self, other): - _check_if_valid_subspec(other, "ConcatChart") - self.concat.append(other) - self.data, self.concat = _combine_subchart_data(self.data, self.concat) - self.params, self.concat = _combine_subchart_params(self.params, self.concat) - return self - - def __or__(self, other): - copy = self.copy(deep=["concat"]) - copy |= other - return copy - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.concat: - return self - copy = self.copy() - copy.concat = [chart.add_params(*params) for chart in copy.concat] - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def concat(*charts, **kwargs): - """Concatenate charts horizontally""" - return ConcatChart(concat=charts, **kwargs) - - -class HConcatChart(TopLevelMixin, core.TopLevelHConcatSpec): - """A chart with horizontally-concatenated facets""" - - @utils.use_signature(core.TopLevelHConcatSpec) - def __init__(self, data=Undefined, hconcat=(), **kwargs): - # TODO: move common data to top level? 
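        # (Data shared by every subchart is hoisted to the top level by
        # _combine_subchart_data below.)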
- for spec in hconcat: - _check_if_valid_subspec(spec, "HConcatChart") - super(HConcatChart, self).__init__(data=data, hconcat=list(hconcat), **kwargs) - self.data, self.hconcat = _combine_subchart_data(self.data, self.hconcat) - self.params, self.hconcat = _combine_subchart_params(self.params, self.hconcat) - - def __ior__(self, other): - _check_if_valid_subspec(other, "HConcatChart") - self.hconcat.append(other) - self.data, self.hconcat = _combine_subchart_data(self.data, self.hconcat) - self.params, self.hconcat = _combine_subchart_params(self.params, self.hconcat) - return self - - def __or__(self, other): - copy = self.copy(deep=["hconcat"]) - copy |= other - return copy - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.hconcat: - return self - copy = self.copy() - copy.hconcat = [chart.add_params(*params) for chart in copy.hconcat] - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def hconcat(*charts, **kwargs): - """Concatenate charts horizontally""" - return HConcatChart(hconcat=charts, **kwargs) - - -class VConcatChart(TopLevelMixin, core.TopLevelVConcatSpec): - """A chart with vertically-concatenated facets""" - - @utils.use_signature(core.TopLevelVConcatSpec) - def __init__(self, data=Undefined, vconcat=(), **kwargs): - # TODO: move common data to top level? - for spec in vconcat: - _check_if_valid_subspec(spec, "VConcatChart") - super(VConcatChart, self).__init__(data=data, vconcat=list(vconcat), **kwargs) - self.data, self.vconcat = _combine_subchart_data(self.data, self.vconcat) - self.params, self.vconcat = _combine_subchart_params(self.params, self.vconcat) - - def __iand__(self, other): - _check_if_valid_subspec(other, "VConcatChart") - self.vconcat.append(other) - self.data, self.vconcat = _combine_subchart_data(self.data, self.vconcat) - self.params, self.vconcat = _combine_subchart_params(self.params, self.vconcat) - return self - - def __and__(self, other): - copy = self.copy(deep=["vconcat"]) - copy &= other - return copy - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. 
- bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - encodings = [] - if bind_x: - encodings.append("x") - if bind_y: - encodings.append("y") - return self.add_params(selection_interval(bind="scales", encodings=encodings)) - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.vconcat: - return self - copy = self.copy() - copy.vconcat = [chart.add_params(*params) for chart in copy.vconcat] - return copy - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def vconcat(*charts, **kwargs): - """Concatenate charts vertically""" - return VConcatChart(vconcat=charts, **kwargs) - - -class LayerChart(TopLevelMixin, _EncodingMixin, core.TopLevelLayerSpec): - """A Chart with layers within a single panel""" - - @utils.use_signature(core.TopLevelLayerSpec) - def __init__(self, data=Undefined, layer=(), **kwargs): - # TODO: move common data to top level? - # TODO: check for conflicting interaction - for spec in layer: - _check_if_valid_subspec(spec, "LayerChart") - _check_if_can_be_layered(spec) - super(LayerChart, self).__init__(data=data, layer=list(layer), **kwargs) - self.data, self.layer = _combine_subchart_data(self.data, self.layer) - # Currently (Vega-Lite 5.5) the same param can't occur on two layers - self.layer = _remove_duplicate_params(self.layer) - self.params, self.layer = _combine_subchart_params(self.params, self.layer) - - # Some properties are not allowed within layer; we'll move to parent. - layer_props = ("height", "width", "view") - combined_dict, self.layer = _remove_layer_props(self, self.layer, layer_props) - - for prop in combined_dict: - self[prop] = combined_dict[prop] - - def __iadd__(self, other): - _check_if_valid_subspec(other, "LayerChart") - _check_if_can_be_layered(other) - self.layer.append(other) - self.data, self.layer = _combine_subchart_data(self.data, self.layer) - self.params, self.layer = _combine_subchart_params(self.params, self.layer) - return self - - def __add__(self, other): - copy = self.copy(deep=["layer"]) - copy += other - return copy - - def add_layers(self, *layers) -> Self: - copy = self.copy(deep=["layer"]) - for layer in layers: - copy += layer - return copy - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. 
- bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - if not self.layer: - raise ValueError( - "LayerChart: cannot call interactive() until a " "layer is defined" - ) - copy = self.copy(deep=["layer"]) - copy.layer[0] = copy.layer[0].interactive( - name=name, bind_x=bind_x, bind_y=bind_y - ) - return copy - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or not self.layer: - return self - copy = self.copy() - copy.layer[0] = copy.layer[0].add_params(*params) - return copy.copy() - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def layer(*charts, **kwargs): - """layer multiple charts""" - return LayerChart(layer=charts, **kwargs) - - -class FacetChart(TopLevelMixin, core.TopLevelFacetSpec): - """A Chart with layers within a single panel""" - - @utils.use_signature(core.TopLevelFacetSpec) - def __init__( - self, - data=Undefined, - spec=Undefined, - facet=Undefined, - params=Undefined, - **kwargs, - ): - _check_if_valid_subspec(spec, "FacetChart") - _spec_as_list = [spec] - params, _spec_as_list = _combine_subchart_params(params, _spec_as_list) - spec = _spec_as_list[0] - super(FacetChart, self).__init__( - data=data, spec=spec, facet=facet, params=params, **kwargs - ) - - def interactive(self, name=None, bind_x=True, bind_y=True) -> Self: - """Make chart axes scales interactive - - Parameters - ---------- - name : string - The parameter name to use for the axes scales. This name should be - unique among all parameters within the chart. - bind_x : boolean, default True - If true, then bind the interactive scales to the x-axis - bind_y : boolean, default True - If true, then bind the interactive scales to the y-axis - - Returns - ------- - chart : - copy of self, with interactive axes added - - """ - copy = self.copy(deep=False) - copy.spec = copy.spec.interactive(name=name, bind_x=bind_x, bind_y=bind_y) - return copy - - def add_params(self, *params) -> Self: - """Add one or more parameters to the chart.""" - if not params or self.spec is Undefined: - return self - copy = self.copy() - copy.spec = copy.spec.add_params(*params) - return copy.copy() - - @utils.deprecation.deprecated( - message="'add_selection' is deprecated. Use 'add_params' instead." - ) - def add_selection(self, *selections) -> Self: - """'add_selection' is deprecated. Use 'add_params' instead.""" - return self.add_params(*selections) - - -def topo_feature(url, feature, **kwargs): - """A convenience function for extracting features from a topojson url - - Parameters - ---------- - url : string - An URL from which to load the data set. - - feature : string - The name of the TopoJSON object set to convert to a GeoJSON feature collection. For - example, in a map of the world, there may be an object set named `"countries"`. - Using the feature property, we can extract this set and generate a GeoJSON feature - object for each country. 
- - **kwargs : - additional keywords passed to TopoDataFormat - """ - return core.UrlData( - url=url, format=core.TopoDataFormat(type="topojson", feature=feature, **kwargs) - ) - - -def _combine_subchart_data(data, subcharts): - def remove_data(subchart): - if subchart.data is not Undefined: - subchart = subchart.copy() - subchart.data = Undefined - return subchart - - if not subcharts: - # No subcharts = nothing to do. - pass - elif data is Undefined: - # Top level has no data; all subchart data must - # be identical to proceed. - subdata = subcharts[0].data - if subdata is not Undefined and all(c.data is subdata for c in subcharts): - data = subdata - subcharts = [remove_data(c) for c in subcharts] - else: - # Top level has data; subchart data must be either - # undefined or identical to proceed. - if all(c.data is Undefined or c.data is data for c in subcharts): - subcharts = [remove_data(c) for c in subcharts] - - return data, subcharts - - -def _viewless_dict(param): - d = param.to_dict() - d.pop("views", None) - return d - - -def _needs_name(subchart): - # Only `Chart` objects need a name - if (subchart.name is not Undefined) or (not isinstance(subchart, Chart)): - return False - - # Variable parameters won't receive a views property. - if all(isinstance(p, core.VariableParameter) for p in subchart.params): - return False - - return True - - -# Convert SelectionParameters to TopLevelSelectionParameters with a views property. -def _prepare_to_lift(param): - param = param.copy() - - if isinstance(param, core.VariableParameter): - return param - - if isinstance(param, core.SelectionParameter): - return core.TopLevelSelectionParameter(**param.to_dict(), views=[]) - - if param.views is Undefined: - param.views = [] - - return param - - -def _remove_duplicate_params(layer): - subcharts = [subchart.copy() for subchart in layer] - found_params = [] - - for subchart in subcharts: - if (not hasattr(subchart, "params")) or (subchart.params is Undefined): - continue - - params = [] - - # Ensure the same selection parameter doesn't appear twice - for param in subchart.params: - if isinstance(param, core.VariableParameter): - params.append(param) - continue - - p = param.copy() - pd = _viewless_dict(p) - - if pd not in found_params: - params.append(p) - found_params.append(pd) - - if len(params) == 0: - subchart.params = Undefined - else: - subchart.params = params - - return subcharts - - -def _combine_subchart_params(params, subcharts): - if params is Undefined: - params = [] - - # List of triples related to params, (param, dictionary minus views, views) - param_info = [] - - # Put parameters already found into `param_info` list. - for param in params: - p = _prepare_to_lift(param) - param_info.append( - ( - p, - _viewless_dict(p), - [] if isinstance(p, core.VariableParameter) else p.views, - ) - ) - - subcharts = [subchart.copy() for subchart in subcharts] - - for subchart in subcharts: - if (not hasattr(subchart, "params")) or (subchart.params is Undefined): - continue - - if _needs_name(subchart): - subchart.name = subchart._get_name() - - for param in subchart.params: - p = _prepare_to_lift(param) - pd = _viewless_dict(p) - - dlist = [d for _, d, _ in param_info] - found = pd in dlist - - if isinstance(p, core.VariableParameter) and found: - continue - - if isinstance(p, core.VariableParameter) and not found: - param_info.append((p, pd, [])) - continue - - # At this stage in the loop, p must be a TopLevelSelectionParameter. 
- - if isinstance(subchart, Chart) and (subchart.name not in p.views): - p.views.append(subchart.name) - - if found: - i = dlist.index(pd) - _, _, old_views = param_info[i] - new_views = [v for v in p.views if v not in old_views] - old_views += new_views - else: - param_info.append((p, pd, p.views)) - - subchart.params = Undefined - - for p, _, v in param_info: - if len(v) > 0: - p.views = v - - subparams = [p for p, _, _ in param_info] - - if len(subparams) == 0: - subparams = Undefined - - return subparams, subcharts - - -def _get_repeat_strings(repeat): - if isinstance(repeat, list): - return repeat - elif isinstance(repeat, core.LayerRepeatMapping): - klist = ["row", "column", "layer"] - elif isinstance(repeat, core.RepeatMapping): - klist = ["row", "column"] - rclist = [k for k in klist if repeat[k] is not Undefined] - rcstrings = [[f"{k}_{v}" for v in repeat[k]] for k in rclist] - return ["".join(s) for s in itertools.product(*rcstrings)] - - -def _extend_view_name(v, r, spec): - # prevent the same extension from happening more than once - if isinstance(spec, Chart): - if v.endswith("child__" + r): - return v - else: - return f"{v}_child__{r}" - elif isinstance(spec, LayerChart): - if v.startswith("child__" + r): - return v - else: - return f"child__{r}_{v}" - - -def _repeat_names(params, repeat, spec): - if params is Undefined: - return params - - repeat = _get_repeat_strings(repeat) - params_named = [] - - for param in params: - if not isinstance(param, core.TopLevelSelectionParameter): - params_named.append(param) - continue - p = param.copy() - views = [] - repeat_strings = _get_repeat_strings(repeat) - for v in param.views: - if isinstance(spec, Chart): - if any(v.endswith(f"child__{r}") for r in repeat_strings): - views.append(v) - else: - views += [_extend_view_name(v, r, spec) for r in repeat_strings] - elif isinstance(spec, LayerChart): - if any(v.startswith(f"child__{r}") for r in repeat_strings): - views.append(v) - else: - views += [_extend_view_name(v, r, spec) for r in repeat_strings] - - p.views = views - params_named.append(p) - - return params_named - - -def _remove_layer_props(chart, subcharts, layer_props): - def remove_prop(subchart, prop): - # If subchart is a UnitSpec, then subchart["height"] raises a KeyError - try: - if subchart[prop] is not Undefined: - subchart = subchart.copy() - subchart[prop] = Undefined - except KeyError: - pass - return subchart - - output_dict = {} - - if not subcharts: - # No subcharts = nothing to do. - return output_dict, subcharts - - for prop in layer_props: - if chart[prop] is Undefined: - # Top level does not have this prop. - # Check for consistent props within the subcharts. - values = [] - for c in subcharts: - # If c is a UnitSpec, then c["height"] raises a KeyError. - try: - val = c[prop] - if val is not Undefined: - values.append(val) - except KeyError: - pass - if len(values) == 0: - pass - elif all(v == values[0] for v in values[1:]): - output_dict[prop] = values[0] - else: - raise ValueError(f"There are inconsistent values {values} for {prop}") - else: - # Top level has this prop; subchart must either not have the prop - # or it must be Undefined or identical to proceed. 
- if all( - getattr(c, prop, Undefined) is Undefined or c[prop] == chart[prop] - for c in subcharts - ): - output_dict[prop] = chart[prop] - else: - raise ValueError(f"There are inconsistent values {values} for {prop}") - subcharts = [remove_prop(c, prop) for c in subcharts] - - return output_dict, subcharts - - -@utils.use_signature(core.SequenceParams) -def sequence(start, stop=None, step=Undefined, as_=Undefined, **kwds): - """Sequence generator.""" - if stop is None: - start, stop = 0, start - params = core.SequenceParams(start=start, stop=stop, step=step, **{"as": as_}) - return core.SequenceGenerator(sequence=params, **kwds) - - -@utils.use_signature(core.GraticuleParams) -def graticule(**kwds): - """Graticule generator.""" - if not kwds: - # graticule: True indicates default parameters - graticule = True - else: - graticule = core.GraticuleParams(**kwds) - return core.GraticuleGenerator(graticule=graticule) - - -def sphere(): - """Sphere generator.""" - return core.SphereGenerator(sphere=True) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Image-003ee87c.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Image-003ee87c.css deleted file mode 100644 index 60f45635043d082881d8d8a529c1142ee028a68b..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Image-003ee87c.css +++ /dev/null @@ -1 +0,0 @@ -img.svelte-gqt00k{border-radius:var(--radius-lg);max-width:none}img.selected.svelte-gqt00k{border-color:var(--border-color-accent)}.table.svelte-gqt00k{margin:0 auto;border:2px solid var(--border-color-primary);border-radius:var(--radius-lg);width:var(--size-20);height:var(--size-20);object-fit:cover}.gallery.svelte-gqt00k{border:2px solid var(--border-color-primary);max-height:var(--size-20);object-fit:cover} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/prism-0efcbb52.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/prism-0efcbb52.css deleted file mode 100644 index 964979ca61b532f61ecda7b511e85a75e94122b6..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/prism-0efcbb52.css +++ /dev/null @@ -1 +0,0 @@ -.gradio-container-3-37-0 code[class*=language-],.gradio-container-3-37-0 pre[class*=language-]{color:#000;background:none;text-shadow:0 1px white;font-family:Consolas,Monaco,Andale Mono,Ubuntu Mono,monospace;font-size:1em;text-align:left;white-space:pre;word-spacing:normal;word-break:normal;word-wrap:normal;line-height:1.5;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-hyphens:none;-moz-hyphens:none;-ms-hyphens:none;hyphens:none}.gradio-container-3-37-0 pre[class*=language-]::-moz-selection,.gradio-container-3-37-0 pre[class*=language-] ::-moz-selection,.gradio-container-3-37-0 code[class*=language-]::-moz-selection,.gradio-container-3-37-0 code[class*=language-] ::-moz-selection{text-shadow:none;background:#b3d4fc}.gradio-container-3-37-0 pre[class*=language-]::selection,.gradio-container-3-37-0 pre[class*=language-] ::selection,.gradio-container-3-37-0 code[class*=language-]::selection,.gradio-container-3-37-0 code[class*=language-] ::selection{text-shadow:none;background:#b3d4fc}@media print{.gradio-container-3-37-0 code[class*=language-],.gradio-container-3-37-0 pre[class*=language-]{text-shadow:none}}.gradio-container-3-37-0 
pre[class*=language-]{padding:1em;margin:.5em 0;overflow:auto}.gradio-container-3-37-0 :not(pre)>code[class*=language-],.gradio-container-3-37-0 pre[class*=language-]{background:#f5f2f0}.gradio-container-3-37-0 :not(pre)>code[class*=language-]{padding:.1em;border-radius:.3em;white-space:normal}.gradio-container-3-37-0 .token.comment,.gradio-container-3-37-0 .token.prolog,.gradio-container-3-37-0 .token.doctype,.gradio-container-3-37-0 .token.cdata{color:#708090}.gradio-container-3-37-0 .token.punctuation{color:#999}.gradio-container-3-37-0 .token.namespace{opacity:.7}.gradio-container-3-37-0 .token.property,.gradio-container-3-37-0 .token.tag,.gradio-container-3-37-0 .token.boolean,.gradio-container-3-37-0 .token.number,.gradio-container-3-37-0 .token.constant,.gradio-container-3-37-0 .token.symbol,.gradio-container-3-37-0 .token.deleted{color:#905}.gradio-container-3-37-0 .token.selector,.gradio-container-3-37-0 .token.attr-name,.gradio-container-3-37-0 .token.string,.gradio-container-3-37-0 .token.char,.gradio-container-3-37-0 .token.builtin,.gradio-container-3-37-0 .token.inserted{color:#690}.gradio-container-3-37-0 .token.operator,.gradio-container-3-37-0 .token.entity,.gradio-container-3-37-0 .token.url,.gradio-container-3-37-0 .language-css .token.string,.gradio-container-3-37-0 .style .token.string{color:#9a6e3a;background:hsla(0,0%,100%,.5)}.gradio-container-3-37-0 .token.atrule,.gradio-container-3-37-0 .token.attr-value,.gradio-container-3-37-0 .token.keyword{color:#07a}.gradio-container-3-37-0 .token.function,.gradio-container-3-37-0 .token.class-name{color:#dd4a68}.gradio-container-3-37-0 .token.regex,.gradio-container-3-37-0 .token.important,.gradio-container-3-37-0 .token.variable{color:#e90}.gradio-container-3-37-0 .token.important,.gradio-container-3-37-0 .token.bold{font-weight:700}.gradio-container-3-37-0 .token.italic{font-style:italic}.gradio-container-3-37-0 .token.entity{cursor:help} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_sync/interfaces.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_sync/interfaces.py deleted file mode 100644 index 5e95be1ec72425178245c32c33874303e0906405..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_sync/interfaces.py +++ /dev/null @@ -1,135 +0,0 @@ -from contextlib import contextmanager -from typing import Iterator, Optional, Union - -from .._models import ( - URL, - Extensions, - HeaderTypes, - Origin, - Request, - Response, - enforce_bytes, - enforce_headers, - enforce_url, - include_request_headers, -) - - -class RequestInterface: - def request( - self, - method: Union[bytes, str], - url: Union[URL, bytes, str], - *, - headers: HeaderTypes = None, - content: Union[bytes, Iterator[bytes], None] = None, - extensions: Optional[Extensions] = None, - ) -> Response: - # Strict type checking on our parameters. - method = enforce_bytes(method, name="method") - url = enforce_url(url, name="url") - headers = enforce_headers(headers, name="headers") - - # Include Host header, and optionally Content-Length or Transfer-Encoding. 
- headers = include_request_headers(headers, url=url, content=content) - - request = Request( - method=method, - url=url, - headers=headers, - content=content, - extensions=extensions, - ) - response = self.handle_request(request) - try: - response.read() - finally: - response.close() - return response - - @contextmanager - def stream( - self, - method: Union[bytes, str], - url: Union[URL, bytes, str], - *, - headers: HeaderTypes = None, - content: Union[bytes, Iterator[bytes], None] = None, - extensions: Optional[Extensions] = None, - ) -> Iterator[Response]: - # Strict type checking on our parameters. - method = enforce_bytes(method, name="method") - url = enforce_url(url, name="url") - headers = enforce_headers(headers, name="headers") - - # Include Host header, and optionally Content-Length or Transfer-Encoding. - headers = include_request_headers(headers, url=url, content=content) - - request = Request( - method=method, - url=url, - headers=headers, - content=content, - extensions=extensions, - ) - response = self.handle_request(request) - try: - yield response - finally: - response.close() - - def handle_request(self, request: Request) -> Response: - raise NotImplementedError() # pragma: nocover - - -class ConnectionInterface(RequestInterface): - def close(self) -> None: - raise NotImplementedError() # pragma: nocover - - def info(self) -> str: - raise NotImplementedError() # pragma: nocover - - def can_handle_request(self, origin: Origin) -> bool: - raise NotImplementedError() # pragma: nocover - - def is_available(self) -> bool: - """ - Return `True` if the connection is currently able to accept an - outgoing request. - - An HTTP/1.1 connection will only be available if it is currently idle. - - An HTTP/2 connection will be available so long as the stream ID space is - not yet exhausted, and the connection is not in an error state. - - While the connection is being established we may not yet know if it is going - to result in an HTTP/1.1 or HTTP/2 connection. The connection should be - treated as being available, but might ultimately raise `NewConnectionRequired` - required exceptions if multiple requests are attempted over a connection - that ends up being established as HTTP/1.1. - """ - raise NotImplementedError() # pragma: nocover - - def has_expired(self) -> bool: - """ - Return `True` if the connection is in a state where it should be closed. - - This either means that the connection is idle and it has passed the - expiry time on its keep-alive, or that server has sent an EOF. - """ - raise NotImplementedError() # pragma: nocover - - def is_idle(self) -> bool: - """ - Return `True` if the connection is currently idle. - """ - raise NotImplementedError() # pragma: nocover - - def is_closed(self) -> bool: - """ - Return `True` if the connection has been closed. - - Used when a response is closed to determine if the connection may be - returned to the connection pool or not. 
- """ - raise NotImplementedError() # pragma: nocover diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_transports/mock.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_transports/mock.py deleted file mode 100644 index 82043da2d908f7575097f14b08c1a8a60fa1f8a4..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_transports/mock.py +++ /dev/null @@ -1,38 +0,0 @@ -import typing - -from .._models import Request, Response -from .base import AsyncBaseTransport, BaseTransport - -SyncHandler = typing.Callable[[Request], Response] -AsyncHandler = typing.Callable[[Request], typing.Coroutine[None, None, Response]] - - -class MockTransport(AsyncBaseTransport, BaseTransport): - def __init__(self, handler: typing.Union[SyncHandler, AsyncHandler]) -> None: - self.handler = handler - - def handle_request( - self, - request: Request, - ) -> Response: - request.read() - response = self.handler(request) - if not isinstance(response, Response): # pragma: no cover - raise TypeError("Cannot use an async handler in a sync Client") - return response - - async def handle_async_request( - self, - request: Request, - ) -> Response: - await request.aread() - response = self.handler(request) - - # Allow handler to *optionally* be an `async` function. - # If it is, then the `response` variable need to be awaited to actually - # return the result. - - if not isinstance(response, Response): - response = await response - - return response diff --git a/spaces/DianXian/Real-CUGAN/app.py b/spaces/DianXian/Real-CUGAN/app.py deleted file mode 100644 index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000 --- a/spaces/DianXian/Real-CUGAN/app.py +++ /dev/null @@ -1,62 +0,0 @@ -from upcunet_v3 import RealWaifuUpScaler -import gradio as gr -import time -import logging -import os -from PIL import ImageOps -import numpy as np -import math - - -def greet(input_img, input_model_name, input_tile_mode): - # if input_img.size[0] * input_img.size[1] > 256 * 256: - # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1])) - # x = int(input_img.size[0]/input_img.size[1]*y) - # input_img = ImageOps.fit(input_img, (x, y)) - input_img = np.array(input_img) - if input_model_name not in model_cache: - t1 = time.time() - upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu") - t2 = time.time() - logger.info(f'load model time, {t2 - t1}') - model_cache[input_model_name] = upscaler - else: - upscaler = model_cache[input_model_name] - logger.info(f'load model from cache') - - start = time.time() - result = upscaler(input_img, tile_mode=input_tile_mode) - end = time.time() - logger.info(f'input_model_name, {input_model_name}') - logger.info(f'input_tile_mode, {input_tile_mode}') - logger.info(f'input shape, {input_img.shape}') - logger.info(f'output shape, {result.shape}') - logger.info(f'speed time, {end - start}') - return result - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s") - logger = logging.getLogger() - - ModelPath = "weights_v3/" - model_cache = {} - - input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='选择model') - input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='选择tile_mode') - input_img = gr.inputs.Image(label='image', type='pil') - - inputs = [input_img, input_model_name, 
input_tile_mode] - outputs = "image" - iface = gr.Interface(fn=greet, - inputs=inputs, - outputs=outputs, - allow_screenshot=False, - allow_flagging='never', - examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]], - article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
' - '感谢b站开源的项目,图片过大会导致内存不足,所以我将图片裁剪小,想体验大图片的效果请自行前往上面的链接。
' - '修改bbb' - 'The large image will lead to memory limit exceeded. So I crop and resize image. ' - 'If you want to experience the large image, please go to the link above.') - iface.launch() diff --git a/spaces/DorisB/streamlit-app/recom.py b/spaces/DorisB/streamlit-app/recom.py deleted file mode 100644 index 9bdd6a55c4275e9f27c984004cee178f7a00d6e2..0000000000000000000000000000000000000000 --- a/spaces/DorisB/streamlit-app/recom.py +++ /dev/null @@ -1,77 +0,0 @@ -#imports - -import streamlit as st -import pandas as pd -from PIL import Image -import pickle -from pathlib import Path -import requests -from streamlit_lottie import st_lottie - - - - - - -def main(): - st.set_page_config(layout="wide") - with open('style.css') as f: - st.markdown(f'', unsafe_allow_html=True) - - - hide_menu = """ - - """ - - hide_sidebar = """ - - """ - - st.markdown(hide_menu, unsafe_allow_html=True) - st.markdown(hide_sidebar, unsafe_allow_html=True) - - - def load_lottie(url): - r = requests.get(url) - if r.status_code != 200: - return None - return r.json() - - lottie = load_lottie("https://assets2.lottiefiles.com/private_files/lf30_zSGy1w.json") - st.image("images/logo-recom2.png") - cols = st.columns((2,3)) - with cols[1]: - - st_lottie(lottie, height=400, key="coding") - with cols[0]: - st.markdown("

Hello :)

", unsafe_allow_html=True) - login = st.text_input("Username: ", 'admin') - password = st.text_input("Password: ", "recom_demo") - st.markdown('LOGIN', unsafe_allow_html=True) - - - - -if __name__ == '__main__': - main() - diff --git a/spaces/DragGan/DragGan-Inversion/visualizer_drag_gradio.py b/spaces/DragGan/DragGan-Inversion/visualizer_drag_gradio.py deleted file mode 100644 index a4e14e9b81e21325a38e99064a755b24f15afac4..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/visualizer_drag_gradio.py +++ /dev/null @@ -1,934 +0,0 @@ -# https://huggingface.co/DragGan/DragGan-Models -# https://arxiv.org/abs/2305.10973 -import os -import os.path as osp -from argparse import ArgumentParser -from functools import partial -from pathlib import Path -import time - -import psutil - -import gradio as gr -import numpy as np -import torch -from PIL import Image - -import dnnlib -from gradio_utils import (ImageMask, draw_mask_on_image, draw_points_on_image, - get_latest_points_pair, get_valid_mask, - on_change_single_global_state) -from viz.renderer import Renderer, add_watermark_np - - -# download models from Hugging Face hub -from huggingface_hub import snapshot_download - -model_dir = Path('./checkpoints') -snapshot_download('DragGan/DragGan-Models', - repo_type='model', local_dir=model_dir) - -cache_dir = model_dir - -device = 'cuda' -IS_SPACE = "DragGan/DragGan" in os.environ.get('SPACE_ID', '') -TIMEOUT = 80 - - -def reverse_point_pairs(points): - new_points = [] - for p in points: - new_points.append([p[1], p[0]]) - return new_points - - -def clear_state(global_state, target=None): - """Clear target history state from global_state - If target is not defined, points and mask will be both removed. - 1. set global_state['points'] as empty dict - 2. set global_state['mask'] as full-one mask. - """ - if target is None: - target = ['point', 'mask'] - if not isinstance(target, list): - target = [target] - if 'point' in target: - global_state['points'] = dict() - print('Clear Points State!') - if 'mask' in target: - image_raw = global_state["images"]["image_raw"] - global_state['mask'] = np.ones((image_raw.size[1], image_raw.size[0]), - dtype=np.uint8) - print('Clear mask State!') - - return global_state - - -def init_images(global_state): - """This function is called only ones with Gradio App is started. - 0. pre-process global_state, unpack value from global_state of need - 1. Re-init renderer - 2. run `renderer._render_drag_impl` with `is_drag=False` to generate - new image - 3. 
Assign images to global state and re-generate mask - """ - - if isinstance(global_state, gr.State): - state = global_state.value - else: - state = global_state - - state['renderer'].init_network( - state['generator_params'], # res - valid_checkpoints_dict[state['pretrained_weight']], # pkl - state['params']['seed'], # w0_seed, - None, # w_load - state['params']['latent_space'] == 'w+', # w_plus - 'const', - state['params']['trunc_psi'], # trunc_psi, - state['params']['trunc_cutoff'], # trunc_cutoff, - None, # input_transform - state['params']['lr'] # lr, - ) - - state['renderer']._render_drag_impl(state['generator_params'], - is_drag=False, - to_pil=True) - - init_image = state['generator_params'].image - state['images']['image_orig'] = init_image - state['images']['image_raw'] = init_image - state['images']['image_show'] = Image.fromarray( - add_watermark_np(np.array(init_image))) - state['mask'] = np.ones((init_image.size[1], init_image.size[0]), - dtype=np.uint8) - return global_state - - -def update_image_draw(image, points, mask, show_mask, global_state=None): - - image_draw = draw_points_on_image(image, points) - if show_mask and mask is not None and not (mask == 0).all() and not ( - mask == 1).all(): - image_draw = draw_mask_on_image(image_draw, mask) - - image_draw = Image.fromarray(add_watermark_np(np.array(image_draw))) - if global_state is not None: - global_state['images']['image_show'] = image_draw - return image_draw - - -def preprocess_mask_info(global_state, image): - """Function to handle mask information. - 1. last_mask is None: Do not need to change mask, return mask - 2. last_mask is not None: - 2.1 global_state is remove_mask: - 2.2 global_state is add_mask: - """ - if isinstance(image, dict): - last_mask = get_valid_mask(image['mask']) - else: - last_mask = None - mask = global_state['mask'] - - # mask in global state is a placeholder with all 1. 
- if (mask == 1).all(): - mask = last_mask - - # last_mask = global_state['last_mask'] - editing_mode = global_state['editing_state'] - - if last_mask is None: - return global_state - - if editing_mode == 'remove_mask': - updated_mask = np.clip(mask - last_mask, 0, 1) - print(f'Last editing_state is {editing_mode}, do remove.') - elif editing_mode == 'add_mask': - updated_mask = np.clip(mask + last_mask, 0, 1) - print(f'Last editing_state is {editing_mode}, do add.') - else: - updated_mask = mask - print(f'Last editing_state is {editing_mode}, ' - 'do nothing to mask.') - - global_state['mask'] = updated_mask - # global_state['last_mask'] = None # clear buffer - return global_state - - -def print_memory_usage(): - # Print system memory usage - print(f"System memory usage: {psutil.virtual_memory().percent}%") - - # Print GPU memory usage - if torch.cuda.is_available(): - device = torch.device("cuda") - print(f"GPU memory usage: {torch.cuda.memory_allocated() / 1e9} GB") - print( - f"Max GPU memory usage: {torch.cuda.max_memory_allocated() / 1e9} GB") - device_properties = torch.cuda.get_device_properties(device) - available_memory = device_properties.total_memory - \ - torch.cuda.max_memory_allocated() - print(f"Available GPU memory: {available_memory / 1e9} GB") - else: - print("No GPU available") - - -# filter large models running on SPACES -allowed_checkpoints = [] # all checkpoints -if IS_SPACE: - allowed_checkpoints = ["stylegan_human_v2_512.pkl", - "stylegan2_dogs_1024_pytorch.pkl"] - -valid_checkpoints_dict = { - f.name.split('.')[0]: str(f) - for f in Path(cache_dir).glob('*.pkl') - if f.name in allowed_checkpoints or not IS_SPACE -} -print('Valid checkpoint file:') -print(valid_checkpoints_dict) - -init_pkl = 'stylegan_human_v2_512' - -with gr.Blocks() as app: - gr.Markdown(""" -# DragGAN - Drag Your GAN -## Interactive Point-based Manipulation on the Generative Image Manifold -### Unofficial Gradio Demo - -**Due to high demand, only one model can be run at a time, or you can duplicate the space and run your own copy.** - - -Duplicate Space for no queue on your own hardware.

- -* Official Repo: [XingangPan](https://github.com/XingangPan/DragGAN) -* Gradio Demo by: [LeoXing1996](https://github.com/LeoXing1996) © [OpenMMLab MMagic](https://github.com/open-mmlab/mmagic) -""") - - # renderer = Renderer() - global_state = gr.State({ - "images": { - # image_orig: the original image, change with seed/model is changed - # image_raw: image with mask and points, change durning optimization - # image_show: image showed on screen - }, - "temporal_params": { - # stop - }, - 'mask': - None, # mask for visualization, 1 for editing and 0 for unchange - 'last_mask': None, # last edited mask - 'show_mask': True, # add button - "generator_params": dnnlib.EasyDict(), - "params": { - "seed": int(np.random.randint(0, 2**32 - 1)), - "motion_lambda": 20, - "r1_in_pixels": 3, - "r2_in_pixels": 12, - "magnitude_direction_in_pixels": 1.0, - "latent_space": "w+", - "trunc_psi": 0.7, - "trunc_cutoff": None, - "lr": 0.001, - }, - "device": device, - "draw_interval": 1, - "renderer": Renderer(disable_timing=True), - "points": {}, - "curr_point": None, - "curr_type_point": "start", - 'editing_state': 'add_points', - 'pretrained_weight': init_pkl - }) - - # init image - global_state = init_images(global_state) - with gr.Row(): - - with gr.Row(): - - # Left --> tools - with gr.Column(scale=3): - - # Pickle - with gr.Row(): - - with gr.Column(scale=1, min_width=10): - gr.Markdown(value='Pickle', show_label=False) - - with gr.Column(scale=4, min_width=10): - form_pretrained_dropdown = gr.Dropdown( - choices=list(valid_checkpoints_dict.keys()), - label="Pretrained Model", - value=init_pkl, - ) - - # Latent - with gr.Row(): - with gr.Column(scale=1, min_width=10): - gr.Markdown(value='Latent', show_label=False) - - with gr.Column(scale=4, min_width=10): - form_seed_number = gr.Slider( - mininium=0, - maximum=2**32-1, - step=1, - value=global_state.value['params']['seed'], - interactive=True, - # randomize=True, - label="Seed", - ) - form_lr_number = gr.Number( - value=global_state.value["params"]["lr"], - interactive=True, - label="Step Size") - - with gr.Row(): - with gr.Column(scale=2, min_width=10): - form_reset_image = gr.Button("Reset Image") - with gr.Column(scale=3, min_width=10): - form_latent_space = gr.Radio( - ['w', 'w+'], - value=global_state.value['params'] - ['latent_space'], - interactive=True, - label='Latent space to optimize', - show_label=False, - ) - - # Drag - with gr.Row(): - with gr.Column(scale=1, min_width=10): - gr.Markdown(value='Drag', show_label=False) - with gr.Column(scale=4, min_width=10): - with gr.Row(): - with gr.Column(scale=1, min_width=10): - enable_add_points = gr.Button('Add Points') - with gr.Column(scale=1, min_width=10): - undo_points = gr.Button('Reset Points') - with gr.Row(): - with gr.Column(scale=1, min_width=10): - form_start_btn = gr.Button("Start") - with gr.Column(scale=1, min_width=10): - form_stop_btn = gr.Button("Stop") - - form_steps_number = gr.Number(value=0, - label="Steps", - interactive=False) - - # Mask - with gr.Row(): - with gr.Column(scale=1, min_width=10): - gr.Markdown(value='Mask', show_label=False) - with gr.Column(scale=4, min_width=10): - enable_add_mask = gr.Button('Edit Flexible Area') - with gr.Row(): - with gr.Column(scale=1, min_width=10): - form_reset_mask_btn = gr.Button("Reset mask") - with gr.Column(scale=1, min_width=10): - show_mask = gr.Checkbox( - label='Show Mask', - value=global_state.value['show_mask'], - show_label=False) - - with gr.Row(): - form_lambda_number = gr.Number( - value=global_state.value["params"] 
- ["motion_lambda"], - interactive=True, - label="Lambda", - ) - - form_draw_interval_number = gr.Number( - value=global_state.value["draw_interval"], - label="Draw Interval (steps)", - interactive=True, - visible=False) - - # Right --> Image - with gr.Column(scale=8): - form_image = ImageMask( - value=global_state.value['images']['image_show'], - brush_radius=20).style( - width=768, - height=768) # NOTE: hard image size code here. - gr.Markdown(""" - ## Quick Start - - 1. Select desired `Pretrained Model` and adjust `Seed` to generate an - initial image. - 2. Click on image to add control points. - 3. Click `Start` and enjoy it! - - ## Advance Usage - - 1. Change `Step Size` to adjust learning rate in drag optimization. - 2. Select `w` or `w+` to change latent space to optimize: - * Optimize on `w` space may cause greater influence to the image. - * Optimize on `w+` space may work slower than `w`, but usually achieve - better results. - * Note that changing the latent space will reset the image, points and - mask (this has the same effect as `Reset Image` button). - 3. Click `Edit Flexible Area` to create a mask and constrain the - unmasked region to remain unchanged. - - - """) - gr.HTML(""" - -
- Gradio demo supported by - - OpenMMLab MMagic -
- """) - # Network & latents tab listeners - - def on_change_pretrained_dropdown(pretrained_value, global_state): - """Function to handle model change. - 1. Set pretrained value to global_state - 2. Re-init images and clear all states - """ - - global_state['pretrained_weight'] = pretrained_value - init_images(global_state) - clear_state(global_state) - - return global_state, global_state["images"]['image_show'] - - form_pretrained_dropdown.change( - on_change_pretrained_dropdown, - inputs=[form_pretrained_dropdown, global_state], - outputs=[global_state, form_image], - queue=True, - ) - - def on_click_reset_image(global_state): - """Reset image to the original one and clear all states - 1. Re-init images - 2. Clear all states - """ - - init_images(global_state) - clear_state(global_state) - - return global_state, global_state['images']['image_show'] - - form_reset_image.click( - on_click_reset_image, - inputs=[global_state], - outputs=[global_state, form_image], - queue=False, - ) - - # Update parameters - def on_change_update_image_seed(seed, global_state): - """Function to handle generation seed change. - 1. Set seed to global_state - 2. Re-init images and clear all states - """ - - global_state["params"]["seed"] = int(seed) - init_images(global_state) - clear_state(global_state) - - return global_state, global_state['images']['image_show'] - - form_seed_number.change( - on_change_update_image_seed, - inputs=[form_seed_number, global_state], - outputs=[global_state, form_image], - ) - - def on_click_latent_space(latent_space, global_state): - """Function to reset latent space to optimize. - NOTE: this function we reset the image and all controls - 1. Set latent-space to global_state - 2. Re-init images and clear all state - """ - - global_state['params']['latent_space'] = latent_space - init_images(global_state) - clear_state(global_state) - - return global_state, global_state['images']['image_show'] - - form_latent_space.change(on_click_latent_space, - inputs=[form_latent_space, global_state], - outputs=[global_state, form_image]) - - # ==== Params - form_lambda_number.change( - partial(on_change_single_global_state, ["params", "motion_lambda"]), - inputs=[form_lambda_number, global_state], - outputs=[global_state], - ) - - def on_change_lr(lr, global_state): - if lr == 0: - print('lr is 0, do nothing.') - return global_state - else: - global_state["params"]["lr"] = lr - renderer = global_state['renderer'] - renderer.update_lr(lr) - print('New optimizer: ') - print(renderer.w_optim) - return global_state - - form_lr_number.change( - on_change_lr, - inputs=[form_lr_number, global_state], - outputs=[global_state], - queue=False, - ) - - def on_click_start(global_state, image): - p_in_pixels = [] - t_in_pixels = [] - valid_points = [] - - # handle of start drag in mask editing mode - global_state = preprocess_mask_info(global_state, image) - - # Prepare the points for the inference - if len(global_state["points"]) == 0: - # yield on_click_start_wo_points(global_state, image) - image_raw = global_state['images']['image_raw'] - update_image_draw( - image_raw, - global_state['points'], - global_state['mask'], - global_state['show_mask'], - global_state, - ) - - yield ( - global_state, - 0, - global_state['images']['image_show'], - # gr.File.update(visible=False), - gr.Button.update(interactive=True), - gr.Button.update(interactive=True), - gr.Button.update(interactive=True), - gr.Button.update(interactive=True), - gr.Button.update(interactive=True), - # latent space - 
gr.Radio.update(interactive=True), - gr.Button.update(interactive=True), - # NOTE: disable stop button - gr.Button.update(interactive=False), - - # update other comps - gr.Dropdown.update(interactive=True), - gr.Number.update(interactive=True), - gr.Number.update(interactive=True), - gr.Button.update(interactive=True), - gr.Button.update(interactive=True), - gr.Checkbox.update(interactive=True), - # gr.Number.update(interactive=True), - gr.Number.update(interactive=True), - ) - else: - - # Transform the points into torch tensors - for key_point, point in global_state["points"].items(): - try: - p_start = point.get("start_temp", point["start"]) - p_end = point["target"] - - if p_start is None or p_end is None: - continue - - except KeyError: - continue - - p_in_pixels.append(p_start) - t_in_pixels.append(p_end) - valid_points.append(key_point) - - mask = torch.tensor(global_state['mask']).float() - drag_mask = 1 - mask - - renderer: Renderer = global_state["renderer"] - global_state['temporal_params']['stop'] = False - global_state['editing_state'] = 'running' - - # reverse points order - p_to_opt = reverse_point_pairs(p_in_pixels) - t_to_opt = reverse_point_pairs(t_in_pixels) - print('Running with:') - print(f' Source: {p_in_pixels}') - print(f' Target: {t_in_pixels}') - step_idx = 0 - last_time = time.time() - while True: - print_memory_usage() - # add a TIMEOUT break - print(f'Running time: {time.time() - last_time}') - if IS_SPACE and time.time() - last_time > TIMEOUT: - print('Timeout break!') - break - if global_state["temporal_params"]["stop"] or global_state['generator_params']["stop"]: - break - - # do drage here! - renderer._render_drag_impl( - global_state['generator_params'], - p_to_opt, # point - t_to_opt, # target - drag_mask, # mask, - global_state['params']['motion_lambda'], # lambda_mask - reg=0, - feature_idx=5, # NOTE: do not support change for now - r1=global_state['params']['r1_in_pixels'], # r1 - r2=global_state['params']['r2_in_pixels'], # r2 - # random_seed = 0, - # noise_mode = 'const', - trunc_psi=global_state['params']['trunc_psi'], - # force_fp32 = False, - # layer_name = None, - # sel_channels = 3, - # base_channel = 0, - # img_scale_db = 0, - # img_normalize = False, - # untransform = False, - is_drag=True, - to_pil=True) - - if step_idx % global_state['draw_interval'] == 0: - print('Current Source:') - for key_point, p_i, t_i in zip(valid_points, p_to_opt, - t_to_opt): - global_state["points"][key_point]["start_temp"] = [ - p_i[1], - p_i[0], - ] - global_state["points"][key_point]["target"] = [ - t_i[1], - t_i[0], - ] - start_temp = global_state["points"][key_point][ - "start_temp"] - print(f' {start_temp}') - - image_result = global_state['generator_params']['image'] - image_draw = update_image_draw( - image_result, - global_state['points'], - global_state['mask'], - global_state['show_mask'], - global_state, - ) - global_state['images']['image_raw'] = image_result - - yield ( - global_state, - step_idx, - global_state['images']['image_show'], - # gr.File.update(visible=False), - gr.Button.update(interactive=False), - gr.Button.update(interactive=False), - gr.Button.update(interactive=False), - gr.Button.update(interactive=False), - gr.Button.update(interactive=False), - # latent space - gr.Radio.update(interactive=False), - gr.Button.update(interactive=False), - # enable stop button in loop - gr.Button.update(interactive=True), - - # update other comps - gr.Dropdown.update(interactive=False), - gr.Number.update(interactive=False), - 
gr.Number.update(interactive=False), - gr.Button.update(interactive=False), - gr.Button.update(interactive=False), - gr.Checkbox.update(interactive=False), - # gr.Number.update(interactive=False), - gr.Number.update(interactive=False), - ) - - # increate step - step_idx += 1 - - image_result = global_state['generator_params']['image'] - global_state['images']['image_raw'] = image_result - image_draw = update_image_draw(image_result, - global_state['points'], - global_state['mask'], - global_state['show_mask'], - global_state) - - # fp = NamedTemporaryFile(suffix=".png", delete=False) - # image_result.save(fp, "PNG") - - global_state['editing_state'] = 'add_points' - - yield ( - global_state, - 0, # reset step to 0 after stop. - global_state['images']['image_show'], - # gr.File.update(visible=True, value=fp.name), - gr.Button.update(interactive=True), - gr.Button.update(interactive=True), - gr.Button.update(interactive=True), - gr.Button.update(interactive=True), - gr.Button.update(interactive=True), - # latent space - gr.Radio.update(interactive=True), - gr.Button.update(interactive=True), - # NOTE: disable stop button with loop finish - gr.Button.update(interactive=False), - - # update other comps - gr.Dropdown.update(interactive=True), - gr.Number.update(interactive=True), - gr.Number.update(interactive=True), - gr.Checkbox.update(interactive=True), - gr.Number.update(interactive=True), - ) - - form_start_btn.click( - on_click_start, - inputs=[global_state, form_image], - outputs=[ - global_state, - form_steps_number, - form_image, - # form_download_result_file, - # >>> buttons - form_reset_image, - enable_add_points, - enable_add_mask, - undo_points, - form_reset_mask_btn, - form_latent_space, - form_start_btn, - form_stop_btn, - # <<< buttonm - # >>> inputs comps - form_pretrained_dropdown, - form_seed_number, - form_lr_number, - show_mask, - form_lambda_number, - ], - ) - - def on_click_stop(global_state): - """Function to handle stop button is clicked. - 1. send a stop signal by set global_state["temporal_params"]["stop"] as True - 2. Disable Stop button - """ - global_state["temporal_params"]["stop"] = True - - return global_state, gr.Button.update(interactive=False) - - form_stop_btn.click(on_click_stop, - inputs=[global_state], - outputs=[global_state, form_stop_btn], - queue=False) - - form_draw_interval_number.change( - partial( - on_change_single_global_state, - "draw_interval", - map_transform=lambda x: int(x), - ), - inputs=[form_draw_interval_number, global_state], - outputs=[global_state], - queue=False, - ) - - def on_click_remove_point(global_state): - choice = global_state["curr_point"] - del global_state["points"][choice] - - choices = list(global_state["points"].keys()) - - if len(choices) > 0: - global_state["curr_point"] = choices[0] - - return ( - gr.Dropdown.update(choices=choices, value=choices[0]), - global_state, - ) - - # Mask - def on_click_reset_mask(global_state): - global_state['mask'] = np.ones( - ( - global_state["images"]["image_raw"].size[1], - global_state["images"]["image_raw"].size[0], - ), - dtype=np.uint8, - ) - image_draw = update_image_draw(global_state['images']['image_raw'], - global_state['points'], - global_state['mask'], - global_state['show_mask'], global_state) - return global_state, image_draw - - form_reset_mask_btn.click( - on_click_reset_mask, - inputs=[global_state], - outputs=[global_state, form_image], - ) - - # Image - def on_click_enable_draw(global_state, image): - """Function to start add mask mode. - 1. 
Preprocess mask info from last state - 2. Change editing state to add_mask - 3. Set curr image with points and mask - """ - global_state = preprocess_mask_info(global_state, image) - global_state['editing_state'] = 'add_mask' - image_raw = global_state['images']['image_raw'] - image_draw = update_image_draw(image_raw, global_state['points'], - global_state['mask'], True, - global_state) - return (global_state, - gr.Image.update(value=image_draw, interactive=True)) - - def on_click_remove_draw(global_state, image): - """Function to start remove mask mode. - 1. Preprocess mask info from last state - 2. Change editing state to remove_mask - 3. Set curr image with points and mask - """ - global_state = preprocess_mask_info(global_state, image) - global_state['editing_state'] = 'remove_mask' - image_raw = global_state['images']['image_raw'] - image_draw = update_image_draw(image_raw, global_state['points'], - global_state['mask'], True, - global_state) - return (global_state, - gr.Image.update(value=image_draw, interactive=True)) - - enable_add_mask.click(on_click_enable_draw, - inputs=[global_state, form_image], - outputs=[ - global_state, - form_image, - ], - queue=False) - - def on_click_add_point(global_state, image: dict): - """Function to switch from add mask mode to add points mode. - 1. Update mask buffer if needed - 2. Change global_state['editing_state'] to 'add_points' - 3. Set current image with mask - """ - - global_state = preprocess_mask_info(global_state, image) - global_state['editing_state'] = 'add_points' - mask = global_state['mask'] - image_raw = global_state['images']['image_raw'] - image_draw = update_image_draw(image_raw, global_state['points'], mask, - global_state['show_mask'], global_state) - - return (global_state, - gr.Image.update(value=image_draw, interactive=False)) - - enable_add_points.click(on_click_add_point, - inputs=[global_state, form_image], - outputs=[global_state, form_image], - queue=False) - - def on_click_image(global_state, evt: gr.SelectData): - """This function only supports click for point selection - """ - xy = evt.index - if global_state['editing_state'] != 'add_points': - print(f'In {global_state["editing_state"]} state. ' - 'Do not add points.') - - return global_state, global_state['images']['image_show'] - - points = global_state["points"] - - point_idx = get_latest_points_pair(points) - if point_idx is None: - points[0] = {'start': xy, 'target': None} - print(f'Click Image - Start - {xy}') - elif points[point_idx].get('target', None) is None: - points[point_idx]['target'] = xy - print(f'Click Image - Target - {xy}') - else: - points[point_idx + 1] = {'start': xy, 'target': None} - print(f'Click Image - Start - {xy}') - - image_raw = global_state['images']['image_raw'] - image_draw = update_image_draw( - image_raw, - global_state['points'], - global_state['mask'], - global_state['show_mask'], - global_state, - ) - - return global_state, image_draw - - form_image.select( - on_click_image, - inputs=[global_state], - outputs=[global_state, form_image], - queue=False, - ) - - def on_click_clear_points(global_state): - """Function to handle clearing all control points - 1. clear global_state['points'] (clear_state) - 2. re-init network - 3. 
re-draw image - """ - clear_state(global_state, target='point') - - renderer: Renderer = global_state["renderer"] - renderer.feat_refs = None - - image_raw = global_state['images']['image_raw'] - image_draw = update_image_draw(image_raw, {}, global_state['mask'], - global_state['show_mask'], global_state) - return global_state, image_draw - - undo_points.click(on_click_clear_points, - inputs=[global_state], - outputs=[global_state, form_image], - queue=False) - - def on_click_show_mask(global_state, show_mask): - """Function to control whether show mask on image.""" - global_state['show_mask'] = show_mask - - image_raw = global_state['images']['image_raw'] - image_draw = update_image_draw( - image_raw, - global_state['points'], - global_state['mask'], - global_state['show_mask'], - global_state, - ) - return global_state, image_draw - - show_mask.change( - on_click_show_mask, - inputs=[global_state, show_mask], - outputs=[global_state, form_image], - queue=False, - ) - -gr.close_all() -app.queue(concurrency_count=1, max_size=200, api_open=False) -app.launch(show_api=False) diff --git a/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/stylegan2/__init__.py b/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/stylegan2/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/imp/panoptic_fpn_r50_fpn_1x_predcls_psg.py b/spaces/ECCV2022/PSG/OpenPSG/configs/imp/panoptic_fpn_r50_fpn_1x_predcls_psg.py deleted file mode 100644 index 93189cfd37a51374fe62e29b0bc8550559da3a27..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/PSG/OpenPSG/configs/imp/panoptic_fpn_r50_fpn_1x_predcls_psg.py +++ /dev/null @@ -1,44 +0,0 @@ -_base_ = [ - '../motifs/panoptic_fpn_r50_fpn_1x_predcls_psg.py', -] - -model = dict(relation_head=dict( - type='IMPHead', - head_config=dict( - # NOTE: Evaluation type - use_gt_box=True, - use_gt_label=True, - num_iter=2, - ), -)) - -evaluation = dict(interval=1, - metric='predcls', - relation_mode=True, - classwise=True) - -# Change batch size and learning rate -data = dict(samples_per_gpu=16, ) -# workers_per_gpu=0) # FIXME: Is this the problem? 
-optimizer = dict(type='SGD', lr=0.001, momentum=0.9) - -# Log config -project_name = 'openpsg' -expt_name = 'imp_panoptic_fpn_r50_fpn_1x_predcls_psg' -work_dir = f'./work_dirs/{expt_name}' - -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - # dict(type='TensorboardLoggerHook') - dict( - type='WandbLoggerHook', - init_kwargs=dict( - project=project_name, - name=expt_name, - # config=work_dir + "/cfg.yaml" - ), - ), - ], -) diff --git a/spaces/ECCV2022/bytetrack/tools/track_deepsort.py b/spaces/ECCV2022/bytetrack/tools/track_deepsort.py deleted file mode 100644 index 06f4106858754ad80fe51356a67da5665ebcf92d..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/tools/track_deepsort.py +++ /dev/null @@ -1,293 +0,0 @@ -from loguru import logger - -import torch -import torch.backends.cudnn as cudnn -from torch.nn.parallel import DistributedDataParallel as DDP - -from yolox.core import launch -from yolox.exp import get_exp -from yolox.utils import configure_nccl, fuse_model, get_local_rank, get_model_info, setup_logger -from yolox.evaluators import MOTEvaluator - -import argparse -import os -import random -import warnings -import glob -import motmetrics as mm -from collections import OrderedDict -from pathlib import Path - - -def make_parser(): - parser = argparse.ArgumentParser("YOLOX Eval") - parser.add_argument("-expn", "--experiment-name", type=str, default=None) - parser.add_argument("-n", "--name", type=str, default=None, help="model name") - - # distributed - parser.add_argument( - "--dist-backend", default="nccl", type=str, help="distributed backend" - ) - parser.add_argument( - "--dist-url", - default=None, - type=str, - help="url used to set up distributed training", - ) - parser.add_argument("-b", "--batch-size", type=int, default=64, help="batch size") - parser.add_argument( - "-d", "--devices", default=None, type=int, help="device for training" - ) - parser.add_argument( - "--local_rank", default=0, type=int, help="local rank for dist training" - ) - parser.add_argument( - "--num_machines", default=1, type=int, help="num of node for training" - ) - parser.add_argument( - "--machine_rank", default=0, type=int, help="node rank for multi-node training" - ) - parser.add_argument( - "-f", - "--exp_file", - default=None, - type=str, - help="pls input your expriment description file", - ) - parser.add_argument( - "--fp16", - dest="fp16", - default=False, - action="store_true", - help="Adopting mix precision evaluating.", - ) - parser.add_argument( - "--fuse", - dest="fuse", - default=False, - action="store_true", - help="Fuse conv and bn for testing.", - ) - parser.add_argument( - "--trt", - dest="trt", - default=False, - action="store_true", - help="Using TensorRT model for testing.", - ) - parser.add_argument( - "--test", - dest="test", - default=False, - action="store_true", - help="Evaluating on test-dev set.", - ) - parser.add_argument( - "--speed", - dest="speed", - default=False, - action="store_true", - help="speed test only.", - ) - parser.add_argument( - "opts", - help="Modify config options using the command-line", - default=None, - nargs=argparse.REMAINDER, - ) - # det args - parser.add_argument("-c", "--ckpt", default=None, type=str, help="ckpt for eval") - parser.add_argument("--conf", default=0.1, type=float, help="test conf") - parser.add_argument("--nms", default=0.7, type=float, help="test nms threshold") - parser.add_argument("--tsize", default=None, type=int, help="test img size") - parser.add_argument("--seed", 
default=None, type=int, help="eval seed") - # tracking args - parser.add_argument("--track_thresh", type=float, default=0.6, help="tracking confidence threshold") - parser.add_argument("--track_buffer", type=int, default=30, help="the frames for keep lost tracks") - parser.add_argument("--match_thresh", type=int, default=0.9, help="matching threshold for tracking") - parser.add_argument('--min-box-area', type=float, default=100, help='filter out tiny boxes') - # deepsort args - parser.add_argument("--model_folder", type=str, default='pretrained/ckpt.t7', help="reid model folder") - return parser - - -def compare_dataframes(gts, ts): - accs = [] - names = [] - for k, tsacc in ts.items(): - if k in gts: - logger.info('Comparing {}...'.format(k)) - accs.append(mm.utils.compare_to_groundtruth(gts[k], tsacc, 'iou', distth=0.5)) - names.append(k) - else: - logger.warning('No ground truth for {}, skipping.'.format(k)) - - return accs, names - - -@logger.catch -def main(exp, args, num_gpu): - if args.seed is not None: - random.seed(args.seed) - torch.manual_seed(args.seed) - cudnn.deterministic = True - warnings.warn( - "You have chosen to seed testing. This will turn on the CUDNN deterministic setting, " - ) - - is_distributed = num_gpu > 1 - - # set environment variables for distributed training - cudnn.benchmark = True - - rank = args.local_rank - # rank = get_local_rank() - - file_name = os.path.join(exp.output_dir, args.experiment_name) - - if rank == 0: - os.makedirs(file_name, exist_ok=True) - - results_folder = os.path.join(file_name, "track_results_deepsort") - os.makedirs(results_folder, exist_ok=True) - model_folder = args.model_folder - - setup_logger(file_name, distributed_rank=rank, filename="val_log.txt", mode="a") - logger.info("Args: {}".format(args)) - - if args.conf is not None: - exp.test_conf = args.conf - if args.nms is not None: - exp.nmsthre = args.nms - if args.tsize is not None: - exp.test_size = (args.tsize, args.tsize) - - model = exp.get_model() - logger.info("Model Summary: {}".format(get_model_info(model, exp.test_size))) - #logger.info("Model Structure:\n{}".format(str(model))) - - #evaluator = exp.get_evaluator(args.batch_size, is_distributed, args.test) - - val_loader = exp.get_eval_loader(args.batch_size, is_distributed, args.test) - evaluator = MOTEvaluator( - args=args, - dataloader=val_loader, - img_size=exp.test_size, - confthre=exp.test_conf, - nmsthre=exp.nmsthre, - num_classes=exp.num_classes, - ) - - torch.cuda.set_device(rank) - model.cuda(rank) - model.eval() - - if not args.speed and not args.trt: - if args.ckpt is None: - ckpt_file = os.path.join(file_name, "best_ckpt.pth.tar") - else: - ckpt_file = args.ckpt - logger.info("loading checkpoint") - loc = "cuda:{}".format(rank) - ckpt = torch.load(ckpt_file, map_location=loc) - # load the model state dict - model.load_state_dict(ckpt["model"]) - logger.info("loaded checkpoint done.") - - if is_distributed: - model = DDP(model, device_ids=[rank]) - - if args.fuse: - logger.info("\tFusing model...") - model = fuse_model(model) - - if args.trt: - assert ( - not args.fuse and not is_distributed and args.batch_size == 1 - ), "TensorRT model is not support model fusing and distributed inferencing!" - trt_file = os.path.join(file_name, "model_trt.pth") - assert os.path.exists( - trt_file - ), "TensorRT model is not found!\n Run tools/trt.py first!" 
- model.head.decode_in_inference = False - decoder = model.head.decode_outputs - else: - trt_file = None - decoder = None - - # start evaluate - *_, summary = evaluator.evaluate_deepsort( - model, is_distributed, args.fp16, trt_file, decoder, exp.test_size, results_folder, model_folder - ) - logger.info("\n" + summary) - - # evaluate MOTA - mm.lap.default_solver = 'lap' - - gt_type = '_val_half' - #gt_type = '' - print('gt_type', gt_type) - gtfiles = glob.glob( - os.path.join('datasets/mot/train', '*/gt/gt{}.txt'.format(gt_type))) - print('gt_files', gtfiles) - tsfiles = [f for f in glob.glob(os.path.join(results_folder, '*.txt')) if not os.path.basename(f).startswith('eval')] - - logger.info('Found {} groundtruths and {} test files.'.format(len(gtfiles), len(tsfiles))) - logger.info('Available LAP solvers {}'.format(mm.lap.available_solvers)) - logger.info('Default LAP solver \'{}\''.format(mm.lap.default_solver)) - logger.info('Loading files.') - - gt = OrderedDict([(Path(f).parts[-3], mm.io.loadtxt(f, fmt='mot15-2D', min_confidence=1)) for f in gtfiles]) - ts = OrderedDict([(os.path.splitext(Path(f).parts[-1])[0], mm.io.loadtxt(f, fmt='mot15-2D', min_confidence=-1)) for f in tsfiles]) - - mh = mm.metrics.create() - accs, names = compare_dataframes(gt, ts) - - logger.info('Running metrics') - metrics = ['recall', 'precision', 'num_unique_objects', 'mostly_tracked', - 'partially_tracked', 'mostly_lost', 'num_false_positives', 'num_misses', - 'num_switches', 'num_fragmentations', 'mota', 'motp', 'num_objects'] - summary = mh.compute_many(accs, names=names, metrics=metrics, generate_overall=True) - # summary = mh.compute_many(accs, names=names, metrics=mm.metrics.motchallenge_metrics, generate_overall=True) - # print(mm.io.render_summary( - # summary, formatters=mh.formatters, - # namemap=mm.io.motchallenge_metric_names)) - div_dict = { - 'num_objects': ['num_false_positives', 'num_misses', 'num_switches', 'num_fragmentations'], - 'num_unique_objects': ['mostly_tracked', 'partially_tracked', 'mostly_lost']} - for divisor in div_dict: - for divided in div_dict[divisor]: - summary[divided] = (summary[divided] / summary[divisor]) - fmt = mh.formatters - change_fmt_list = ['num_false_positives', 'num_misses', 'num_switches', 'num_fragmentations', 'mostly_tracked', - 'partially_tracked', 'mostly_lost'] - for k in change_fmt_list: - fmt[k] = fmt['mota'] - print(mm.io.render_summary(summary, formatters=fmt, namemap=mm.io.motchallenge_metric_names)) - - metrics = mm.metrics.motchallenge_metrics + ['num_objects'] - summary = mh.compute_many(accs, names=names, metrics=metrics, generate_overall=True) - print(mm.io.render_summary(summary, formatters=mh.formatters, namemap=mm.io.motchallenge_metric_names)) - logger.info('Completed') - - -if __name__ == "__main__": - args = make_parser().parse_args() - exp = get_exp(args.exp_file, args.name) - exp.merge(args.opts) - - if not args.experiment_name: - args.experiment_name = exp.exp_name - - num_gpu = torch.cuda.device_count() if args.devices is None else args.devices - assert num_gpu <= torch.cuda.device_count() - - launch( - main, - num_gpu, - args.num_machines, - args.machine_rank, - backend=args.dist_backend, - dist_url=args.dist_url, - args=(exp, args, num_gpu), - ) diff --git a/spaces/EDGAhab/Paimon-Talking/text/cleaners.py b/spaces/EDGAhab/Paimon-Talking/text/cleaners.py deleted file mode 100644 index 759db477e3deb72a03ff65957419c3694781b5ef..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/Paimon-Talking/text/cleaners.py +++ /dev/null @@ 
-1,138 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -from phonemizer import phonemize -from pypinyin import Style, pinyin -from pypinyin.style._utils import get_finals, get_initials -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - '''Basic pipeline that lowercases and collapses whitespace without transliteration.''' - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - '''Pipeline for non-English text that transliterates to ASCII.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - '''Pipeline for English text, including abbreviation expansion.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_abbreviations(text) - phonemes = phonemize(text, language='en-us', backend='espeak', strip=True) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_cleaners2(text): - '''Pipeline for English text, including abbreviation expansion. 
+ punctuation + stress''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_abbreviations(text) - phonemes = phonemize(text, language='en-us', backend='espeak', strip=True, preserve_punctuation=True, with_stress=True) - phonemes = collapse_whitespace(phonemes) - return phonemes - - - - -def chinese_cleaners1(text): - from pypinyin import Style, pinyin - - phones = [phone[0] for phone in pinyin(text, style=Style.TONE3)] - return ' '.join(phones) - - -def chinese_cleaners2(text): - phones = [ - p - for phone in pinyin(text, style=Style.TONE3) - for p in [ - get_initials(phone[0], strict=True), - get_finals(phone[0][:-1], strict=True) + phone[0][-1] - if phone[0][-1].isdigit() - else get_finals(phone[0], strict=True) - if phone[0][-1].isalnum() - else phone[0], - ] - # Remove the case of individual tones as a phoneme - if len(p) != 0 and not p.isdigit() - ] - return phones - # return phonemes - -if __name__ == '__main__': - res = chinese_cleaners2('这是语音测试!') - print(res) - res = chinese_cleaners1('"第一,南京不是发展的不行,是大家对他期望很高,') - print(res) - - - res = english_cleaners2('this is a club test for one train.GDP') - print(res) \ No newline at end of file diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/slicer2.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/slicer2.py deleted file mode 100644 index 5b29ee262aa54045e807be2cffeb41687499ba58..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/infer/lib/slicer2.py +++ /dev/null @@ -1,260 +0,0 @@ -import numpy as np - - -# This function is obtained from librosa. -def get_rms( - y, - frame_length=2048, - hop_length=512, - pad_mode="constant", -): - padding = (int(frame_length // 2), int(frame_length // 2)) - y = np.pad(y, padding, mode=pad_mode) - - axis = -1 - # put our new within-frame axis at the end for now - out_strides = y.strides + tuple([y.strides[axis]]) - # Reduce the shape on the framing axis - x_shape_trimmed = list(y.shape) - x_shape_trimmed[axis] -= frame_length - 1 - out_shape = tuple(x_shape_trimmed) + tuple([frame_length]) - xw = np.lib.stride_tricks.as_strided(y, shape=out_shape, strides=out_strides) - if axis < 0: - target_axis = axis - 1 - else: - target_axis = axis + 1 - xw = np.moveaxis(xw, -1, target_axis) - # Downsample along the target axis - slices = [slice(None)] * xw.ndim - slices[axis] = slice(0, None, hop_length) - x = xw[tuple(slices)] - - # Calculate power - power = np.mean(np.abs(x) ** 2, axis=-2, keepdims=True) - - return np.sqrt(power) - - -class Slicer: - def __init__( - self, - sr: int, - threshold: float = -40.0, - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000, - ): - if not min_length >= min_interval >= hop_size: - raise ValueError( - "The following condition must be satisfied: min_length >= min_interval >= hop_size" - ) - if not max_sil_kept >= hop_size: - raise ValueError( - "The following condition must be satisfied: max_sil_kept >= hop_size" - ) - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.0) - self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[ - :, begin * self.hop_size : min(waveform.shape[1], end * 
self.hop_size) - ] - else: - return waveform[ - begin * self.hop_size : min(waveform.shape[0], end * self.hop_size) - ] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = waveform.mean(axis=0) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return [waveform] - rms_list = get_rms( - y=samples, frame_length=self.win_size, hop_length=self.hop_size - ).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = ( - i - silence_start >= self.min_interval - and i - clip_start >= self.min_length - ) - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. - if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start : i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[ - i - self.max_sil_kept : silence_start + self.max_sil_kept + 1 - ].argmin() - pos += i - self.max_sil_kept - pos_l = ( - rms_list[ - silence_start : silence_start + self.max_sil_kept + 1 - ].argmin() - + silence_start - ) - pos_r = ( - rms_list[i - self.max_sil_kept : i + 1].argmin() - + i - - self.max_sil_kept - ) - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = ( - rms_list[ - silence_start : silence_start + self.max_sil_kept + 1 - ].argmin() - + silence_start - ) - pos_r = ( - rms_list[i - self.max_sil_kept : i + 1].argmin() - + i - - self.max_sil_kept - ) - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if ( - silence_start is not None - and total_frames - silence_start >= self.min_interval - ): - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start : silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. 
- if len(sil_tags) == 0: - return [waveform] - else: - chunks = [] - if sil_tags[0][0] > 0: - chunks.append(self._apply_slice(waveform, 0, sil_tags[0][0])) - for i in range(len(sil_tags) - 1): - chunks.append( - self._apply_slice(waveform, sil_tags[i][1], sil_tags[i + 1][0]) - ) - if sil_tags[-1][1] < total_frames: - chunks.append( - self._apply_slice(waveform, sil_tags[-1][1], total_frames) - ) - return chunks - - -def main(): - import os.path - from argparse import ArgumentParser - - import librosa - import soundfile - - parser = ArgumentParser() - parser.add_argument("audio", type=str, help="The audio to be sliced") - parser.add_argument( - "--out", type=str, help="Output directory of the sliced audio clips" - ) - parser.add_argument( - "--db_thresh", - type=float, - required=False, - default=-40, - help="The dB threshold for silence detection", - ) - parser.add_argument( - "--min_length", - type=int, - required=False, - default=5000, - help="The minimum milliseconds required for each sliced audio clip", - ) - parser.add_argument( - "--min_interval", - type=int, - required=False, - default=300, - help="The minimum milliseconds for a silence part to be sliced", - ) - parser.add_argument( - "--hop_size", - type=int, - required=False, - default=10, - help="Frame length in milliseconds", - ) - parser.add_argument( - "--max_sil_kept", - type=int, - required=False, - default=500, - help="The maximum silence length kept around the sliced clip, presented in milliseconds", - ) - args = parser.parse_args() - out = args.out - if out is None: - out = os.path.dirname(os.path.abspath(args.audio)) - audio, sr = librosa.load(args.audio, sr=None, mono=False) - slicer = Slicer( - sr=sr, - threshold=args.db_thresh, - min_length=args.min_length, - min_interval=args.min_interval, - hop_size=args.hop_size, - max_sil_kept=args.max_sil_kept, - ) - chunks = slicer.slice(audio) - if not os.path.exists(out): - os.makedirs(out) - for i, chunk in enumerate(chunks): - if len(chunk.shape) > 1: - chunk = chunk.T - soundfile.write( - os.path.join( - out, - f"%s_%d.wav" - % (os.path.basename(args.audio).rsplit(".", maxsplit=1)[0], i), - ), - chunk, - sr, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/Ekimetrics/Biomap/biomap/plot_functions.py b/spaces/Ekimetrics/Biomap/biomap/plot_functions.py deleted file mode 100644 index df98be66bb73606a7d3384345ce58a39f1b1b560..0000000000000000000000000000000000000000 --- a/spaces/Ekimetrics/Biomap/biomap/plot_functions.py +++ /dev/null @@ -1,502 +0,0 @@ -from PIL import Image - -import matplotlib as mpl -from utils import prep_for_plot - -import torch.multiprocessing -import torchvision.transforms as T - -from utils_gee import extract_img, transform_ee_img - -import plotly.graph_objects as go -import plotly.express as px -import numpy as np -from plotly.subplots import make_subplots - -import os -os.environ['KMP_DUPLICATE_LIB_OK'] = 'True' - -colors = ('red', 'palegreen', 'green', 'steelblue', 'blue', 'yellow', 'lightgrey') -class_names = ('Buildings', 'Cultivation', 'Natural green', 'Wetland', 'Water', 'Infrastructure', 'Background') -cmap = mpl.colors.ListedColormap(colors) - -colors = ('red', 'palegreen', 'green', 'steelblue', 'blue', 'yellow', 'lightgrey') -class_names = ('Buildings', 'Cultivation', 'Natural green', 'Wetland', 'Water', 'Infrastructure', 'Background') -scores_init = [1,2,4,3,4,1,0] - -# Function that look for img on EE and segment it -# -- 3 ways possible to avoid cloudy environment -- monthly / bi-monthly / yearly meaned img -def 
segment_loc(model, location, month, year, how = "month", month_end = '12', year_end = None) : - if how == 'month': - img = extract_img(location, year +'-'+ month +'-01', year +'-'+ month +'-28') - elif how == 'year' : - if year_end == None : - img = extract_img(location, year +'-'+ month +'-01', year +'-'+ month_end +'-28', width = 0.04 , len = 0.04) - else : - img = extract_img(location, year +'-'+ month +'-01', year_end +'-'+ month_end +'-28', width = 0.04 , len = 0.04) - - img_test= transform_ee_img(img, max = 0.25) - - # Preprocess opened img - x = preprocess(img_test) - x = torch.unsqueeze(x, dim=0).cpu() - # model=model.cpu() - - with torch.no_grad(): - feats, code = model.net(x) - linear_preds = model.linear_probe(x, code) - linear_preds = linear_preds.argmax(1) - outputs = { - 'img': x[:model.cfg.n_images].detach().cpu(), - 'linear_preds': linear_preds[:model.cfg.n_images].detach().cpu() - } - return outputs - - -# Function that look for all img on EE and extract all segments with the date as first output arg - -def segment_group(location, start_date, end_date, how = 'month') : - outputs = [] - st_month = int(start_date[5:7]) - end_month = int(end_date[5:7]) - - st_year = int(start_date[0:4]) - end_year = int(end_date[0:4]) - - - - for year in range(st_year, end_year+1) : - - if year != end_year : - last = 12 - else : - last = end_month - - if year != st_year: - start = 1 - else : - start = st_month - - if how == 'month' : - for month in range(start, last + 1): - month_str = f"{month:0>2d}" - year_str = str(year) - - outputs.append((year_str + '-' + month_str, segment_loc(location, month_str, year_str))) - - elif how == 'year' : - outputs.append((str(year) + '-' + f"{start:0>2d}", segment_loc(location, f"{start:0>2d}", str(year), how = 'year', month_end=f"{last:0>2d}"))) - - elif how == '2months' : - for month in range(start, last + 1): - month_str = f"{month:0>2d}" - year_str = str(year) - month_end = (month) % 12 +1 - if month_end < month : - year_end = year +1 - else : - year_end = year - month_end= f"{month_end:0>2d}" - year_end = str(year_end) - - outputs.append((year_str + '-' + month_str, segment_loc(location, month_str, year_str,how = 'year', month_end=month_end, year_end=year_end))) - - - return outputs - -def values_from_output(output): - imgs = transform_to_pil(output, alpha = 0.3) - - img = imgs[0] - img = np.array(img.convert('RGB')) - - labeled_img = imgs[2] - labeled_img = np.array(labeled_img.convert('RGB')) - - nb_values = [] - for i in range(7): - nb_values.append(np.count_nonzero(output['linear_preds'][0] == i+1)) - - score = sum(x * y for x, y in zip(scores_init, nb_values)) / sum(nb_values) / max(scores_init) - - return img, labeled_img, nb_values, score - - -# Function that extract from outputs (from segment_group function) all dates/ all images -def values_from_outputs(outputs) : - months = [] - imgs = [] - imgs_label = [] - nb_values = [] - scores = [] - - for output in outputs: - img, labeled_img, nb_value, score = values_from_output(output[1]) - months.append(output[0]) - imgs.append(img) - imgs_label.append(labeled_img) - nb_values.append(nb_value) - scores.append(score) - - return months, imgs, imgs_label, nb_values, scores - - - -def plot_imgs_labels(months, imgs, imgs_label, nb_values, scores) : - - fig2 = px.imshow(np.array(imgs), animation_frame=0, binary_string=True) - fig3 = px.imshow(np.array(imgs_label), animation_frame=0, binary_string=True) - - # Scores - scatters = [] - temp = [] - for score in scores : - temp_score = [] - temp_date = [] - 
score = scores[i] - temp.append(score) - text_temp = ["" for i in temp] - text_temp[-1] = str(round(score,2)) - scatters.append(go.Scatter(x=text_temp, y=temp, mode="lines+markers+text", marker_color="black", text = text_temp, textposition="top center")) - - - # Scores - fig = make_subplots( - rows=1, cols=4, - # specs=[[{"rowspan": 2}, {"rowspan": 2}, {"type": "pie"}, None]] - # row_heights=[0.8, 0.2], - column_widths = [0.6, 0.6,0.3, 0.3], - subplot_titles=("Localisation visualization", "labeled visualisation", "Segments repartition", "Biodiversity scores") - ) - - fig.add_trace(fig2["frames"][0]["data"][0], row=1, col=1) - fig.add_trace(fig3["frames"][0]["data"][0], row=1, col=2) - - fig.add_trace(go.Pie(labels = class_names, - values = nb_values[0], - marker_colors = colors, - name="Segment repartition", - textposition='inside', - texttemplate = "%{percent:.0%}", - textfont_size=14 - ), - row=1, col=3) - - - fig.add_trace(scatters[0], row=1, col=4) - # fig.add_annotation(text='score:' + str(scores[0]), - # showarrow=False, - # row=2, col=2) - - - number_frames = len(imgs) - frames = [dict( - name = k, - data = [ fig2["frames"][k]["data"][0], - fig3["frames"][k]["data"][0], - go.Pie(labels = class_names, - values = nb_values[k], - marker_colors = colors, - name="Segment repartition", - textposition='inside', - texttemplate = "%{percent:.0%}", - textfont_size=14 - ), - scatters[k] - ], - traces=[0, 1,2,3] # the elements of the list [0,1,2] give info on the traces in fig.data - # that are updated by the above three go.Scatter instances - ) for k in range(number_frames)] - - updatemenus = [dict(type='buttons', - buttons=[dict(label='Play', - method='animate', - args=[[f'{k}' for k in range(number_frames)], - dict(frame=dict(duration=500, redraw=False), - transition=dict(duration=0), - easing='linear', - fromcurrent=True, - mode='immediate' - )])], - direction= 'left', - pad=dict(r= 10, t=85), - showactive =True, x= 0.1, y= 0.13, xanchor= 'right', yanchor= 'top') - ] - - sliders = [{'yanchor': 'top', - 'xanchor': 'left', - 'currentvalue': {'font': {'size': 16}, 'prefix': 'Frame: ', 'visible': False, 'xanchor': 'right'}, - 'transition': {'duration': 500.0, 'easing': 'linear'}, - 'pad': {'b': 10, 't': 50}, - 'len': 0.9, 'x': 0.1, 'y': 0, - 'steps': [{'args': [[k], {'frame': {'duration': 500.0, 'easing': 'linear', 'redraw': False}, - 'transition': {'duration': 0, 'easing': 'linear'}}], - 'label': months[k], 'method': 'animate'} for k in range(number_frames) - ]}] - - - fig.update(frames=frames) - - for i,fr in enumerate(fig["frames"]): - fr.update( - layout={ - "xaxis": { - "range": [0,imgs[0].shape[1]+i/100000] - }, - "yaxis": { - "range": [imgs[0].shape[0]+i/100000,0] - }, - }) - - fr.update(layout_title_text= months[i]) - - - fig.update(layout_title_text= 'tot') - fig.update( - layout={ - "xaxis": { - "range": [0,imgs[0].shape[1]+i/100000], - 'showgrid': False, # thin lines in the background - 'zeroline': False, # thick line at x=0 - 'visible': False, # numbers below - }, - - "yaxis": { - "range": [imgs[0].shape[0]+i/100000,0], - 'showgrid': False, # thin lines in the background - 'zeroline': False, # thick line at y=0 - 'visible': False,}, - - "xaxis3": { - "range": [0,len(scores)+1], - 'autorange': False, # thin lines in the background - 'showgrid': False, # thin lines in the background - 'zeroline': False, # thick line at y=0 - 'visible': False - }, - - "yaxis3": { - "range": [0,1.5], - 'autorange': False, - 'showgrid': False, # thin lines in the background - 'zeroline': False, # thick 
line at y=0 - 'visible': False # thin lines in the background - } - }, - legend=dict( - yanchor="bottom", - y=0.99, - xanchor="center", - x=0.01 - ) - ) - - - fig.update_layout(updatemenus=updatemenus, - sliders=sliders) - - fig.update_layout(margin=dict(b=0, r=0)) - - # fig.show() #in jupyter notebook - - return fig - - - -# Last function (global one) -# how = 'month' or '2months' or 'year' - -def segment_region(location, start_date, end_date, how = 'month'): - - #extract the outputs for each image - outputs = segment_group(location, start_date, end_date, how = how) - - #extract the intersting values from image - months, imgs, imgs_label, nb_values, scores = values_from_outputs(outputs) - - #Create the figure - fig = plot_imgs_labels(months, imgs, imgs_label, nb_values, scores) - - return fig -#normalize img -preprocess = T.Compose([ - T.ToPILImage(), - T.Resize((320,320)), -# T.CenterCrop(224), - T.ToTensor(), - T.Normalize( - mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225] - ) -]) - -# Function that look for img on EE and segment it -# -- 3 ways possible to avoid cloudy environment -- monthly / bi-monthly / yearly meaned img - -def segment_loc(model,location, month, year, how = "month", month_end = '12', year_end = None) : - if how == 'month': - img = extract_img(location, year +'-'+ month +'-01', year +'-'+ month +'-28') - elif how == 'year' : - if year_end == None : - img = extract_img(location, year +'-'+ month +'-01', year +'-'+ month_end +'-28', width = 0.04 , len = 0.04) - else : - img = extract_img(location, year +'-'+ month +'-01', year_end +'-'+ month_end +'-28', width = 0.04 , len = 0.04) - - - img_test= transform_ee_img(img, max = 0.25) - - # Preprocess opened img - x = preprocess(img_test) - x = torch.unsqueeze(x, dim=0).cpu() - # model=model.cpu() - - with torch.no_grad(): - feats, code = model.net(x) - linear_preds = model.linear_probe(x, code) - linear_preds = linear_preds.argmax(1) - outputs = { - 'img': x[:model.cfg.n_images].detach().cpu(), - 'linear_preds': linear_preds[:model.cfg.n_images].detach().cpu() - } - return outputs - - -# Function that look for all img on EE and extract all segments with the date as first output arg - -def segment_group(location, start_date, end_date, how = 'month') : - outputs = [] - st_month = int(start_date[5:7]) - end_month = int(end_date[5:7]) - - st_year = int(start_date[0:4]) - end_year = int(end_date[0:4]) - - - - for year in range(st_year, end_year+1) : - - if year != end_year : - last = 12 - else : - last = end_month - - if year != st_year: - start = 1 - else : - start = st_month - - if how == 'month' : - for month in range(start, last + 1): - month_str = f"{month:0>2d}" - year_str = str(year) - - outputs.append((year_str + '-' + month_str, segment_loc(location, month_str, year_str))) - - elif how == 'year' : - outputs.append((str(year) + '-' + f"{start:0>2d}", segment_loc(location, f"{start:0>2d}", str(year), how = 'year', month_end=f"{last:0>2d}"))) - - elif how == '2months' : - for month in range(start, last + 1): - month_str = f"{month:0>2d}" - year_str = str(year) - month_end = (month) % 12 +1 - if month_end < month : - year_end = year +1 - else : - year_end = year - month_end= f"{month_end:0>2d}" - year_end = str(year_end) - - outputs.append((year_str + '-' + month_str, segment_loc(location, month_str, year_str,how = 'year', month_end=month_end, year_end=year_end))) - - - return outputs - - -# Function that transforms an output to PIL images - -def transform_to_pil(outputs,alpha=0.3): - # Transform img with torch - 
img = torch.moveaxis(prep_for_plot(outputs['img'][0]),-1,0) - img=T.ToPILImage()(img) - - # Transform label by saving it then open it - # label = outputs['linear_preds'][0] - # plt.imsave('label.png',label,cmap=cmap) - # label = Image.open('label.png') - - cmaplist = np.array([np.array(cmap(i)) for i in range(cmap.N)]) - labels = np.array(outputs['linear_preds'][0])-1 - label = T.ToPILImage()((cmaplist[labels]*255).astype(np.uint8)) - - - # Overlay labels with img wit alpha - background = img.convert("RGBA") - overlay = label.convert("RGBA") - - labeled_img = Image.blend(background, overlay, alpha) - - return img, label, labeled_img - -def values_from_output(output): - imgs = transform_to_pil(output,alpha = 0.3) - - img = imgs[0] - img = np.array(img.convert('RGB')) - - labeled_img = imgs[2] - labeled_img = np.array(labeled_img.convert('RGB')) - - nb_values = [] - for i in range(7): - nb_values.append(np.count_nonzero(output['linear_preds'][0] == i+1)) - - score = sum(x * y for x, y in zip(scores_init, nb_values)) / sum(nb_values) / max(scores_init) - - return img, labeled_img, nb_values, score - - -# Function that extract labeled_img(PIL) and nb_values(number of pixels for each class) and the score for each observation - - - -# Function that extract from outputs (from segment_group function) all dates/ all images -def values_from_outputs(outputs) : - months = [] - imgs = [] - imgs_label = [] - nb_values = [] - scores = [] - - for output in outputs: - img, labeled_img, nb_value, score = values_from_output(output[1]) - months.append(output[0]) - imgs.append(img) - imgs_label.append(labeled_img) - nb_values.append(nb_value) - scores.append(score) - - return months, imgs, imgs_label, nb_values, scores - - - - - -# Last function (global one) -# how = 'month' or '2months' or 'year' - -def segment_region(latitude, longitude, start_date, end_date, how = 'month'): - location = [float(latitude),float(longitude)] - how = how[0] - #extract the outputs for each image - outputs = segment_group(location, start_date, end_date, how = how) - - #extract the intersting values from image - months, imgs, imgs_label, nb_values, scores = values_from_outputs(outputs) - print(months, imgs, imgs_label, nb_values, scores) - - - #Create the figure - fig = plot_imgs_labels(months, imgs, imgs_label, nb_values, scores) - - return fig \ No newline at end of file diff --git a/spaces/EuroPython2022/mmocr-demo/app.py b/spaces/EuroPython2022/mmocr-demo/app.py deleted file mode 100644 index 78f0ae9ca68604981e4461e54078fbe12ffbde93..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/app.py +++ /dev/null @@ -1,21 +0,0 @@ -import mmocr -import gradio as gr -import os -from mmocr.utils.ocr import MMOCR - - -config_dir = os.path.join( - os.path.dirname(__file__), 'configs/') - -# TODO: Put more models on HF hub. 
-ocr = MMOCR(config_dir=config_dir) - - -def infer(image): - # TODO: Also display bounding boxes interactively - return ocr.readtext(image, output='.', print_result=True, imshow=False) - - -# TODO: a drop down list for model selection -iface = gr.Interface(fn=infer, inputs="image", outputs="json") -iface.launch() diff --git a/spaces/Ezi/ModelCardsAnalysis/app.py b/spaces/Ezi/ModelCardsAnalysis/app.py deleted file mode 100644 index 822d3a71c929097ea58861f840c568bcff3dc028..0000000000000000000000000000000000000000 --- a/spaces/Ezi/ModelCardsAnalysis/app.py +++ /dev/null @@ -1,315 +0,0 @@ -import streamlit as st -from pathlib import Path -import base64 -from transformers import pipeline, set_seed -#from huggingface_hub.inference_api import InferenceApi - - -# Initial page config - -st.set_page_config( - page_title='Model Cards Mockup', - layout="wide", - initial_sidebar_state="expanded", -) - -def main(): - cs_sidebar() - cs_body() - #load_model() - - return None - -# Thanks to streamlitopedia for the following code snippet - -def img_to_bytes(img_path): - img_bytes = Path(img_path).read_bytes() - encoded = base64.b64encode(img_bytes).decode() - return encoded - -# sidebar - -def load_model(): - generator = pipeline('text-generation', model='distilgpt2') - set_seed(48) - text = st.text_input('Provide an initial text prompt') - - if text != '' : - out = generator(text, max_length=0, num_return_sequences=1) - -def cs_sidebar(): - - #limitations & Risks - - with st.sidebar.header('Limitations and Risks'): - st.sidebar.markdown(''' - As the developers of GPT-2 (OpenAI) note in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md), "language models like GPT-2 reflect the biases inherent to the systems they were trained on." Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). - - ''') - with st.sidebar.subheader(''): - st.sidebar.markdown(''' - - DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context. - ''') - with st.expander(" Distillation Bias"): - st.markdown(''' - - The impact of model compression techniques - such as knowledge distillation - on bias and fairness issues associated with language models is an active area of research. For example: - - - [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models. - - [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). - - [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. 
- ''') - with st.sidebar.subheader(''): - st.sidebar.markdown(''' ''') - with st.expander("Demonstrated Examples"): - st.markdown(''' - - - ```python - >>> from transformers import pipeline, set_seed - >>> generator = pipeline('text-generation', model='distilgpt2') - >>> set_seed(48) - >>> generator("The White man worked as a", max_length=20, num_return_sequences=3) - [{'generated_text': "The White man worked as a salesman at a McDonald's restaurant called Kia at the time of the"}, - {'generated_text': 'The White man worked as a contractor in the Army in the late 1990s. He became a "'}, - {'generated_text': 'The White man worked as a police spokesman to the US Navy in the 1930s.'}] - - >>> set_seed(48) - >>> generator("The Black man worked as a", max_length=20, num_return_sequences=3) - [{'generated_text': 'The Black man worked as a shop assistant for an hour at Wal-Mart at Wal-Mart in'}, - {'generated_text': 'The Black man worked as a waiter in the hotel when he was assaulted when he got out of a'}, - {'generated_text': 'The Black man worked as a police spokesman four months ago...'}] - ``` - ''') - - - - - # Environmental Impact - with st.sidebar.header('Environmental Impact'): - st.sidebar.markdown(''' *Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) -presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region -were utilized to estimate the carbon impact.* - ''') - - with st.sidebar.subheader('Environmental Impact'): - st.warning('This is a warning') - # Object notation - st.subheader('🌲') - with st.expander("🌍"): - st.markdown(''' - - - **Hardware Type:** 8 16GB V100 - - **Hours used:** 168 (1 week) - - **Cloud Provider:** Azure - - **Compute Region:** unavailable, assumed East US for calculations - - **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2 - - ''') - - - - return None - -########################## -# Main body of cheat sheet -########################## - -def cs_body(): - # Model Cards - - col1, col2= st.columns(2) - - col1.subheader('DistilGPT2') - col1.markdown('''DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the -smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. -Users of this model card should also consider information about the design, training, and limitations of [GPT-2] - ''') - - # Model Details - - col1.subheader('Model Details') - col1.markdown(''' -**Developed by:** Hugging Face -- **Model type:** Transformer-based Language Model -- **Language:** English -- **License:** Apache 2.0 -- **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2. -- **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. 
(2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/). - - ''') - - - col1.subheader('Potential Uses') - col1.markdown(''' - - - - -Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. - -The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: - -> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)* -> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.* -> - *Entertainment: Creation of games, chat bots, and amusing generations.* - -Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser. - ''') - col1.subheader('Out-of-scope Uses') - col1.markdown(''' - - -OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md): - -> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don't support use-cases that require the generated text to be true. -> -> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. - - ''') - - - # Training Data - - col1.subheader('Training Data') - col1.markdown(''' - DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of - OpenAI's WebText dataset, which was used to train GPT-2. - See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about - OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) - for additional information about WebText. - - ''') - - # Training Procedure - - col1.subheader('Training Procedure') - col1.markdown(''' -The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was -trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more -detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108). - ''') - - - - # Evaluation Results - - col1.subheader('Evaluation Results') - col1.markdown(''' -The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) -that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, -GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set). 
- - ''') - - - - # Citation - - col1.subheader('Citation') - col1.markdown(''' -```bibtex -@inproceedings{sanh2019distilbert, - title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, - author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas}, - booktitle={NeurIPS EMC^2 Workshop}, - year={2019} -} -``` - ''') - - # Glossary - - col1.subheader('Glossary') - col1.markdown(''' - **Knowledge Distillation**: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), "knowledge distillation is a compression technique in which a compact model - the student - is trained to reproduce the behavior of a larger model - the teacher - or an ensemble of models." Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531). - - ''') - - - ################################ - ## Column 2: right most column - ################################ - - - - # How to Get Started - - with col2.subheader('How to Get Started'): - col2.markdown(''' - *Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.* - ''') - with col2.expander(""): - st.markdown(''' - -Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: - -```python ->>> from transformers import pipeline, set_seed ->>> generator = pipeline('text-generation', model='distilgpt2') ->>> set_seed(42) ->>> generator("Hello, I'm a language model", max_length=20, num_return_sequences=5) -Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. -[{'generated_text': "Hello, I'm a language model, I'm a language model. In my previous post I've"}, -{'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."}, -{'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"}, -{'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"}, -{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}] -``` - - -**Here is how to use this model to get the features of a given text in PyTorch**: - -```python -from transformers import GPT2Tokenizer, GPT2Model -tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2') -model = GPT2Model.from_pretrained('distilgpt2') -text = "Replace me by any text you'd like." -encoded_input = tokenizer(text, return_tensors='pt') -output = model(**encoded_input) -``` - -**And in TensorFlow:** - -```python -from transformers import GPT2Tokenizer, TFGPT2Model -tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2') -model = TFGPT2Model.from_pretrained('distilgpt2') -text = "Replace me by any text you'd like." 
-encoded_input = tokenizer(text, return_tensors='tf') -output = model(encoded_input) -``` - - ''') - - - # Try App - - col2.header('Try out DistilGPT2') - #print load_model() - with col2.subheader(''): - generator = pipeline('text-generation', model='distilgpt2') - set_seed(48) - text = st.text_input('Text Generation: Provide an initial text prompt') - if text != '' : - out = generator(text, max_length=30, num_return_sequences=1) - col2.write(out) - - - - # Contact Section - - with col2.header('Further Contact'): - url = "https://huggingface.co/spaces/Ezi/ModelCardsAnalysis/discussions" - col2.markdown("Further contact, input and/or questions are welcomed [here](%s) 👏" % url) - - - - - - return None - -# Run main() - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/Fan-611177107/bigscience-bloomz-7b1-mt/README.md b/spaces/Fan-611177107/bigscience-bloomz-7b1-mt/README.md deleted file mode 100644 index 61054578d364eff2608a9e8ec2ac4bf83817e78a..0000000000000000000000000000000000000000 --- a/spaces/Fan-611177107/bigscience-bloomz-7b1-mt/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bigscience Bloomz 7b1 Mt -emoji: 🐢 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/inference/infer_tool_grad.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/inference/infer_tool_grad.py deleted file mode 100644 index f2587d98209abcbdd6d199ca3142dcbc69a87428..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/inference/infer_tool_grad.py +++ /dev/null @@ -1,171 +0,0 @@ -import hashlib -import json -import logging -import os -import time -from pathlib import Path -import io -import librosa -import maad -import numpy as np -from inference import slicer -import parselmouth -import soundfile -import torch -import torchaudio - -# from hubert import hubert_model -import utils -from models import SynthesizerTrn -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - - -def resize2d_f0(x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)), - source) - res = np.nan_to_num(target) - return res - - -def get_f0(x, p_len, f0_up_key=0): - - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = parselmouth.Sound(x, 16000).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size = (p_len - len(f0) + 1) // 2 - if(pad_size > 0 or p_len - len(f0) - pad_size > 0): - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode='constant') - - f0 *= pow(2, f0_up_key / 12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * \ - 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0 - - -def clean_pitch(input_pitch): - num_nan = np.sum(input_pitch == 1) - if num_nan / len(input_pitch) > 0.9: - input_pitch[input_pitch != 1] = 1 - return input_pitch - - -def plt_pitch(input_pitch): - input_pitch = 
input_pitch.astype(float) - input_pitch[input_pitch == 1] = np.nan - return input_pitch - - -def f0_to_pitch(ff): - f0_pitch = 69 + 12 * np.log2(ff / 440) - return f0_pitch - - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - - -class VitsSvc(object): - def __init__(self): - self.device = torch.device( - "cuda" if torch.cuda.is_available() else "cpu") - self.SVCVITS = None - self.hps = None - self.speakers = None - self.hubert_soft = utils.get_hubert_model() - - def set_device(self, device): - self.device = torch.device(device) - self.hubert_soft.to(self.device) - if self.SVCVITS != None: - self.SVCVITS.to(self.device) - - def loadCheckpoint(self, path): - self.hps = utils.get_hparams_from_file( - f"checkpoints/{path}/config.json") - self.SVCVITS = SynthesizerTrn( - self.hps.data.filter_length // 2 + 1, - self.hps.train.segment_size // self.hps.data.hop_length, - **self.hps.model) - _ = utils.load_checkpoint( - f"checkpoints/{path}/model.pth", self.SVCVITS, None) - _ = self.SVCVITS.eval().to(self.device) - self.speakers = self.hps.spk - - def get_units(self, source, sr): - source = source.unsqueeze(0).to(self.device) - with torch.inference_mode(): - units = self.hubert_soft.units(source) - return units - - def get_unit_pitch(self, in_path, tran): - source, sr = torchaudio.load(in_path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - soft = self.get_units(source, sr).squeeze(0).cpu().numpy() - f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran) - return soft, f0 - - def infer(self, speaker_id, tran, raw_path): - speaker_id = self.speakers[speaker_id] - sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0) - soft, pitch = self.get_unit_pitch(raw_path, tran) - f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device) - stn_tst = torch.FloatTensor(soft) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0).to(self.device) - x_tst = torch.repeat_interleave( - x_tst, repeats=2, dim=1).transpose(1, 2) - audio, _ = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[ - 0, 0].data.float() - return audio, audio.shape[-1] - - def inference(self, srcaudio, chara, tran, slice_db): - sampling_rate, audio = srcaudio - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample( - audio, orig_sr=sampling_rate, target_sr=16000) - soundfile.write("tmpwav.wav", audio, 16000, format="wav") - chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks) - audio = [] - for (slice_tag, data) in audio_data: - length = int(np.ceil(len(data) / audio_sr * - self.hps.data.sampling_rate)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - _audio = np.zeros(length) - else: - out_audio, out_sr = self.infer(chara, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - audio = (np.array(audio) * 32768.0).astype('int16') - return (self.hps.data.sampling_rate, audio) diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/setting/SettingLoader.py b/spaces/GaenKoki/voicevox/voicevox_engine/setting/SettingLoader.py deleted file mode 
100644 index a78952e96835f5bcf2b4fee23d38312dfa2ca573..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/voicevox_engine/setting/SettingLoader.py +++ /dev/null @@ -1,33 +0,0 @@ -from pathlib import Path - -import yaml - -from ..utility import engine_root, get_save_dir -from .Setting import Setting - -DEFAULT_SETTING_PATH: Path = engine_root() / "default_setting.yml" -USER_SETTING_PATH: Path = get_save_dir() / "setting.yml" - - -class SettingLoader: - def __init__(self, setting_file_path: Path) -> None: - self.setting_file_path = setting_file_path - - def load_setting_file(self) -> Setting: - if not self.setting_file_path.is_file(): - setting = yaml.safe_load(DEFAULT_SETTING_PATH.read_text(encoding="utf-8")) - else: - setting = yaml.safe_load(self.setting_file_path.read_text(encoding="utf-8")) - - setting = Setting( - cors_policy_mode=setting["cors_policy_mode"], - allow_origin=setting["allow_origin"], - ) - - return setting - - def dump_setting_file(self, settings: Setting) -> None: - settings_dict = settings.dict() - - with open(self.setting_file_path, mode="w", encoding="utf-8") as f: - yaml.safe_dump(settings_dict, f) diff --git a/spaces/Giuliano/Conversational-Datasets/app.py b/spaces/Giuliano/Conversational-Datasets/app.py deleted file mode 100644 index 984ff5a8f3eca9f1d9851dfa5acda3d13f3cec07..0000000000000000000000000000000000000000 --- a/spaces/Giuliano/Conversational-Datasets/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import os -os.system('pip install gradio==2.3.5b0') -os.system('pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.0+${CUDA}.html') - - -import gradio as gr - -from transformers import pipeline - -import pandas as pd - - -table = pd.DataFrame() -tqa = pipeline(task="table-question-answering", model="google/tapas-base-finetuned-wtq") - - -def chat(message): - history = gr.get_state() or [] - - - global table - - if message.startswith('http'): - table = pd.read_csv(message) - table = table.astype(str) - response = 'thank you to give me a dataset... now you can ask questions about it' - - elif table.empty: - response = 'Hi! You still have not given me the url of a dataset in csv format. Send a url of a csv file and then ask as many questions as you want about it. If you want to talk about another dataset, just send a new link.' - - else: - response = tqa(table=table, query=message)["answer"] - - - - history.append((message, response)) - gr.set_state(history) - html = "
" - for user_msg, resp_msg in history: - html += f"
{user_msg}
" - html += f"
{resp_msg}
" - html += "
" - return html - -iface = gr.Interface(chat, "text", "html", css=""" - .chatbox {display:flex;flex-direction:column} - .user_msg, .resp_msg {padding:4px;margin-bottom:4px;border-radius:4px;width:80%} - .user_msg {background-color:cornflowerblue;color:white;align-self:start} - .resp_msg {background-color:lightgray;align-self:self-end} -""", allow_screenshot=False, allow_flagging=False) -if __name__ == "__main__": - iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/__init__.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/__init__.py deleted file mode 100644 index fc2efc8d3e1439f8d264268adcde82231f784636..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Alphafold model.""" diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco.py deleted file mode 100644 index 89a0d7b2bd83216dfc4db120fe9f610b23376681..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco.py +++ /dev/null @@ -1,41 +0,0 @@ -_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' -# model settings -model = dict( - neck=[ - dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - dict( - type='BFP', - in_channels=256, - num_levels=5, - refine_level=2, - refine_type='non_local') - ], - roi_head=dict( - bbox_head=dict( - loss_bbox=dict( - _delete_=True, - type='BalancedL1Loss', - alpha=0.5, - gamma=1.5, - beta=1.0, - loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict(sampler=dict(neg_pos_ub=5), allowed_border=-1), - rcnn=dict( - sampler=dict( - _delete_=True, - type='CombinedSampler', - num=512, - pos_fraction=0.25, - add_gt_as_proposals=True, - pos_sampler=dict(type='InstanceBalancedPosSampler'), - neg_sampler=dict( - type='IoUBalancedNegSampler', - floor_thr=-1, - floor_fraction=0, - num_bins=3))))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59.py deleted file mode 100644 index 36a510ff41788a5861b5a9504d8e3d08502072e4..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './deeplabv3plus_r50-d8_480x480_40k_pascal_context_59.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git 
a/spaces/HaloMaster/chinesesummary/fengshen/models/roformer/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/models/roformer/__init__.py deleted file mode 100644 index c55c090f25446ec2cf60d632dacdb53a8928e25e..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/models/roformer/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import TYPE_CHECKING - -from transformers.file_utils import _LazyModule, is_torch_available - - -_import_structure = { - "configuration_roformer": ["RoFormerConfig"], - "tokenization_roformer": ["RoFormerTokenizer"], -} - -if is_torch_available(): - _import_structure["modeling_roformer"] = [ - "RoFormerModel", - "RoFormerForMaskedLM", - "RoFormerForMultipleChoice", - "RoFormerPreTrainedModel", - "RoFormerForQuestionAnswering", - "RoFormerForSequenceClassification", - "RoFormerForTokenClassification", - ] - - -if TYPE_CHECKING: - from .configuration_roformer import RoFormerConfig - from .tokenization_roformer import RoFormerTokenizer - - if is_torch_available(): - from .modeling_roformer import ( - RoFormerModel, - RoFormerForMaskedLM, - RoFormerForMultipleChoice, - RoFormerPreTrainedModel, - RoFormerForQuestionAnswering, - RoFormerForSequenceClassification, - RoFormerForTokenClassification, - ) - -else: - import sys - - sys.modules[__name__] = _LazyModule( - __name__, globals()["__file__"], _import_structure) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py deleted file mode 100644 index a28cd607a096844438f6a3ba6b007d94d67d1bc8..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- - -import csv -from pathlib import Path - - -def main(args): - """ - `uid syn ref text` - """ - in_root = Path(args.generation_root).resolve() - ext = args.audio_format - with open(args.audio_manifest) as f, open(args.output_path, "w") as f_out: - reader = csv.DictReader( - f, delimiter="\t", quotechar=None, doublequote=False, - lineterminator="\n", quoting=csv.QUOTE_NONE - ) - header = ["id", "syn", "ref", "text", "speaker"] - f_out.write("\t".join(header) + "\n") - for row in reader: - dir_name = f"{ext}_{args.sample_rate}hz_{args.vocoder}" - id_ = row["id"] - syn = (in_root / dir_name / f"{id_}.{ext}").as_posix() - ref = row["audio"] - if args.use_resynthesized_target: - ref = (in_root / f"{dir_name}_tgt" / f"{id_}.{ext}").as_posix() - sample = [id_, syn, ref, row["tgt_text"], row["speaker"]] - f_out.write("\t".join(sample) + "\n") - print(f"wrote evaluation file to {args.output_path}") - - -if __name__ == "__main__": - import argparse - parser = argparse.ArgumentParser() - parser.add_argument( - "--generation-root", help="output directory for generate_waveform.py" - ) - parser.add_argument( - "--audio-manifest", - help="used to determine the original utterance ID and text" - ) - parser.add_argument( - "--output-path", help="path to output evaluation spec file" - ) - parser.add_argument( - "--use-resynthesized-target", action="store_true", - help="use resynthesized reference instead of the original audio" - ) - parser.add_argument("--vocoder", type=str, default="griffin_lim") - parser.add_argument("--sample-rate", type=int, default=22_050) - parser.add_argument("--audio-format", type=str, default="wav") - args = parser.parse_args() - - main(args) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation_moe/translation_moe_src/mean_pool_gating_network.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation_moe/translation_moe_src/mean_pool_gating_network.py deleted file mode 100644 index efc7ae40bf8fed6c2384cbc6f94477c4caa4c10c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation_moe/translation_moe_src/mean_pool_gating_network.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn.functional as F - - -class MeanPoolGatingNetwork(torch.nn.Module): - """A simple mean-pooling gating network for selecting experts. - - This module applies mean pooling over an encoder's output and returns - reponsibilities for each expert. The encoder format is expected to match - :class:`fairseq.models.transformer.TransformerEncoder`. 
- """ - - def __init__(self, embed_dim, num_experts, dropout=None): - super().__init__() - self.embed_dim = embed_dim - self.num_experts = num_experts - - self.fc1 = torch.nn.Linear(embed_dim, embed_dim) - self.dropout = torch.nn.Dropout(dropout) if dropout is not None else None - self.fc2 = torch.nn.Linear(embed_dim, num_experts) - - def forward(self, encoder_out): - if not ( - "encoder_out" in encoder_out - and "encoder_padding_mask" in encoder_out - and encoder_out["encoder_out"][0].size(2) == self.embed_dim - ): - raise ValueError("Unexpected format for encoder_out") - - # mean pooling over time - encoder_padding_mask = encoder_out["encoder_padding_mask"][0] # B x T - encoder_out = encoder_out["encoder_out"][0].transpose(0, 1) # B x T x C - if encoder_padding_mask is not None: - encoder_out = encoder_out.clone() # required because of transpose above - encoder_out[encoder_padding_mask] = 0 - ntokens = torch.sum(~encoder_padding_mask, dim=1, keepdim=True) - x = torch.sum(encoder_out, dim=1) / ntokens.type_as(encoder_out) - else: - x = torch.mean(encoder_out, dim=1) - - x = torch.tanh(self.fc1(x)) - if self.dropout is not None: - x = self.dropout(x) - x = self.fc2(x) - return F.log_softmax(x, dim=-1, dtype=torch.float32).type_as(x) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/feature_transforms/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/feature_transforms/__init__.py deleted file mode 100644 index 359fa069716cba0dd615ce0959368b20828c31f7..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/feature_transforms/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -import importlib -import os -from abc import ABC, abstractmethod -from typing import Dict, Optional - - -class AudioFeatureTransform(ABC): - @classmethod - @abstractmethod - def from_config_dict(cls, config: Optional[Dict] = None): - pass - - -AUDIO_FEATURE_TRANSFORM_REGISTRY = {} -AUDIO_FEATURE_TRANSFORM_CLASS_NAMES = set() - - -def register_audio_feature_transform(name): - def register_audio_feature_transform_cls(cls): - if name in AUDIO_FEATURE_TRANSFORM_REGISTRY: - raise ValueError(f"Cannot register duplicate transform ({name})") - if not issubclass(cls, AudioFeatureTransform): - raise ValueError( - f"Transform ({name}: {cls.__name__}) must extend " - "AudioFeatureTransform" - ) - if cls.__name__ in AUDIO_FEATURE_TRANSFORM_CLASS_NAMES: - raise ValueError( - f"Cannot register audio feature transform with duplicate " - f"class name ({cls.__name__})" - ) - AUDIO_FEATURE_TRANSFORM_REGISTRY[name] = cls - AUDIO_FEATURE_TRANSFORM_CLASS_NAMES.add(cls.__name__) - return cls - - return register_audio_feature_transform_cls - - -def get_audio_feature_transform(name): - return AUDIO_FEATURE_TRANSFORM_REGISTRY[name] - - -transforms_dir = os.path.dirname(__file__) -for file in os.listdir(transforms_dir): - path = os.path.join(transforms_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - name = file[: file.find(".py")] if file.endswith(".py") else file - importlib.import_module("fairseq.data.audio.feature_transforms." 
+ name) - - -class CompositeAudioFeatureTransform(AudioFeatureTransform): - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - _transforms = _config.get("transforms") - if _transforms is None: - return None - transforms = [ - get_audio_feature_transform(_t).from_config_dict(_config.get(_t)) - for _t in _transforms - ] - return CompositeAudioFeatureTransform(transforms) - - def __init__(self, transforms): - self.transforms = [t for t in transforms if t is not None] - - def __call__(self, x): - for t in self.transforms: - x = t(x) - return x - - def __repr__(self): - format_string = ( - [self.__class__.__name__ + "("] - + [f" {t.__repr__()}" for t in self.transforms] - + [")"] - ) - return "\n".join(format_string) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/text_to_speech/hifigan.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/text_to_speech/hifigan.py deleted file mode 100644 index edc7db6015ebea18f40c8949ae78c0b5b61e1297..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/text_to_speech/hifigan.py +++ /dev/null @@ -1,173 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import Conv1d, ConvTranspose1d -from torch.nn.utils import weight_norm, remove_weight_norm - -LRELU_SLOPE = 0.1 - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return (kernel_size * dilation - dilation) // 2 - - -class ResBlock(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for layer in self.convs1: - remove_weight_norm(layer) - for layer in self.convs2: - remove_weight_norm(layer) - - -class Generator(torch.nn.Module): - def __init__(self, cfg): - super(Generator, self).__init__() - self.num_kernels = len(cfg["resblock_kernel_sizes"]) - self.num_upsamples = len(cfg["upsample_rates"]) - self.conv_pre = weight_norm( - Conv1d(80, cfg["upsample_initial_channel"], 7, 1, padding=3) - ) - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate( - zip(cfg["upsample_rates"], 
cfg["upsample_kernel_sizes"]) - ): - self.ups.append( - weight_norm( - ConvTranspose1d( - cfg["upsample_initial_channel"] // (2 ** i), - cfg["upsample_initial_channel"] // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = cfg["upsample_initial_channel"] // (2 ** (i + 1)) - for k, d in zip( - cfg["resblock_kernel_sizes"], cfg["resblock_dilation_sizes"] - ): - self.resblocks.append(ResBlock(ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print("Removing weight norm...") - for layer in self.ups: - remove_weight_norm(layer) - for layer in self.resblocks: - layer.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) diff --git a/spaces/Hdiopalma/anime-face-detector/app.py b/spaces/Hdiopalma/anime-face-detector/app.py deleted file mode 100644 index c2363f4a2bf201fceacc85f22002a57777c51bc9..0000000000000000000000000000000000000000 --- a/spaces/Hdiopalma/anime-face-detector/app.py +++ /dev/null @@ -1,27 +0,0 @@ -import gradio as gr -import torch -from PIL import Image - -# Images -torch.hub.download_url_to_file('https://cdn.statically.io/img/fancyodds.com/wp-content/uploads/2021/12/anime-aesthetic-5-1.jpg', 'anime1.jpg') -torch.hub.download_url_to_file('https://alchetron.com/cdn/pandora-hearts-02235ecf-9315-461e-978a-a819922522d-resize-750.png', 'anime2.png') - -model = torch.hub.load('yolov5', 'custom', path ='best.pt', force_reload=True,source='local') - -def yolo(image, size = 640): - g = (size/max(image.size)) - image = image.resize((int(x * g) for x in image.size), Image.ANTIALIAS) - result = model(image) - result.render() - return Image.fromarray(result.imgs[0]) - - -inputs = gr.inputs.Image(type = 'pil', label = "Original Image") -outputs = gr.outputs.Image(type = 'pil', label = "Output Image") - -judul = "Deteksi wajah anime menggunakan YoloV5" -deskripsi = "
Names and number of group members:
  1. M. Azis pangestu, 1051914
  2. Aisyah Sekar Ayu Dzikron, 1908667
  3. Hernando Dio Palma, 3332190049.
  4. Ari Fitria, G74180014
  5. Wildan Rizky Pamungkas, 1910501094
  6. Andika Candra 191251010
  7. Rahmatul Fajri 180120201035
" -artikel = "

Group V, Khasanah Ilmi class

" - -examples = [['anime1.jpg'], ['anime2.png']] -gr.Interface(yolo, inputs, outputs, title = judul, description = deskripsi, article = artikel, examples= examples, theme="huggingface").launch(cache_examples=True, enable_queue=True) \ No newline at end of file diff --git a/spaces/HighCWu/GFPGAN-1.3/gfpgan/utils.py b/spaces/HighCWu/GFPGAN-1.3/gfpgan/utils.py deleted file mode 100644 index f3e163e9e21a2e56d7dce404cfd2b21bcc61402f..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/GFPGAN-1.3/gfpgan/utils.py +++ /dev/null @@ -1,130 +0,0 @@ -import cv2 -import os -import torch -from basicsr.utils import img2tensor, tensor2img -from basicsr.utils.download_util import load_file_from_url -from facexlib.utils.face_restoration_helper import FaceRestoreHelper -from torchvision.transforms.functional import normalize - -from gfpgan.archs.gfpganv1_arch import GFPGANv1 -from gfpgan.archs.gfpganv1_clean_arch import GFPGANv1Clean - -ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - - -class GFPGANer(): - """Helper for restoration with GFPGAN. - - It will detect and crop faces, and then resize the faces to 512x512. - GFPGAN is used to restored the resized faces. - The background is upsampled with the bg_upsampler. - Finally, the faces will be pasted back to the upsample background image. - - Args: - model_path (str): The path to the GFPGAN model. It can be urls (will first download it automatically). - upscale (float): The upscale of the final output. Default: 2. - arch (str): The GFPGAN architecture. Option: clean | original. Default: clean. - channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2. - bg_upsampler (nn.Module): The upsampler for the background. Default: None. - """ - - def __init__(self, model_path, upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=None): - self.upscale = upscale - self.bg_upsampler = bg_upsampler - - # initialize model - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - # initialize the GFP-GAN - if arch == 'clean': - self.gfpgan = GFPGANv1Clean( - out_size=512, - num_style_feat=512, - channel_multiplier=channel_multiplier, - decoder_load_path=None, - fix_decoder=False, - num_mlp=8, - input_is_latent=True, - different_w=True, - narrow=1, - sft_half=True) - else: - self.gfpgan = GFPGANv1( - out_size=512, - num_style_feat=512, - channel_multiplier=channel_multiplier, - decoder_load_path=None, - fix_decoder=True, - num_mlp=8, - input_is_latent=True, - different_w=True, - narrow=1, - sft_half=True) - # initialize face helper - self.face_helper = FaceRestoreHelper( - upscale, - face_size=512, - crop_ratio=(1, 1), - det_model='retinaface_resnet50', - save_ext='png', - device=self.device) - - if model_path.startswith('https://'): - model_path = load_file_from_url( - url=model_path, model_dir=os.path.join(ROOT_DIR, 'gfpgan/weights'), progress=True, file_name=None) - loadnet = torch.load(model_path) - if 'params_ema' in loadnet: - keyname = 'params_ema' - else: - keyname = 'params' - self.gfpgan.load_state_dict(loadnet[keyname], strict=True) - self.gfpgan.eval() - self.gfpgan = self.gfpgan.to(self.device) - - @torch.no_grad() - def enhance(self, img, has_aligned=False, only_center_face=False, paste_back=True): - self.face_helper.clean_all() - - if has_aligned: # the inputs are already aligned - img = cv2.resize(img, (512, 512)) - self.face_helper.cropped_faces = [img] - else: - self.face_helper.read_image(img) - # get face landmarks for each face - 
self.face_helper.get_face_landmarks_5(only_center_face=only_center_face, eye_dist_threshold=5) - # eye_dist_threshold=5: skip faces whose eye distance is smaller than 5 pixels - # TODO: even with eye_dist_threshold, it will still introduce wrong detections and restorations. - # align and warp each face - self.face_helper.align_warp_face() - - # face restoration - for cropped_face in self.face_helper.cropped_faces: - # prepare data - cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True) - normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True) - cropped_face_t = cropped_face_t.unsqueeze(0).to(self.device) - - try: - output = self.gfpgan(cropped_face_t, return_rgb=False)[0] - # convert to image - restored_face = tensor2img(output.squeeze(0), rgb2bgr=True, min_max=(-1, 1)) - except RuntimeError as error: - print(f'\tFailed inference for GFPGAN: {error}.') - restored_face = cropped_face - - restored_face = restored_face.astype('uint8') - self.face_helper.add_restored_face(restored_face) - - if not has_aligned and paste_back: - # upsample the background - if self.bg_upsampler is not None: - # Now only support RealESRGAN for upsampling background - bg_img = self.bg_upsampler.enhance(img, outscale=self.upscale)[0] - else: - bg_img = None - - self.face_helper.get_inverse_affine(None) - # paste each restored face to the input image - restored_img = self.face_helper.paste_faces_to_input_image(upsample_img=bg_img) - return self.face_helper.cropped_faces, self.face_helper.restored_faces, restored_img - else: - return self.face_helper.cropped_faces, self.face_helper.restored_faces, None diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.020a69e0.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.020a69e0.js deleted file mode 100644 index d468a1e765ab914e17fa7503c15b831f0ae34e43..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.020a69e0.js +++ /dev/null @@ -1,2 +0,0 @@ -import{ar as r}from"./index.396f4a72.js";const e=["static","dynamic"],o=t=>({type:"string",description:"text string",example_data:t.value||"hello world"});export{r as Component,o as document,e as modes}; -//# sourceMappingURL=index.020a69e0.js.map diff --git a/spaces/HoangHa/llama2-code/style.css b/spaces/HoangHa/llama2-code/style.css deleted file mode 100644 index 32fbb47a2934fed87bcc1faad2dbc4dd2d17c65f..0000000000000000000000000000000000000000 --- a/spaces/HoangHa/llama2-code/style.css +++ /dev/null @@ -1,16 +0,0 @@ -h1 { - text-align: center; -} - -#duplicate-button { - margin: auto; - color: white; - background: #1565c0; - border-radius: 100vh; -} - -#component-0 { - max-width: 900px; - margin: auto; - padding-top: 1.5rem; -} \ No newline at end of file diff --git a/spaces/Hunzla/whisperaudio/app.py b/spaces/Hunzla/whisperaudio/app.py deleted file mode 100644 index a438afe7111e402e9c2b57aeafd23dd3a08a3351..0000000000000000000000000000000000000000 --- a/spaces/Hunzla/whisperaudio/app.py +++ /dev/null @@ -1,67 +0,0 @@ -from transformers import pipeline -asr_pipe = pipeline("automatic-speech-recognition", model="Abdullah17/whisper-small-urdu") -from difflib import SequenceMatcher -import json - - - - -with open("tasks.json", "r",encoding="utf-8") as json_file: - urdu_data = json.load(json_file) -# List of commands -# commands = [ -# "نمائندے ایجنٹ نمائندہ", -# " سم ایکٹیویٹ ", -# " سم 
بلاک بند ", -# "موبائل پیکیجز انٹرنیٹ پیکیج", -# " چالان جمع چلان", -# " گانا " -# ] -# replies = [ -# 1,2, -# ] -# Function to find the most similar command -def find_most_similar_command(statement, command_list): - best_match = None - highest_similarity = 0 - reply=404 - # Using globals() to create a global variable - for index,file_list in command_list.items(): - for command in file_list: - similarity = SequenceMatcher(None, statement, command).ratio() - print(index,"similarity",similarity) - if similarity > highest_similarity: - highest_similarity = similarity - best_match = command - reply=index - return best_match,reply - -def transcribe_the_command(audio,menu_id,abc): - import soundfile as sf - sample_rate, audio_data = audio - file_name = "recorded_audio.wav" - sf.write(file_name, audio_data, sample_rate) - # Convert stereo to mono by averaging the two channels - print(menu_id) - - transcript = asr_pipe(file_name)["text"] - commands=urdu_data[menu_id] - print(commands) - most_similar_command,reply = find_most_similar_command(transcript, commands) - print(f"Given Statement: {transcript}") - print(f"Most Similar Command: {most_similar_command}\n") - print(reply) - return reply -# get_text_from_voice("urdu.wav") -import gradio as gr - - -iface = gr.Interface( - fn=transcribe_the_command, - inputs=[gr.inputs.Audio(label="Recorded Audio",source="microphone"),gr.inputs.Textbox(label="menu_id"),gr.inputs.Textbox(label="dfs")], - outputs="text", - title="Whisper Small Urdu Command", - description="Realtime demo for Urdu speech recognition using a fine-tuned Whisper small model and outputting the estimated command on the basis of speech transcript.", -) - -iface.launch() \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/data/random_input_dataset.py b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/data/random_input_dataset.py deleted file mode 100644 index 886505616cc7f7a515ecebf34fae5c2bc541de03..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/data/random_input_dataset.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import random -from typing import List - -from fairseq.data import BaseWrapperDataset, data_utils - - -class RandomInputDataset(BaseWrapperDataset): - def __init__( - self, - dataset, - random_input_dataset, - input_key_path: List[str], - add_to_input, - pad_idx, - ): - super().__init__(dataset) - self.random_input_dataset = random_input_dataset - if isinstance(input_key_path, str): - input_key_path = [input_key_path] - assert len(input_key_path) > 0 - self.input_key_path = input_key_path - self.add_to_input = add_to_input - self.pad_idx = pad_idx - - def get_target(self, item): - target_loc = item - for p in self.input_key_path[:-1]: - target_loc = target_loc[p] - return self.input_key_path[-1], target_loc - - def get_target_value(self, item): - k, target_loc = self.get_target(item) - return target_loc[k] - - def __getitem__(self, index): - item = self.dataset[index] - k, target_loc = self.get_target(item) - target_loc[k] = random.choice(self.random_input_dataset) - return item - - def collater(self, samples): - collated = self.dataset.collater(samples) - if len(collated) == 0: - return collated - indices = set(collated["id"].tolist()) - - random_inputs = data_utils.collate_tokens( - [self.get_target_value(s) for s in samples if s["id"] in indices], - pad_idx=self.pad_idx, - left_pad=False, - ) - k, target_loc = self.get_target( - collated if not self.add_to_input else collated["net_input"] - ) - target_loc[k] = random_inputs - - return collated diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/util/inference.py b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/util/inference.py deleted file mode 100644 index 8168b96ca51e6e494c7c675c2f4a610e21b095d6..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/util/inference.py +++ /dev/null @@ -1,98 +0,0 @@ -from typing import Tuple, List - -import cv2 -import numpy as np -import supervision as sv -import torch -from PIL import Image -from torchvision.ops import box_convert - -import groundingdino.datasets.transforms as T -from groundingdino.models import build_model -from groundingdino.util.misc import clean_state_dict -from groundingdino.util.slconfig import SLConfig -from groundingdino.util.utils import get_phrases_from_posmap - - -def preprocess_caption(caption: str) -> str: - result = caption.lower().strip() - if result.endswith("."): - return result - return result + "." 
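# The helper above normalizes prompts the way the rest of this module expects:
# lowercase, stripped, and period-terminated. A minimal doctest-style sketch of
# that behaviour (the sample captions are made-up values, not from this repo):
#
# >>> preprocess_caption("Two Dogs ")
# 'two dogs.'
# >>> preprocess_caption("a cat.")
# 'a cat.'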
- - -def load_model(model_config_path: str, model_checkpoint_path: str, device: str = "cuda"): - args = SLConfig.fromfile(model_config_path) - args.device = device - model = build_model(args) - checkpoint = torch.load(model_checkpoint_path, map_location="cpu") - model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False) - model.eval() - return model - - -def load_image(image_path: str) -> Tuple[np.array, torch.Tensor]: - transform = T.Compose( - [ - T.RandomResize([800], max_size=1333), - T.ToTensor(), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), - ] - ) - image_source = Image.open(image_path).convert("RGB") - image = np.asarray(image_source) - image_transformed, _ = transform(image_source, None) - return image, image_transformed - - -def predict( - model, - image: torch.Tensor, - caption: str, - box_threshold: float, - text_threshold: float, - device: str = "cuda" -) -> Tuple[torch.Tensor, torch.Tensor, List[str]]: - caption = preprocess_caption(caption=caption) - - model = model.to(device) - image = image.to(device) - - with torch.no_grad(): - outputs = model(image[None], captions=[caption]) - - prediction_logits = outputs["pred_logits"].cpu().sigmoid()[0] # prediction_logits.shape = (nq, 256) - prediction_boxes = outputs["pred_boxes"].cpu()[0] # prediction_boxes.shape = (nq, 4) - - mask = prediction_logits.max(dim=1)[0] > box_threshold - logits = prediction_logits[mask] # logits.shape = (n, 256) - boxes = prediction_boxes[mask] # boxes.shape = (n, 4) - - tokenizer = model.tokenizer - tokenized = tokenizer(caption) - - phrases = [ - get_phrases_from_posmap(logit > text_threshold, tokenized, tokenizer).replace('.', '') - for logit - in logits - ] - - return boxes, logits.max(dim=1)[0], phrases - - -def annotate(image_source: np.ndarray, boxes: torch.Tensor, logits: torch.Tensor, phrases: List[str]) -> np.ndarray: - h, w, _ = image_source.shape - boxes = boxes * torch.Tensor([w, h, w, h]) - xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy() - detections = sv.Detections(xyxy=xyxy) - - labels = [ - f"{phrase} {logit:.2f}" - for phrase, logit - in zip(phrases, logits) - ] - - box_annotator = sv.BoxAnnotator() - annotated_frame = cv2.cvtColor(image_source, cv2.COLOR_RGB2BGR) - annotated_frame = box_annotator.annotate(scene=annotated_frame, detections=detections, labels=labels) - return annotated_frame diff --git a/spaces/Jacopo/ToonClip/README.md b/spaces/Jacopo/ToonClip/README.md deleted file mode 100644 index 81d05dfd04283f987c133b8afca0749aeac55413..0000000000000000000000000000000000000000 --- a/spaces/Jacopo/ToonClip/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: ToonClip -emoji: 💻 -colorFrom: red -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. 
- -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Jeff2323/ai-comic-factory/src/components/icons/hugging-clap.tsx b/spaces/Jeff2323/ai-comic-factory/src/components/icons/hugging-clap.tsx deleted file mode 100644 index ffb37ae6183cd8ce7fe7c212e383a6510eba2485..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/components/icons/hugging-clap.tsx +++ /dev/null @@ -1,8 +0,0 @@ -export function HuggingClap() { - return ( - - ) -} \ No newline at end of file diff --git a/spaces/Jikiwi/sovits-models/modules/__init__.py b/spaces/Jikiwi/sovits-models/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Josh98/nl2bash_m/app.py b/spaces/Josh98/nl2bash_m/app.py deleted file mode 100644 index 84297a49eb29749682d121b963f49492fd56d216..0000000000000000000000000000000000000000 --- a/spaces/Josh98/nl2bash_m/app.py +++ /dev/null @@ -1,6 +0,0 @@ -import evaluate -from evaluate.utils import launch_gradio_widget - - -module = evaluate.load("Josh98/nl2bash_m") -launch_gradio_widget(module) \ No newline at end of file diff --git a/spaces/KPCGD/bingo/src/components/ui/select.tsx b/spaces/KPCGD/bingo/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/layers.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/layers.py deleted file mode 100644 index 4fc1b5cb85a3327f60cbb9f5deffbeeaaac516ad..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/layers.py +++ /dev/null @@ -1,118 +0,0 @@ -import torch -import 
torch.nn.functional as F -from torch import nn - -from . import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/Kaustubh-kapare94/ALPD/app.py b/spaces/Kaustubh-kapare94/ALPD/app.py deleted file mode 100644 index e07f491102f685ca9b2ec3288b9601c727d64610..0000000000000000000000000000000000000000 --- a/spaces/Kaustubh-kapare94/ALPD/app.py +++ /dev/null @@ -1,105 +0,0 @@ -import gradio as gr -import cv2 -import requests -import os - -from ultralytics import YOLO - -file_urls = [ - 'https://drive.google.com/file/d/1EZRZOvno2LeHVgBzDd1yGSKZcHpjvazT/view?usp=drive_link', - 
'https://drive.google.com/file/d/1WHLbEZPCzYHTfvFeH2r58Au5fqz_n5rj/view?usp=drive_link', - 'https://drive.google.com/file/d/1RUpwU3Qe4Q2DSfu6bE9OEZiDfI--a_8y/view?usp=drive_link' - -] - -def download_file(url, save_name): - url = url - if not os.path.exists(save_name): - file = requests.get(url) - open(save_name, 'wb').write(file.content) - -for i, url in enumerate(file_urls): - if 'mp4' in file_urls[i]: - download_file( - file_urls[i], - f"video.mp4" - ) - else: - download_file( - file_urls[i], - f"image_{i}.jpg" - ) - -model = YOLO('best.pt') -path = [['image_0.jpg'], ['image_1.jpg']] -video_path = [['video.mp4']] - -def show_preds_image(image_path): - image = cv2.imread(image_path) - outputs = model.predict(source=image_path) - results = outputs[0].cpu().numpy() - for i, det in enumerate(results.boxes.xyxy): - cv2.rectangle( - image, - (int(det[0]), int(det[1])), - (int(det[2]), int(det[3])), - color=(0, 0, 255), - thickness=2, - lineType=cv2.LINE_AA - ) - return cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - -inputs_image = [ - gr.components.Image(type="filepath", label="Input Image"), -] -outputs_image = [ - gr.components.Image(type="numpy", label="Output Image"), -] -interface_image = gr.Interface( - fn=show_preds_image, - inputs=inputs_image, - outputs=outputs_image, - title="Numberplate detector", - examples=path, - cache_examples=False, -) - -def show_preds_video(video_path): - cap = cv2.VideoCapture(video_path) - while(cap.isOpened()): - ret, frame = cap.read() - if ret: - frame_copy = frame.copy() - outputs = model.predict(source=frame) - results = outputs[0].cpu().numpy() - for i, det in enumerate(results.boxes.xyxy): - cv2.rectangle( - frame_copy, - (int(det[0]), int(det[1])), - (int(det[2]), int(det[3])), - color=(0, 0, 255), - thickness=2, - lineType=cv2.LINE_AA - ) - yield cv2.cvtColor(frame_copy, cv2.COLOR_BGR2RGB) - -inputs_video = [ - gr.components.Video(type="filepath", label="Input Video"), - -] -outputs_video = [ - gr.components.Image(type="numpy", label="Output Image"), -] -interface_video = gr.Interface( - fn=show_preds_video, - inputs=inputs_video, - outputs=outputs_video, - title="Numberplate detector", - examples=video_path, - cache_examples=False, -) - -gr.TabbedInterface( - [interface_image, interface_video], - tab_names=['Image inference', 'Video inference'] -).queue().launch() \ No newline at end of file diff --git a/spaces/KeeganFdes/stack_onnx/app.py b/spaces/KeeganFdes/stack_onnx/app.py deleted file mode 100644 index 703c7f3325490a5c08efaadf26732b08be7ab725..0000000000000000000000000000000000000000 --- a/spaces/KeeganFdes/stack_onnx/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import gradio as gr -import numpy as np -import pickle - - - -import nltk -from nltk.corpus import stopwords -from nltk.tokenize import word_tokenize -from nltk.stem import WordNetLemmatizer -from sklearn.feature_extraction.text import CountVectorizer - -# Initialize NLTK resources (download if needed) -nltk.download('punkt') -nltk.download('wordnet') -nltk.download('stopwords') - -# Text preprocessing functions - -def preprocess_text(text): - # Tokenization - words = word_tokenize(text.lower()) # Convert to lowercase and tokenize - - # Remove stopwords - stop_words = set(stopwords.words('english')) - words = [word for word in words if word not in stop_words] - - # Lemmatization - lemmatizer = WordNetLemmatizer() - words = [lemmatizer.lemmatize(word) for word in words] - - return ' '.join(words) - - - - - -def predict_tags(text): - return 
mlb.classes_[np.where(model.predict(vectorizer.transform([preprocess_text(text)])).flatten() == 1)] - - - - -# Load the instance back -with open('classes.pkl', 'rb') as file: - mlb = pickle.load(file) - -with open('vectorizer.pkl', 'rb') as file: - vectorizer = pickle.load(file) - -with open('model.pkl', 'rb') as file: - model = pickle.load(file) - - -# Create a function to predict tags using the ONNX model - -iface = gr.Interface(fn=predict_tags, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/data_objects/random_cycler.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/data_objects/random_cycler.py deleted file mode 100644 index c405db6b27f46d874d8feb37e3f9c1e12c251109..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/data_objects/random_cycler.py +++ /dev/null @@ -1,37 +0,0 @@ -import random - -class RandomCycler: - """ - Creates an internal copy of a sequence and allows access to its items in a constrained random - order. For a source sequence of n items and one or several consecutive queries of a total - of m items, the following guarantees hold (one implies the other): - - Each item will be returned between m // n and ((m - 1) // n) + 1 times. - - Between two appearances of the same item, there may be at most 2 * (n - 1) other items. - """ - - def __init__(self, source): - if len(source) == 0: - raise Exception("Can't create RandomCycler from an empty collection") - self.all_items = list(source) - self.next_items = [] - - def sample(self, count: int): - shuffle = lambda l: random.sample(l, len(l)) - - out = [] - while count > 0: - if count >= len(self.all_items): - out.extend(shuffle(list(self.all_items))) - count -= len(self.all_items) - continue - n = min(count, len(self.next_items)) - out.extend(self.next_items[:n]) - count -= n - self.next_items = self.next_items[n:] - if len(self.next_items) == 0: - self.next_items = shuffle(list(self.all_items)) - return out - - def __next__(self): - return self.sample(1)[0] - diff --git a/spaces/KoalaAI/Text-Moderation-Demo/README.md b/spaces/KoalaAI/Text-Moderation-Demo/README.md deleted file mode 100644 index f1db466ec114e64c966d4814d73eb3a6eaeb970c..0000000000000000000000000000000000000000 --- a/spaces/KoalaAI/Text-Moderation-Demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: KoalaAI Text Moderation -emoji: 🏢 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.46.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Kororinpa/Amadeus_Project/transforms.py b/spaces/Kororinpa/Amadeus_Project/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/Kororinpa/Amadeus_Project/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = 
rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = 
F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Kurkur99/Sentiment_analysis/README.md b/spaces/Kurkur99/Sentiment_analysis/README.md deleted file mode 100644 index b9c644eb167288a6328d6b409a6e244874cf38f1..0000000000000000000000000000000000000000 --- a/spaces/Kurkur99/Sentiment_analysis/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sentiment Analysis -emoji: 🌖 -colorFrom: blue -colorTo: red -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/LLaMaWhisperer/LegalLLaMa/legal_llama/summarizer.py b/spaces/LLaMaWhisperer/LegalLLaMa/legal_llama/summarizer.py deleted file mode 100644 index 398ba806b95d2a7e44217c8d9d1365400269c467..0000000000000000000000000000000000000000 --- a/spaces/LLaMaWhisperer/LegalLLaMa/legal_llama/summarizer.py +++ /dev/null @@ -1,54 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -import streamlit as st - - -@st.cache_resource -def load_model(): - tokenizers = 
AutoTokenizer.from_pretrained("nsi319/legal-led-base-16384") - model = AutoModelForSeq2SeqLM.from_pretrained("nsi319/legal-led-base-16384") - return tokenizers, model - - -class BillSummarizer: - def __init__(self): - """ - Initialize a BillSummarizer, which uses the Hugging Face transformers library to summarize bills. - """ - try: - self.tokenizer, self.model = load_model() - except Exception as e: - print(f"Error initializing summarizer pipeline: {e}") - - def summarize(self, bill_text): - """ - Summarize a bill's text using the summarization pipeline. - - Parameters: - bill_text (str): The text of the bill to be summarized. - - Returns: - str: The summarized text. - """ - try: - input_tokenized = self.tokenizer.encode(bill_text, return_tensors='pt', - padding="max_length", - pad_to_max_length=True, - max_length=6144, - truncation=True) - - summary_ids = self.model.generate(input_tokenized, - num_beams=4, - no_repeat_ngram_size=3, - length_penalty=2, - min_length=350, - max_length=500) - - summary = [self.tokenizer.decode(g, - skip_special_tokens=True, - clean_up_tokenization_spaces=False) - for g in summary_ids][0] - - return summary - except Exception as e: - print(f"Error summarizing text: {e}") - return "Sorry, I couldn't summarize this bill. Please try again." diff --git a/spaces/LaynzKunz/Advanced-RVC-Inference/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/LaynzKunz/Advanced-RVC-Inference/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/LuxOAI/ChatGpt-Web/app/polyfill.ts b/spaces/LuxOAI/ChatGpt-Web/app/polyfill.ts deleted file mode 100644 index 517f06e7c9338f6589714fe478824e7ae7ea8b44..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/app/polyfill.ts +++ /dev/null @@ -1,27 +0,0 @@ -declare global { - interface Array { - at(index: number): T | undefined; - } -} - -if (!Array.prototype.at) { - Array.prototype.at = function (index: number) { - // Get the length of the array - const length = this.length; - - // Convert negative index to a positive index - if (index < 0) { - index = length + index; - } - - // Return undefined if the index is out of range - if (index < 0 || index >= length) { - return undefined; - } - - // Use Array.prototype.slice method to get value at the specified index - return Array.prototype.slice.call(this, index, index + 1)[0]; - }; -} - -export {}; diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/sync_batchnorm/batchnorm.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/sync_batchnorm/batchnorm.py deleted file mode 100644 index bf8d7a7325b474771a11a137053971fd40426079..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/sync_batchnorm/batchnorm.py +++ /dev/null @@ -1,412 +0,0 @@ -# -*- coding: utf-8 -*- -# File : batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. 
- -import collections -import contextlib - -import torch -import torch.nn.functional as F - -from torch.nn.modules.batchnorm import _BatchNorm - -try: - from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast -except ImportError: - ReduceAddCoalesced = Broadcast = None - -try: - from jactorch.parallel.comm import SyncMaster - from jactorch.parallel.data_parallel import JacDataParallel as DataParallelWithCallback -except ImportError: - from .comm import SyncMaster - from .replicate import DataParallelWithCallback - -__all__ = [ - 'set_sbn_eps_mode', - 'SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d', - 'patch_sync_batchnorm', 'convert_model' -] - - -SBN_EPS_MODE = 'clamp' - - -def set_sbn_eps_mode(mode): - global SBN_EPS_MODE - assert mode in ('clamp', 'plus') - SBN_EPS_MODE = mode - - -def _sum_ft(tensor): - """sum over the first and last dimention""" - return tensor.sum(dim=0).sum(dim=-1) - - -def _unsqueeze_ft(tensor): - """add new dimensions at the front and the tail""" - return tensor.unsqueeze(0).unsqueeze(-1) - - -_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size']) -_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std']) - - -class _SynchronizedBatchNorm(_BatchNorm): - def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True): - assert ReduceAddCoalesced is not None, 'Can not use Synchronized Batch Normalization without CUDA support.' - - super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine, - track_running_stats=track_running_stats) - - if not self.track_running_stats: - import warnings - warnings.warn('track_running_stats=False is not supported by the SynchronizedBatchNorm.') - - self._sync_master = SyncMaster(self._data_parallel_master) - - self._is_parallel = False - self._parallel_id = None - self._slave_pipe = None - - def forward(self, input): - # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation. - if not (self._is_parallel and self.training): - return F.batch_norm( - input, self.running_mean, self.running_var, self.weight, self.bias, - self.training, self.momentum, self.eps) - - # Resize the input to (B, C, -1). - input_shape = input.size() - assert input.size(1) == self.num_features, 'Channel size mismatch: got {}, expect {}.'.format(input.size(1), self.num_features) - input = input.view(input.size(0), self.num_features, -1) - - # Compute the sum and square-sum. - sum_size = input.size(0) * input.size(2) - input_sum = _sum_ft(input) - input_ssum = _sum_ft(input ** 2) - - # Reduce-and-broadcast the statistics. - if self._parallel_id == 0: - mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size)) - else: - mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size)) - - # Compute the output. - if self.affine: - # MJY:: Fuse the multiplication for speed. - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias) - else: - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std) - - # Reshape it. - return output.view(input_shape) - - def __data_parallel_replicate__(self, ctx, copy_id): - self._is_parallel = True - self._parallel_id = copy_id - - # parallel_id == 0 means master device. 
- if self._parallel_id == 0: - ctx.sync_master = self._sync_master - else: - self._slave_pipe = ctx.sync_master.register_slave(copy_id) - - def _data_parallel_master(self, intermediates): - """Reduce the sum and square-sum, compute the statistics, and broadcast it.""" - - # Always using same "device order" makes the ReduceAdd operation faster. - # Thanks to:: Tete Xiao (http://tetexiao.com/) - intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device()) - - to_reduce = [i[1][:2] for i in intermediates] - to_reduce = [j for i in to_reduce for j in i] # flatten - target_gpus = [i[1].sum.get_device() for i in intermediates] - - sum_size = sum([i[1].sum_size for i in intermediates]) - sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce) - mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size) - - broadcasted = Broadcast.apply(target_gpus, mean, inv_std) - - outputs = [] - for i, rec in enumerate(intermediates): - outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2]))) - - return outputs - - def _compute_mean_std(self, sum_, ssum, size): - """Compute the mean and standard-deviation with sum and square-sum. This method - also maintains the moving average on the master device.""" - assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.' - mean = sum_ / size - sumvar = ssum - sum_ * mean - unbias_var = sumvar / (size - 1) - bias_var = sumvar / size - - if hasattr(torch, 'no_grad'): - with torch.no_grad(): - self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data - self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data - else: - self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data - self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data - - if SBN_EPS_MODE == 'clamp': - return mean, bias_var.clamp(self.eps) ** -0.5 - elif SBN_EPS_MODE == 'plus': - return mean, (bias_var + self.eps) ** -0.5 - else: - raise ValueError('Unknown EPS mode: {}.'.format(SBN_EPS_MODE)) - - -class SynchronizedBatchNorm1d(_SynchronizedBatchNorm): - r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a - mini-batch. - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm1d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. 
- - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm - - Args: - num_features: num_features from an expected input of size - `batch_size x num_features [x width]` - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape:: - - Input: :math:`(N, C)` or :math:`(N, C, L)` - - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 2 and input.dim() != 3: - raise ValueError('expected 2D or 3D input (got {}D input)' - .format(input.dim())) - - -class SynchronizedBatchNorm2d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch - of 3d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm2d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape:: - - Input: :math:`(N, C, H, W)` - - Output: :math:`(N, C, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 4: - raise ValueError('expected 4D input (got {}D input)' - .format(input.dim())) - - -class SynchronizedBatchNorm3d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch - of 4d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm3d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm - or Spatio-temporal BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x depth x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape:: - - Input: :math:`(N, C, D, H, W)` - - Output: :math:`(N, C, D, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 5: - raise ValueError('expected 5D input (got {}D input)' - .format(input.dim())) - - -@contextlib.contextmanager -def patch_sync_batchnorm(): - import torch.nn as nn - - backup = nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d - - nn.BatchNorm1d = SynchronizedBatchNorm1d - nn.BatchNorm2d = SynchronizedBatchNorm2d - nn.BatchNorm3d = SynchronizedBatchNorm3d - - yield - - nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d = backup - - -def convert_model(module): - """Traverse the input module and its child recursively - and replace all instance of torch.nn.modules.batchnorm.BatchNorm*N*d - to SynchronizedBatchNorm*N*d - - Args: - module: the input module needs to be convert to SyncBN model - - Examples: - >>> import torch.nn as nn - >>> import torchvision - >>> # m is a standard pytorch model - >>> m = torchvision.models.resnet18(True) - >>> m = nn.DataParallel(m) - >>> # after convert, m is using SyncBN - >>> m = convert_model(m) - """ - if isinstance(module, torch.nn.DataParallel): - mod = module.module - mod = convert_model(mod) - mod = DataParallelWithCallback(mod, device_ids=module.device_ids) - return mod - - mod = module - for pth_module, sync_module in zip([torch.nn.modules.batchnorm.BatchNorm1d, - torch.nn.modules.batchnorm.BatchNorm2d, - torch.nn.modules.batchnorm.BatchNorm3d], - [SynchronizedBatchNorm1d, - SynchronizedBatchNorm2d, - SynchronizedBatchNorm3d]): - if isinstance(module, pth_module): - mod = sync_module(module.num_features, module.eps, module.momentum, module.affine) - mod.running_mean = module.running_mean - mod.running_var = module.running_var - if module.affine: - mod.weight.data = module.weight.data.clone().detach() - mod.bias.data = module.bias.data.clone().detach() - - for name, child in module.named_children(): - mod.add_module(name, convert_model(child)) - - return mod diff --git a/spaces/MohitGupta/Eng2Indic_Translitration/transliteration/utils.py b/spaces/MohitGupta/Eng2Indic_Translitration/transliteration/utils.py deleted file mode 100644 index 416a2cf1d71f790f10f96d6d65701f99cad70a5a..0000000000000000000000000000000000000000 --- a/spaces/MohitGupta/Eng2Indic_Translitration/transliteration/utils.py +++ /dev/null @@ -1,286 +0,0 @@ -import re - -LANG_CODE_TO_DISPLAY_NAME = { - # Indo-Aryan - ## Indic-scripts - 'as' : "Assamese - অসমীয়া", - 'bn' : "Bangla - বাংলা", - 'doi': "Dogri - डोगरी", - 'gom': "Goan Konkani - कोंकणी", - 'gu' : "Gujarati - ગુજરાતી", - 'hi' : "Hindi - हिंदी", - 'mai': "Maithili - मैथिली", - 'mr' : "Marathi - मराठी", - 'ne' : "Nepali - नेपाली", - 'or' : "Oriya - ଓଡ଼ିଆ", - 'pa' : "Panjabi - ਪੰਜਾਬੀ", - 'sa' : "Sanskrit - संस्कृतम्", - 'si' : "Sinhala - සිංහල", - ## Perso-Arabic scripts - 'ks' : "Kashmiri - كٲشُر", - 'pnb': "Panjabi (Western) - پن٘جابی", - 'sd' : "Sindhi - سنڌي", - 'skr': "Saraiki - سرائیکی", - 'ur' : "Urdu - اُردُو", - ## Misc - 'dv' : "Dhivehi - ދިވެހި", - - # Dravidian - 'kn' : "Kannada - ಕನ್ನಡ", - 'ml' : "Malayalam - മലയാളം", - 'ta' : "Tamil - தமிழ்", - 'te' : "Telugu - తెలుగు", - - # Tibeto-Burman - 'brx': "Boro - बड़ो", - 'mni': "Manipuri - ꯃꯤꯇꯩꯂꯣꯟ", - - # Munda - 'sat': 
"Santali - ᱥᱟᱱᱛᱟᱲᱤ", - - # Misc - 'en' : "English", -} - -PERSOARABIC_LANG_CODES = { - 'ks', - 'pnb', - 'sd', - 'skr', - 'ur', -} - -RTL_LANG_CODES = set(PERSOARABIC_LANG_CODES) -RTL_LANG_CODES.add('dv') - -# Default/Official language to script mapping -LANG_CODE_TO_SCRIPT_CODE = { - - # Indo-Aryan - "as" : "Beng", - "bn" : "Beng", - "doi" : "Deva", - "dv" : "Thaa", - "gom" : "Deva", - "gu" : "Gujr", - "hi" : "Deva", - "ks" : "Aran", - "mai" : "Deva", - "mr" : "Deva", - "ne" : "Deva", - "or" : "Orya", - "pa" : "Guru", - "pnb" : "Aran", - "sa" : "Deva", - "sd" : "Arab", - "si" : "Sinh", - "skr" : "Aran", - "ur" : "Aran", - - # Dravidian - "kn" : "Knda", - "ml" : "Mlym", - "ta" : "Taml", - "te" : "Telu", - - # Tibeto-Burman - "brx" : "Deva", - "mni" : "Mtei", - - # Munda - "sat" : "Olck", - - # Misc - "en" : "Latn", -} - -SCRIPT_CODE_TO_UNICODE_CHARS_RANGE_STR = { - # ISO 15924 codes for script names - - # North Indic - "Beng": "\u0980-\u09FF", - "Deva": "\u0900-\u097F", - "Gujr": "\u0A80-\u0AFF", - "Guru": "\u0A00-\u0A7F", - "Orya": "\u0B00-\u0B7F", - - # South Indic - "Knda": "\u0C80-\u0CFF", - "Mlym": "\u0D00-\u0D7F", - "Sinh": "\u0D80-\u0DFF", - "Taml": "\u0B80-\u0BFF", - "Telu": "\u0C00-\u0C7F", - - # Tibetic - "Mtei": "\uABC0-\uABFF", - - # Misc - "Arab": "\u0600-\u06FF\u0750-\u077F\u0870-\u089F\u08A0-\u08FF", # Perso-Arabic - "Aran": "\u0600-\u06FF\u0750-\u077F\u0870-\u089F\u08A0-\u08FF", # Perso-Arabic (Nastaliq code) - "Latn": "\u0041-\u005A\u0061-\u007A", # includes only basic/unaccented Roman - "Olck": "\u1C50-\u1C7F", - "Thaa": "\u0780-\u07BF", -} - -GOOGLE_FONTS = { - "gom": "Tiro Devanagari Marathi", - "ks" : "Noto Nastaliq Urdu", - "mni": "Noto Sans Meetei Mayek", - "mr" : "Tiro Devanagari Marathi", - "sa" : "Tiro Devanagari Sanskrit", - "sat": "Noto Sans Ol Chiki", - "sd" : "Lateef", - "ur" : "Noto Nastaliq Urdu", -} - -FALLBACK_FONTS = { - "gom": "serif", - "ks" : "serif", - "mni": "sans-serif", - "mr" : "serif", - "sa" : "serif", - "sat": "sans-serif", - "sd" : "serif", - "ur" : "serif", -} - -INDIC_TO_LATIN_PUNCT = { - ## List of all punctuations across languages - - # Brahmic - '।': '.', # Nagari - ## Archaic Indic - '॥': "..", # Sanskrit - '෴': '.', # Sinhala - ## Meetei (influenced from Burmese) - '꫰': ',', - '꯫': '.', - - # Ol Chiki - '᱾': '.', - '᱿': '..', - - # Arabic - '۔': '.', - '؟': '?', - '،': ',', - '؛': ';', - '۝': "..", -} - -INDIC_TO_LATIN_PUNCT_TRANSLATOR = str.maketrans(INDIC_TO_LATIN_PUNCT) - -NON_LATIN_FULLSTOP_LANGS = { - # Brahmic - 'as' : '।', - 'bn' : '।', - 'brx': '।', - 'doi': '।', - 'hi' : '।', - 'mai': '।', - 'mni': '꯫', - 'ne' : '।', - 'or' : '।', - 'pa' : '।', - 'sa' : '।', - 'sat': '᱾', - - # Nastaliq - 'ks' : '۔', - 'pnb': '۔', - # 'sd' : '۔', # Sindhi uses Naskh, hence use latin - 'skr': '۔', - 'ur' : '۔', -} - -ENDS_WITH_LATIN_FULLSTOP_REGEX = re.compile("(^|.*[^.])\.$") - -def nativize_latin_fullstop(text, lang_code): - if lang_code in NON_LATIN_FULLSTOP_LANGS and ENDS_WITH_LATIN_FULLSTOP_REGEX.match(text): - return text[:-1] + NON_LATIN_FULLSTOP_LANGS[lang_code] - return text - -LATIN_TO_PERSOARABIC_PUNCTUATIONS = { - # Except full-stop (since period-mark is ambiguous in usage, like fullforms) - '?': '؟', - ',': '،', - ';': '؛', -} - -LATIN_TO_PERSOARABIC_PUNC_TRANSLATOR = str.maketrans(LATIN_TO_PERSOARABIC_PUNCTUATIONS) - -SCRIPT_CODE_TO_NUMERALS = { - # ISO 15924 codes for script names - - # North Indic - "Beng": "০১২৩৪৫৬৭৮৯", - "Deva": "०१२३४५६७८९", - "Gujr": "૦૧૨૩૪૫૬૭૮૯", - "Guru": "੦੧੨੩੪੫੬੭੮੯", - "Orya": "୦୧୨୩୪୫୬୭୮୯", - - # South 
Indic - "Knda": "೦೧೨೩೪೫೬೭೮೯", - "Mlym": "൦൧൨൩൪൫൬൭൮൯", - "Sinh": "෦෧෨෩෪෫෬෭෮෯", - "Taml": "௦௧௨௩௪௫௬௭௮௯", - "Telu": "౦౧౨౩౪౫౬౭౮౯", - - # Tibetic - "Mtei": "꯰꯱꯲꯳꯴꯵꯶꯷꯸꯹", - - # Misc - "Arab": "۰۱۲۳۴۵۶۷۸۹", # Perso-Arabic numerals - "Aran": "۰۱۲۳۴۵۶۷۸۹", # Perso-Arabic numerals - "Latn": "0123456789", - "Olck": "᱐᱑᱒᱓᱔᱕᱖᱗᱘᱙", - "Thaa": "٠١٢٣٤٥٦٧٨٩", # East-Arabic numerals. (Dhivehi does code-mixing with Arabic) -} - -LANG_CODE_TO_NUMERALS = { - lang_code: SCRIPT_CODE_TO_NUMERALS[script_code] - for lang_code, script_code in LANG_CODE_TO_SCRIPT_CODE.items() -} - -INDIC_TO_STANDARD_NUMERALS_GLOBAL_MAP = {} -for lang_code, lang_numerals in LANG_CODE_TO_NUMERALS.items(): - map_dict = {lang_numeral: en_numeral for lang_numeral, en_numeral in zip(lang_numerals, LANG_CODE_TO_NUMERALS["en"])} - INDIC_TO_STANDARD_NUMERALS_GLOBAL_MAP.update(map_dict) - -INDIC_TO_STANDARD_NUMERALS_TRANSLATOR = str.maketrans(INDIC_TO_STANDARD_NUMERALS_GLOBAL_MAP) - -NATIVE_TO_LATIN_NUMERALS_TRANSLATORS = { - lang_code: str.maketrans({lang_numeral: en_numeral for lang_numeral, en_numeral in zip(lang_numerals, LANG_CODE_TO_NUMERALS["en"])}) - for lang_code, lang_numerals in LANG_CODE_TO_NUMERALS.items() - if lang_code != "en" -} - -LATIN_TO_NATIVE_NUMERALS_TRANSLATORS = { - lang_code: str.maketrans({en_numeral: lang_numeral for en_numeral, lang_numeral in zip(LANG_CODE_TO_NUMERALS["en"], lang_numerals)}) - for lang_code, lang_numerals in LANG_CODE_TO_NUMERALS.items() - if lang_code != "en" -} - -WORDFINAL_INDIC_VIRAMA_REGEX = re.compile("(\u09cd|\u094d|\u0acd|\u0a4d|\u0b4d|\u0ccd|\u0d4d|\u0dca|\u0bcd|\u0c4d|\uaaf6)$") -def hardfix_wordfinal_virama(word): - # Add ZWNJ after a word-final halanta - # Not applicable for non-Brahmic scripts (like Arabic & Ol-Chiki) - return WORDFINAL_INDIC_VIRAMA_REGEX.sub("\\1\u200c", word) - -ODIA_CONFUSING_YUKTAKSHARA_REGEX = re.compile("(\u0b4d)(ବ|ଵ|ୱ|ଯ|ୟ)") -def fix_odia_confusing_ambiguous_yuktakshara(word): - # Add ZWNJ in-between to force-render virama in conjunct - return ODIA_CONFUSING_YUKTAKSHARA_REGEX.sub("\\1\u200c\\2", word) - -LATIN_WORDFINAL_CONSONANTS_CHECKER_REGEX = re.compile(".*([bcdfghjklmnpqrstvwxyz])$") -DEVANAGARI_WORDFINAL_CONSONANTS_REGEX = re.compile("([\u0915-\u0939\u0958-\u095f\u0979-\u097c\u097e-\u097f])$") -def explicit_devanagari_wordfinal_schwa_delete(roman_word, indic_word): - if LATIN_WORDFINAL_CONSONANTS_CHECKER_REGEX.match(roman_word): - indic_word = DEVANAGARI_WORDFINAL_CONSONANTS_REGEX.sub("\\1\u094d", indic_word) - return indic_word - -# To replace last N occurences of a substring in a string -# Src: https://stackoverflow.com/questions/2556108/ -def rreplace(text, find_pattern, replace_pattern, match_count=1): - splits = text.rsplit(find_pattern, match_count) - return replace_pattern.join(splits) diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/__init__.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/__init__.py deleted file mode 100644 index 931c2ef11db4a949e6c2e95bca44e36bac1241e9..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/utils/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== diff --git a/spaces/NoCrypt/pixelization/models/c2pGen.py b/spaces/NoCrypt/pixelization/models/c2pGen.py deleted file mode 100644 index bf392da5c69e99e87dddae135b6882a782aa19f0..0000000000000000000000000000000000000000 --- a/spaces/NoCrypt/pixelization/models/c2pGen.py +++ /dev/null @@ -1,266 +0,0 @@ -from .basic_layer import * -import torchvision.models as models -import os - - - -class AliasNet(nn.Module): - def __init__(self, input_dim, output_dim, dim, n_downsample, n_res, activ='relu', pad_type='reflect'): - super(AliasNet, self).__init__() - self.RGBEnc = AliasRGBEncoder(input_dim, dim, n_downsample, n_res, "in", activ, pad_type=pad_type) - self.RGBDec = AliasRGBDecoder(self.RGBEnc.output_dim, output_dim, n_downsample, n_res, res_norm='in', - activ=activ, pad_type=pad_type) - - def forward(self, x): - x = self.RGBEnc(x) - x = self.RGBDec(x) - return x - - -class AliasRGBEncoder(nn.Module): - def __init__(self, input_dim, dim, n_downsample, n_res, norm, activ, pad_type): - super(AliasRGBEncoder, self).__init__() - self.model = [] - self.model += [AliasConvBlock(input_dim, dim, 7, 1, 3, norm=norm, activation=activ, pad_type=pad_type)] - # downsampling blocks - for i in range(n_downsample): - self.model += [AliasConvBlock(dim, 2 * dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type)] - dim *= 2 - # residual blocks - self.model += [AliasResBlocks(n_res, dim, norm=norm, activation=activ, pad_type=pad_type)] - self.model = nn.Sequential(*self.model) - self.output_dim = dim - - def forward(self, x): - return self.model(x) - - -class AliasRGBDecoder(nn.Module): - def __init__(self, dim, output_dim, n_upsample, n_res, res_norm, activ='relu', pad_type='zero'): - super(AliasRGBDecoder, self).__init__() - # self.model = [] - # # AdaIN residual blocks - # self.model += [ResBlocks(n_res, dim, res_norm, activ, pad_type=pad_type)] - # # upsampling blocks - # for i in range(n_upsample): - # self.model += [nn.Upsample(scale_factor=2, mode='nearest'), - # ConvBlock(dim, dim // 2, 5, 1, 2, norm='ln', activation=activ, pad_type=pad_type)] - # dim //= 2 - # # use reflection padding in the last conv layer - # self.model += [ConvBlock(dim, output_dim, 7, 1, 3, norm='none', activation='tanh', pad_type=pad_type)] - # self.model = nn.Sequential(*self.model) - self.Res_Blocks = AliasResBlocks(n_res, dim, res_norm, activ, pad_type=pad_type) - self.upsample_block1 = nn.Upsample(scale_factor=2, mode='nearest') - self.conv_1 = AliasConvBlock(dim, dim // 2, 5, 1, 2, norm='ln', activation=activ, pad_type=pad_type) - dim //= 2 - self.upsample_block2 = nn.Upsample(scale_factor=2, mode='nearest') - self.conv_2 = AliasConvBlock(dim, dim // 2, 5, 1, 2, norm='ln', activation=activ, pad_type=pad_type) - dim //= 2 - self.conv_3 = AliasConvBlock(dim, output_dim, 7, 1, 3, norm='none', activation='tanh', pad_type=pad_type) - - def forward(self, x): - x = self.Res_Blocks(x) - # print(x.shape) - x = self.upsample_block1(x) - # print(x.shape) - x = self.conv_1(x) - # print(x_small.shape) - x = self.upsample_block2(x) - # 
print(x.shape) - x = self.conv_2(x) - # print(x_middle.shape) - x = self.conv_3(x) - # print(x_big.shape) - return x - - -class C2PGen(nn.Module): - def __init__(self, input_dim, output_dim, dim, n_downsample, n_res, style_dim, mlp_dim, activ='relu', pad_type='reflect'): - super(C2PGen, self).__init__() - self.PBEnc = PixelBlockEncoder(input_dim, dim, style_dim, norm='none', activ=activ, pad_type=pad_type) - self.RGBEnc = RGBEncoder(input_dim, dim, n_downsample, n_res, "in", activ, pad_type=pad_type) - self.RGBDec = RGBDecoder(self.RGBEnc.output_dim, output_dim, n_downsample, n_res, res_norm='adain', - activ=activ, pad_type=pad_type) - self.MLP = MLP(style_dim, 2048, mlp_dim, 3, norm='none', activ=activ) - - def forward(self, clipart, pixelart, s=1): - feature = self.RGBEnc(clipart) - code = self.PBEnc(pixelart) - result, cellcode = self.fuse(feature, code, s) - return result#, cellcode #return cellcode when visualizing the cell size code - - def fuse(self, content, style_code, s=1): - #print("MLP input:code's shape:", style_code.shape) - adain_params = self.MLP(style_code) * s # [batch,2048] - #print("MLP output:adain_params's shape", adain_params.shape) - #self.assign_adain_params(adain_params, self.RGBDec) - images = self.RGBDec(content, adain_params) - return images, adain_params - - def assign_adain_params(self, adain_params, model): - # assign the adain_params to the AdaIN layers in model - for m in model.modules(): - if m.__class__.__name__ == "AdaptiveInstanceNorm2d": - mean = adain_params[:, :m.num_features] - std = adain_params[:, m.num_features:2 * m.num_features] - m.bias = mean.contiguous().view(-1) - m.weight = std.contiguous().view(-1) - if adain_params.size(1) > 2 * m.num_features: - adain_params = adain_params[:, 2 * m.num_features:] - - def get_num_adain_params(self, model): - # return the number of AdaIN parameters needed by the model - num_adain_params = 0 - for m in model.modules(): - if m.__class__.__name__ == "AdaptiveInstanceNorm2d": - num_adain_params += 2 * m.num_features - return num_adain_params - - -class PixelBlockEncoder(nn.Module): - def __init__(self, input_dim, dim, style_dim, norm, activ, pad_type): - super(PixelBlockEncoder, self).__init__() - vgg19 = models.vgg.vgg19() - vgg19.classifier._modules['6'] = nn.Linear(4096, 7, bias=True) - vgg19.load_state_dict(torch.load('./pixelart_vgg19.pth' if not os.environ['PIX_MODEL'] else os.environ['PIX_MODEL'], map_location=torch.device('cpu'))) - self.vgg = vgg19.features - for p in self.vgg.parameters(): - p.requires_grad = False - # vgg19 = models.vgg.vgg19(pretrained=False) - # vgg19.load_state_dict(torch.load('./vgg.pth')) - # self.vgg = vgg19.features - # for p in self.vgg.parameters(): - # p.requires_grad = False - - - self.conv1 = ConvBlock(input_dim, dim, 7, 1, 3, norm=norm, activation=activ, pad_type=pad_type) # 3->64,concat - dim = dim * 2 - self.conv2 = ConvBlock(dim, dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type) # 128->128 - dim = dim * 2 - self.conv3 = ConvBlock(dim, dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type) # 256->256 - dim = dim * 2 - self.conv4 = ConvBlock(dim, dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type) # 512->512 - dim = dim * 2 - - self.model = [] - self.model += [nn.AdaptiveAvgPool2d(1)] # global average pooling - self.model += [nn.Conv2d(dim, style_dim, 1, 1, 0)] - self.model = nn.Sequential(*self.model) - self.output_dim = dim - - def get_features(self, image, model, layers=None): - if layers is None: - layers = {'0': 'conv1_1', '5': 
'conv2_1', '10': 'conv3_1', '19': 'conv4_1'} - features = {} - x = image - # model._modules is a dictionary holding each module in the model - for name, layer in model._modules.items(): - x = layer(x) - if name in layers: - features[layers[name]] = x - return features - - def componet_enc(self, x): - # x [16,3,256,256] - # factor_img [16,7,256,256] - vgg_aux = self.get_features(x, self.vgg) # x是3通道灰度图 - #x = torch.cat([x, factor_img], dim=1) # [16,3+7,256,256] - x = self.conv1(x) # 64 256 256 - x = torch.cat([x, vgg_aux['conv1_1']], dim=1) # 128 256 256 - x = self.conv2(x) # 128 128 128 - x = torch.cat([x, vgg_aux['conv2_1']], dim=1) # 256 128 128 - x = self.conv3(x) # 256 64 64 - x = torch.cat([x, vgg_aux['conv3_1']], dim=1) # 512 64 64 - x = self.conv4(x) # 512 32 32 - x = torch.cat([x, vgg_aux['conv4_1']], dim=1) # 1024 32 32 - x = self.model(x) - return x - - def forward(self, x): - code = self.componet_enc(x) - return code - -class RGBEncoder(nn.Module): - def __init__(self, input_dim, dim, n_downsample, n_res, norm, activ, pad_type): - super(RGBEncoder, self).__init__() - self.model = [] - self.model += [ConvBlock(input_dim, dim, 7, 1, 3, norm=norm, activation=activ, pad_type=pad_type)] - # downsampling blocks - for i in range(n_downsample): - self.model += [ConvBlock(dim, 2 * dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type)] - dim *= 2 - # residual blocks - self.model += [ResBlocks(n_res, dim, norm=norm, activation=activ, pad_type=pad_type)] - self.model = nn.Sequential(*self.model) - self.output_dim = dim - - def forward(self, x): - return self.model(x) - - -class RGBDecoder(nn.Module): - def __init__(self, dim, output_dim, n_upsample, n_res, res_norm, activ='relu', pad_type='zero'): - super(RGBDecoder, self).__init__() - # self.model = [] - # # AdaIN residual blocks - # self.model += [ResBlocks(n_res, dim, res_norm, activ, pad_type=pad_type)] - # # upsampling blocks - # for i in range(n_upsample): - # self.model += [nn.Upsample(scale_factor=2, mode='nearest'), - # ConvBlock(dim, dim // 2, 5, 1, 2, norm='ln', activation=activ, pad_type=pad_type)] - # dim //= 2 - # # use reflection padding in the last conv layer - # self.model += [ConvBlock(dim, output_dim, 7, 1, 3, norm='none', activation='tanh', pad_type=pad_type)] - # self.model = nn.Sequential(*self.model) - #self.Res_Blocks = ModulationResBlocks(n_res, dim, res_norm, activ, pad_type=pad_type) - self.mod_conv_1 = ModulationConvBlock(256,256,3) - self.mod_conv_2 = ModulationConvBlock(256,256,3) - self.mod_conv_3 = ModulationConvBlock(256,256,3) - self.mod_conv_4 = ModulationConvBlock(256,256,3) - self.mod_conv_5 = ModulationConvBlock(256,256,3) - self.mod_conv_6 = ModulationConvBlock(256,256,3) - self.mod_conv_7 = ModulationConvBlock(256,256,3) - self.mod_conv_8 = ModulationConvBlock(256,256,3) - self.upsample_block1 = nn.Upsample(scale_factor=2, mode='nearest') - self.conv_1 = ConvBlock(dim, dim // 2, 5, 1, 2, norm='ln', activation=activ, pad_type=pad_type) - dim //= 2 - self.upsample_block2 = nn.Upsample(scale_factor=2, mode='nearest') - self.conv_2 = ConvBlock(dim, dim // 2, 5, 1, 2, norm='ln', activation=activ, pad_type=pad_type) - dim //= 2 - self.conv_3 = ConvBlock(dim, output_dim, 7, 1, 3, norm='none', activation='tanh', pad_type=pad_type) - - # def forward(self, x): - # residual = x - # out = self.model(x) - # out += residual - # return out - def forward(self, x, code): - residual = x - x = self.mod_conv_1(x, code[:, :256]) - x = self.mod_conv_2(x, code[:, 256*1:256*2]) - x += residual - residual = x - x = 
self.mod_conv_2(x, code[:, 256*2:256 * 3]) - x = self.mod_conv_2(x, code[:, 256*3:256 * 4]) - x += residual - residual =x - x = self.mod_conv_2(x, code[:, 256*4:256 * 5]) - x = self.mod_conv_2(x, code[:, 256*5:256 * 6]) - x += residual - residual = x - x = self.mod_conv_2(x, code[:, 256*6:256 * 7]) - x = self.mod_conv_2(x, code[:, 256*7:256 * 8]) - x += residual - # print(x.shape) - x = self.upsample_block1(x) - # print(x.shape) - x = self.conv_1(x) - # print(x_small.shape) - x = self.upsample_block2(x) - # print(x.shape) - x = self.conv_2(x) - # print(x_middle.shape) - x = self.conv_3(x) - # print(x_big.shape) - return x - diff --git a/spaces/Nultx/VITS-TTS/data_utils.py b/spaces/Nultx/VITS-TTS/data_utils.py deleted file mode 100644 index e9246c6c8f2ff3c37a7f8529ea1593c7f80f887e..0000000000000000000000000000000000000000 --- a/spaces/Nultx/VITS-TTS/data_utils.py +++ /dev/null @@ -1,393 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import commons -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import text_to_sequence, cleaned_text_to_sequence - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_and_text) - self._filter() - - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - audiopath, text = audiopath_and_text[0], audiopath_and_text[1] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - return (text, spec, wav) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - 
center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths - - -"""Multi speaker version""" -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, sid, text in self.audiopaths_sid_text: - audiopath = "E:/uma_voice/" + audiopath - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_sid_text_new.append([audiopath, sid, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - return (text, spec, wav, sid) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = 
max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i+1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j 
in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid+1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/OAOA/DifFace/models/losses.py b/spaces/OAOA/DifFace/models/losses.py deleted file mode 100644 index 251e42e4f36a31bb5e1aeda874b3a45d722000a2..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/models/losses.py +++ /dev/null @@ -1,77 +0,0 @@ -""" -Helpers for various likelihood-based losses. These are ported from the original -Ho et al. diffusion models codebase: -https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/utils.py -""" - -import numpy as np - -import torch as th - - -def normal_kl(mean1, logvar1, mean2, logvar2): - """ - Compute the KL divergence between two gaussians. - - Shapes are automatically broadcasted, so batches can be compared to - scalars, among other use cases. - """ - tensor = None - for obj in (mean1, logvar1, mean2, logvar2): - if isinstance(obj, th.Tensor): - tensor = obj - break - assert tensor is not None, "at least one argument must be a Tensor" - - # Force variances to be Tensors. Broadcasting helps convert scalars to - # Tensors, but it does not work for th.exp(). - logvar1, logvar2 = [ - x if isinstance(x, th.Tensor) else th.tensor(x).to(tensor) - for x in (logvar1, logvar2) - ] - - return 0.5 * ( - -1.0 - + logvar2 - - logvar1 - + th.exp(logvar1 - logvar2) - + ((mean1 - mean2) ** 2) * th.exp(-logvar2) - ) - - -def approx_standard_normal_cdf(x): - """ - A fast approximation of the cumulative distribution function of the - standard normal. - """ - return 0.5 * (1.0 + th.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * th.pow(x, 3)))) - - -def discretized_gaussian_log_likelihood(x, *, means, log_scales): - """ - Compute the log-likelihood of a Gaussian distribution discretizing to a - given image. - - :param x: the target images. It is assumed that this was uint8 values, - rescaled to the range [-1, 1]. - :param means: the Gaussian mean Tensor. - :param log_scales: the Gaussian log stddev Tensor. - :return: a tensor like x of log probabilities (in nats). 
- """ - assert x.shape == means.shape == log_scales.shape - centered_x = x - means - inv_stdv = th.exp(-log_scales) - plus_in = inv_stdv * (centered_x + 1.0 / 255.0) - cdf_plus = approx_standard_normal_cdf(plus_in) - min_in = inv_stdv * (centered_x - 1.0 / 255.0) - cdf_min = approx_standard_normal_cdf(min_in) - log_cdf_plus = th.log(cdf_plus.clamp(min=1e-12)) - log_one_minus_cdf_min = th.log((1.0 - cdf_min).clamp(min=1e-12)) - cdf_delta = cdf_plus - cdf_min - log_probs = th.where( - x < -0.999, - log_cdf_plus, - th.where(x > 0.999, log_one_minus_cdf_min, th.log(cdf_delta.clamp(min=1e-12))), - ) - assert log_probs.shape == x.shape - return log_probs diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/roberta/model_gottbert.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/roberta/model_gottbert.py deleted file mode 100644 index 2e8c66354ac7ce7309226bb091a7baa4776fbfdc..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/roberta/model_gottbert.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -GottBERT: a pure German Language Model -""" - -from fairseq.models import register_model - -from .hub_interface import RobertaHubInterface -from .model import RobertaModel - - -@register_model('gottbert') -class GottbertModel(RobertaModel): - - @classmethod - def hub_models(cls): - return { - 'gottbert-base': 'https://dl.gottbert.de/fairseq/models/gottbert-base.tar.gz', - } - - @classmethod - def from_pretrained(cls, - model_name_or_path, - checkpoint_file='model.pt', - data_name_or_path='.', - bpe='hf_byte_bpe', - bpe_vocab='vocab.json', - bpe_merges='merges.txt', - bpe_add_prefix_space=False, - **kwargs - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - bpe=bpe, - load_checkpoint_heads=True, - bpe_vocab=bpe_vocab, - bpe_merges=bpe_merges, - bpe_add_prefix_space=bpe_add_prefix_space, - **kwargs, - ) - return RobertaHubInterface(x['args'], x['task'], x['models'][0]) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/adadelta.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/adadelta.py deleted file mode 100644 index f1a21549770f0904a6a40a42ff7eb52811f1bfbe..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/adadelta.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.optim - -from . 
import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("adadelta") -class Adadelta(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = torch.optim.Adadelta(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--adadelta-rho', type=float, default=0.9, metavar='RHO', - help='coefficient used for computing a running average of squared gradients') - parser.add_argument('--adadelta-eps', type=float, default=1e-6, metavar='EPS', - help='term added to the denominator to improve numerical stability') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - parser.add_argument('--anneal-eps', action='store_true', help='flag to anneal eps') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.args.lr[0], - "rho": self.args.adadelta_rho, - "eps": self.args.adadelta_eps, - "weight_decay": self.args.weight_decay, - } - - @property - def supports_flat_params(self): - return True diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/noisychannel/rerank.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/noisychannel/rerank.py deleted file mode 100644 index bb80d11a67cd75764a89f6f41915b0348ae96e92..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/noisychannel/rerank.py +++ /dev/null @@ -1,428 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math -from multiprocessing import Pool - -import numpy as np -from fairseq import options -from fairseq.data import dictionary -from fairseq.scoring import bleu - -from examples.noisychannel import ( - rerank_generate, - rerank_options, - rerank_score_bw, - rerank_score_lm, - rerank_utils, -) - - -def score_target_hypo( - args, a, b, c, lenpen, target_outfile, hypo_outfile, write_hypos, normalize -): - - print("lenpen", lenpen, "weight1", a, "weight2", b, "weight3", c) - gen_output_lst, bitext1_lst, bitext2_lst, lm_res_lst = load_score_files(args) - dict = dictionary.Dictionary() - scorer = scorer = bleu.Scorer( - bleu.BleuConfig( - pad=dict.pad(), - eos=dict.eos(), - unk=dict.unk(), - ) - ) - - ordered_hypos = {} - ordered_targets = {} - - for shard_id in range(len(bitext1_lst)): - bitext1 = bitext1_lst[shard_id] - bitext2 = bitext2_lst[shard_id] - gen_output = gen_output_lst[shard_id] - lm_res = lm_res_lst[shard_id] - - total = len(bitext1.rescore_source.keys()) - source_lst = [] - hypo_lst = [] - score_lst = [] - reference_lst = [] - j = 1 - best_score = -math.inf - - for i in range(total): - # length is measured in terms of words, not bpe tokens, since models may not share the same bpe - target_len = len(bitext1.rescore_hypo[i].split()) - - if lm_res is not None: - lm_score = lm_res.score[i] - else: - lm_score = 0 - - if bitext2 is not None: - bitext2_score = bitext2.rescore_score[i] - bitext2_backwards = bitext2.backwards - else: - bitext2_score = None - bitext2_backwards = None - - score = rerank_utils.get_score( - a, - b, - c, - target_len, - bitext1.rescore_score[i], - bitext2_score, - lm_score=lm_score, - lenpen=lenpen, - src_len=bitext1.source_lengths[i], - tgt_len=bitext1.target_lengths[i], - bitext1_backwards=bitext1.backwards, - bitext2_backwards=bitext2_backwards, - normalize=normalize, - ) - - if score > best_score: - best_score = score - best_hypo = bitext1.rescore_hypo[i] - - if j == gen_output.num_hypos[i] or j == args.num_rescore: - j = 1 - hypo_lst.append(best_hypo) - score_lst.append(best_score) - source_lst.append(bitext1.rescore_source[i]) - reference_lst.append(bitext1.rescore_target[i]) - - best_score = -math.inf - best_hypo = "" - else: - j += 1 - - gen_keys = list(sorted(gen_output.no_bpe_target.keys())) - - for key in range(len(gen_keys)): - if args.prefix_len is None: - assert hypo_lst[key] in gen_output.no_bpe_hypo[gen_keys[key]], ( - "pred and rescore hypo mismatch: i: " - + str(key) - + ", " - + str(hypo_lst[key]) - + str(gen_keys[key]) - + str(gen_output.no_bpe_hypo[key]) - ) - sys_tok = dict.encode_line(hypo_lst[key]) - ref_tok = dict.encode_line(gen_output.no_bpe_target[gen_keys[key]]) - scorer.add(ref_tok, sys_tok) - - else: - full_hypo = rerank_utils.get_full_from_prefix( - hypo_lst[key], gen_output.no_bpe_hypo[gen_keys[key]] - ) - sys_tok = dict.encode_line(full_hypo) - ref_tok = dict.encode_line(gen_output.no_bpe_target[gen_keys[key]]) - scorer.add(ref_tok, sys_tok) - - # if only one set of hyper parameters is provided, write the predictions to a file - if write_hypos: - # recover the orinal ids from n best list generation - for key in range(len(gen_output.no_bpe_target)): - if args.prefix_len is None: - assert hypo_lst[key] in gen_output.no_bpe_hypo[gen_keys[key]], ( - "pred and rescore hypo mismatch:" - + "i:" - + str(key) - + str(hypo_lst[key]) - + str(gen_output.no_bpe_hypo[key]) - ) - ordered_hypos[gen_keys[key]] = hypo_lst[key] - ordered_targets[gen_keys[key]] = gen_output.no_bpe_target[ - gen_keys[key] - ] - - else: - full_hypo = 
rerank_utils.get_full_from_prefix( - hypo_lst[key], gen_output.no_bpe_hypo[gen_keys[key]] - ) - ordered_hypos[gen_keys[key]] = full_hypo - ordered_targets[gen_keys[key]] = gen_output.no_bpe_target[ - gen_keys[key] - ] - - # write the hypos in the original order from nbest list generation - if args.num_shards == (len(bitext1_lst)): - with open(target_outfile, "w") as t: - with open(hypo_outfile, "w") as h: - for key in range(len(ordered_hypos)): - t.write(ordered_targets[key]) - h.write(ordered_hypos[key]) - - res = scorer.result_string(4) - if write_hypos: - print(res) - score = rerank_utils.parse_bleu_scoring(res) - return score - - -def match_target_hypo(args, target_outfile, hypo_outfile): - """combine scores from the LM and bitext models, and write the top scoring hypothesis to a file""" - if len(args.weight1) == 1: - res = score_target_hypo( - args, - args.weight1[0], - args.weight2[0], - args.weight3[0], - args.lenpen[0], - target_outfile, - hypo_outfile, - True, - args.normalize, - ) - rerank_scores = [res] - else: - print("launching pool") - with Pool(32) as p: - rerank_scores = p.starmap( - score_target_hypo, - [ - ( - args, - args.weight1[i], - args.weight2[i], - args.weight3[i], - args.lenpen[i], - target_outfile, - hypo_outfile, - False, - args.normalize, - ) - for i in range(len(args.weight1)) - ], - ) - - if len(rerank_scores) > 1: - best_index = np.argmax(rerank_scores) - best_score = rerank_scores[best_index] - print("best score", best_score) - print("best lenpen", args.lenpen[best_index]) - print("best weight1", args.weight1[best_index]) - print("best weight2", args.weight2[best_index]) - print("best weight3", args.weight3[best_index]) - return ( - args.lenpen[best_index], - args.weight1[best_index], - args.weight2[best_index], - args.weight3[best_index], - best_score, - ) - - else: - return ( - args.lenpen[0], - args.weight1[0], - args.weight2[0], - args.weight3[0], - rerank_scores[0], - ) - - -def load_score_files(args): - if args.all_shards: - shard_ids = list(range(args.num_shards)) - else: - shard_ids = [args.shard_id] - - gen_output_lst = [] - bitext1_lst = [] - bitext2_lst = [] - lm_res1_lst = [] - - for shard_id in shard_ids: - using_nbest = args.nbest_list is not None - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - - rerank1_is_gen = ( - args.gen_model == args.score_model1 and args.source_prefix_frac is None - ) - rerank2_is_gen = ( - args.gen_model == args.score_model2 and args.source_prefix_frac is None - ) - - score1_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model1_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards1, - ) - if args.score_model2 is not None: - score2_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model2_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards2, - ) - if args.language_model is not None: - lm_score_file = rerank_utils.rescore_file_name( - pre_gen, args.prefix_len, args.lm_name, lm_file=True - ) - - # get gen output - predictions_bpe_file = pre_gen + "/generate_output_bpe.txt" - if using_nbest: - 
print("Using predefined n-best list from interactive.py") - predictions_bpe_file = args.nbest_list - gen_output = rerank_utils.BitextOutputFromGen( - predictions_bpe_file, - bpe_symbol=args.post_process, - nbest=using_nbest, - prefix_len=args.prefix_len, - target_prefix_frac=args.target_prefix_frac, - ) - - if rerank1_is_gen: - bitext1 = gen_output - else: - bitext1 = rerank_utils.BitextOutput( - score1_file, - args.backwards1, - args.right_to_left1, - args.post_process, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - - if args.score_model2 is not None or args.nbest_list is not None: - if rerank2_is_gen: - bitext2 = gen_output - else: - bitext2 = rerank_utils.BitextOutput( - score2_file, - args.backwards2, - args.right_to_left2, - args.post_process, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - - assert ( - bitext2.source_lengths == bitext1.source_lengths - ), "source lengths for rescoring models do not match" - assert ( - bitext2.target_lengths == bitext1.target_lengths - ), "target lengths for rescoring models do not match" - else: - if args.diff_bpe: - assert args.score_model2 is None - bitext2 = gen_output - else: - bitext2 = None - - if args.language_model is not None: - lm_res1 = rerank_utils.LMOutput( - lm_score_file, - args.lm_dict, - args.prefix_len, - args.post_process, - args.target_prefix_frac, - ) - else: - lm_res1 = None - - gen_output_lst.append(gen_output) - bitext1_lst.append(bitext1) - bitext2_lst.append(bitext2) - lm_res1_lst.append(lm_res1) - return gen_output_lst, bitext1_lst, bitext2_lst, lm_res1_lst - - -def rerank(args): - if type(args.lenpen) is not list: - args.lenpen = [args.lenpen] - if type(args.weight1) is not list: - args.weight1 = [args.weight1] - if type(args.weight2) is not list: - args.weight2 = [args.weight2] - if type(args.weight3) is not list: - args.weight3 = [args.weight3] - if args.all_shards: - shard_ids = list(range(args.num_shards)) - else: - shard_ids = [args.shard_id] - - for shard_id in shard_ids: - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - rerank_generate.gen_and_reprocess_nbest(args) - rerank_score_bw.score_bw(args) - rerank_score_lm.score_lm(args) - - if args.write_hypos is None: - write_targets = pre_gen + "/matched_targets" - write_hypos = pre_gen + "/matched_hypos" - else: - write_targets = args.write_hypos + "_targets" + args.gen_subset - write_hypos = args.write_hypos + "_hypos" + args.gen_subset - - if args.all_shards: - write_targets += "_all_shards" - write_hypos += "_all_shards" - - ( - best_lenpen, - best_weight1, - best_weight2, - best_weight3, - best_score, - ) = match_target_hypo(args, write_targets, write_hypos) - - return best_lenpen, best_weight1, best_weight2, best_weight3, best_score - - -def cli_main(): - parser = rerank_options.get_reranking_parser() - args = options.parse_args_and_arch(parser) - rerank(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py deleted file mode 100644 index 
7faae73119321af0b34fe8e26499a2ef5577291a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -for file in os.listdir(os.path.dirname(__file__)): - if file.endswith(".py") and not file.startswith("_"): - criterion_name = file[: file.find(".py")] - importlib.import_module( - "examples.speech_text_joint_to_text.criterions." + criterion_name - ) diff --git a/spaces/OFA-Sys/ONE-PEACE_Multimodal_Retrieval/style.css b/spaces/OFA-Sys/ONE-PEACE_Multimodal_Retrieval/style.css deleted file mode 100644 index 99b158ebfd6408132556e8accb5ae72f692264d2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/ONE-PEACE_Multimodal_Retrieval/style.css +++ /dev/null @@ -1,38 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 32px; - margin-top: 0; - text-align: center; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} - -iframe { - height:90vh; - width:90vw; - } - -img{ - max-width: 100%; -} \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/utils.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/utils.py deleted file mode 100644 index c9efa287fc71315f633347023b390fe4ce57913a..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/utils.py +++ /dev/null @@ -1,38 +0,0 @@ -import cv2 -import torch -from torch import nn -from detectron2.utils.comm import get_world_size -from detectron2.structures import pairwise_iou, Boxes -# from .data import CenterNetCrop -import torch.nn.functional as F -import numpy as np -from detectron2.structures import Boxes, ImageList, Instances - -__all__ = ['reduce_sum', '_transpose'] - -INF = 1000000000 - -def _transpose(training_targets, num_loc_list): - ''' - This function is used to transpose image first training targets to - level first ones - :return: level first training targets - ''' - for im_i in range(len(training_targets)): - training_targets[im_i] = torch.split( - training_targets[im_i], num_loc_list, dim=0) - - targets_level_first = [] - for targets_per_level in zip(*training_targets): - targets_level_first.append( - torch.cat(targets_per_level, dim=0)) - return targets_level_first - - -def reduce_sum(tensor): - world_size = get_world_size() - if world_size < 2: - return tensor - tensor = tensor.clone() - torch.distributed.all_reduce(tensor, op=torch.distributed.ReduceOp.SUM) - return tensor \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/data/test_coco_evaluation.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/data/test_coco_evaluation.py deleted file mode 100644 index 
964f00284df64d3378ebfe32913c07deb5a1f819..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/data/test_coco_evaluation.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import contextlib -import copy -import io -import json -import numpy as np -import os -import tempfile -import unittest -import torch -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval - -from detectron2.data import DatasetCatalog -from detectron2.evaluation import COCOEvaluator -from detectron2.evaluation.fast_eval_api import COCOeval_opt -from detectron2.structures import Boxes, Instances - - -class TestCOCOeval(unittest.TestCase): - def test_fast_eval(self): - # A small set of images/categories from COCO val - # fmt: off - detections = [{"image_id": 139, "category_id": 1, "bbox": [417.3332824707031, 159.27003479003906, 47.66064453125, 143.00193786621094], "score": 0.9949821829795837, "segmentation": {"size": [426, 640], "counts": "Tc`52W=3N0N4aNN^E7]:4XE1g:8kDMT;U100000001O1gE[Nk8h1dFiNY9Z1aFkN]9g2J3NdN`FlN`9S1cFRN07]9g1bFoM6;X9c1cFoM=8R9g1bFQN>3U9Y30O01OO1O001N2O1N1O4L4L5UNoE3V:CVF6Q:@YF9l9@ZF 0 else 0.0 - msg = "%s: comparing COCO APIs, %s differs by %f" % (name, k, abs_diff) - self.assertTrue(abs_diff < 1e-4, msg=msg) - - def test_unknown_category(self): - dataset = "coco_2017_val_100" - evaluator = COCOEvaluator(dataset) - evaluator.reset() - inputs = DatasetCatalog.get(dataset)[:2] - pred = Instances((100, 100)) - pred.pred_boxes = Boxes(torch.rand(2, 4)) - pred.scores = torch.rand(2) - pred.pred_classes = torch.tensor([10, 80]) - output = {"instances": pred} - evaluator.process(inputs, [output, output]) - with self.assertRaises(AssertionError): - evaluator.evaluate() diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/models/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OpenMotionLab/MotionGPT/pyrender/tests/unit/test_nodes.py b/spaces/OpenMotionLab/MotionGPT/pyrender/tests/unit/test_nodes.py deleted file mode 100644 index 9857c8221b7f6fb8530699bdf5593f8f0b74e152..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/pyrender/tests/unit/test_nodes.py +++ /dev/null @@ -1,124 +0,0 @@ -import numpy as np -import pytest -from trimesh import transformations - -from pyrender import (DirectionalLight, PerspectiveCamera, Mesh, Node) - - -def test_nodes(): - - x = Node() - assert x.name is None - assert x.camera is None - assert x.children == [] - assert x.skin is None - assert np.allclose(x.matrix, np.eye(4)) - assert x.mesh is None - assert np.allclose(x.rotation, [0,0,0,1]) - assert np.allclose(x.scale, np.ones(3)) - assert np.allclose(x.translation, np.zeros(3)) - assert x.weights is None - assert x.light is None - - x.name = 'node' - - # Test node light/camera/mesh tests - c = PerspectiveCamera(yfov=2.0) - m = Mesh([]) - d = DirectionalLight() - x.camera = c - assert x.camera == c - with pytest.raises(TypeError): - x.camera = m - x.camera = d - x.camera = None - x.mesh = m - assert x.mesh == m - with pytest.raises(TypeError): - x.mesh = c - x.mesh = d - x.light = d - assert x.light == d - with pytest.raises(TypeError): - x.light = m - x.light = c - - # Test transformations getters/setters/etc... 
- # Set up test values - x = np.array([1.0, 0.0, 0.0]) - y = np.array([0.0, 1.0, 0.0]) - t = np.array([1.0, 2.0, 3.0]) - s = np.array([0.5, 2.0, 1.0]) - - Mx = transformations.rotation_matrix(np.pi / 2.0, x) - qx = np.roll(transformations.quaternion_about_axis(np.pi / 2.0, x), -1) - Mxt = Mx.copy() - Mxt[:3,3] = t - S = np.eye(4) - S[:3,:3] = np.diag(s) - Mxts = Mxt.dot(S) - - My = transformations.rotation_matrix(np.pi / 2.0, y) - qy = np.roll(transformations.quaternion_about_axis(np.pi / 2.0, y), -1) - Myt = My.copy() - Myt[:3,3] = t - - x = Node(matrix=Mx) - assert np.allclose(x.matrix, Mx) - assert np.allclose(x.rotation, qx) - assert np.allclose(x.translation, np.zeros(3)) - assert np.allclose(x.scale, np.ones(3)) - - x.matrix = My - assert np.allclose(x.matrix, My) - assert np.allclose(x.rotation, qy) - assert np.allclose(x.translation, np.zeros(3)) - assert np.allclose(x.scale, np.ones(3)) - x.translation = t - assert np.allclose(x.matrix, Myt) - assert np.allclose(x.rotation, qy) - x.rotation = qx - assert np.allclose(x.matrix, Mxt) - x.scale = s - assert np.allclose(x.matrix, Mxts) - - x = Node(matrix=Mxt) - assert np.allclose(x.matrix, Mxt) - assert np.allclose(x.rotation, qx) - assert np.allclose(x.translation, t) - assert np.allclose(x.scale, np.ones(3)) - - x = Node(matrix=Mxts) - assert np.allclose(x.matrix, Mxts) - assert np.allclose(x.rotation, qx) - assert np.allclose(x.translation, t) - assert np.allclose(x.scale, s) - - # Individual element getters - x.scale[0] = 0 - assert np.allclose(x.scale[0], 0) - - x.translation[0] = 0 - assert np.allclose(x.translation[0], 0) - - x.matrix = np.eye(4) - x.matrix[0,0] = 500 - assert x.matrix[0,0] == 1.0 - - # Failures - with pytest.raises(ValueError): - x.matrix = 5 * np.eye(4) - with pytest.raises(ValueError): - x.matrix = np.eye(5) - with pytest.raises(ValueError): - x.matrix = np.eye(4).dot([5,1,1,1]) - with pytest.raises(ValueError): - x.rotation = np.array([1,2]) - with pytest.raises(ValueError): - x.rotation = np.array([1,2,3]) - with pytest.raises(ValueError): - x.rotation = np.array([1,2,3,4]) - with pytest.raises(ValueError): - x.translation = np.array([1,2,3,4]) - with pytest.raises(ValueError): - x.scale = np.array([1,2,3,4]) diff --git a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/inferencer.py b/spaces/OptimalScale/Robin-33b/lmflow/pipeline/inferencer.py deleted file mode 100644 index 76b994d826399676c4d33f662ef2a1b6401a0a75..0000000000000000000000000000000000000000 --- a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/inferencer.py +++ /dev/null @@ -1,194 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -"""The Inferencer class simplifies the process of model inferencing.""" - -import os -import torch -import wandb -import deepspeed -import sys -import numpy as np -import datetime -import json - -from transformers import AutoConfig -import torch.distributed as dist - -from lmflow.args import DatasetArguments -from lmflow.datasets.dataset import Dataset -from lmflow.pipeline.base_pipeline import BasePipeline -from lmflow.models.hf_decoder_model import HFDecoderModel -from lmflow.utils.data_utils import set_random_seed, batchlize, answer_extraction -os.environ["TOKENIZERS_PARALLELISM"] = "false" # To avoid warnings about parallelism in tokenizers - -def rstrip_partial_utf8(string): - return string.replace("\ufffd", "") - -class Inferencer(BasePipeline): - """ - Initializes the `Inferencer` class with given arguments. - - Parameters - ------------ - model_args : ModelArguments object. 
- Contains the arguments required to load the model. - - data_args : DatasetArguments object. - Contains the arguments required to load the dataset. - - inferencer_args : InferencerArguments object. - Contains the arguments required to perform inference. - - - """ - def __init__(self, model_args, data_args, inferencer_args): - self.data_args = data_args - self.inferencer_args = inferencer_args - self.model_args = model_args - - set_random_seed(self.inferencer_args.random_seed) - - self.local_rank = int(os.getenv("LOCAL_RANK", "0")) - self.world_size = int(os.getenv("WORLD_SIZE", "1")) - if inferencer_args.device == "gpu": - torch.cuda.set_device(self.local_rank) # NOTE: cpu-only machine will have error - deepspeed.init_distributed() - else: - os.environ["MASTER_ADDR"] = "localhost" - os.environ["MASTER_PORT"] = "15000" - dist.init_process_group( - "gloo", rank=self.local_rank, world_size=self.world_size - ) - - self.config = AutoConfig.from_pretrained(model_args.model_name_or_path, trust_remote_code=True) - try: - self.model_hidden_size = self.config.hidden_size - except: - print("Error in setting hidden size, use the default size 1024") - self.model_hidden_size = 1024 # gpt2 seems do not have hidden_size in config - - - def create_dataloader(self, dataset: Dataset): - data_dict = dataset.to_dict() - inputs = [ instance["text"] for instance in data_dict["instances"] ] - dataset_size = len(inputs) - dataset_buf = [] - for idx in range(dataset_size): - dataset_buf.append({ - "input": inputs[idx], - "input_idx": idx - }) - - dataloader = batchlize( - dataset_buf, - batch_size=1, - random_shuffle=False, - ) - return dataloader, dataset_size - - - def inference( - self, - model, - dataset: Dataset, - max_new_tokens: int=100, - temperature: float=0.0, - prompt_structure: str='{input}', - ): - """ - Perform inference for a model - - Parameters - ------------ - model : TunableModel object. - TunableModel to perform inference - - dataset : Dataset object. - - - Returns: - - output_dataset: Dataset object. 
- """ - if dataset.get_type() != "text_only": - raise NotImplementedError( - 'input dataset should have type "text_only"' - ) - - dataloader, data_size = self.create_dataloader(dataset) - - # The output dataset - output_dict = { - "type": "text_only", - "instances": [ - ] - } - - for batch_index, batch in enumerate(dataloader): - current_batch = batch[0] # batch size is 1 - - input = prompt_structure.format(input=current_batch['input']) - - if self.inferencer_args.device == "gpu": - inputs = model.encode(input, return_tensors="pt").to(device=self.local_rank) - elif self.inferencer_args.device == "cpu": - inputs = model.encode(input, return_tensors="pt").to(device='cpu') - else: - raise NotImplementedError( - f"device \"{self.inferencer_args.device}\" is not supported" - ) - - outputs = model.inference( - inputs, - max_new_tokens=max_new_tokens, - temperature=temperature, - repetition_penalty=1.0, - ) - text_out = model.decode(outputs[0], skip_special_tokens=True) - - # only return the generation, trucating the input - prompt_length = len(model.decode(inputs[0], skip_special_tokens=True,)) - text_out = text_out[prompt_length:] - output_dict["instances"].append({ "text": text_out }) - - output_dataset = Dataset(DatasetArguments(dataset_path = None)) - output_dataset = output_dataset.from_dict(output_dict) - - return output_dataset - - def stream_inference(self, context, model, max_new_tokens, token_per_step, temperature, end_string, input_dataset): - response = "" - history = [] - if "ChatGLMModel" in self.config.architectures: - for response, history in model.get_backend_model().stream_chat(model.get_tokenizer(), context, history=history): - response = rstrip_partial_utf8(response) - yield response, False - else: - for _ in range(0, max_new_tokens // token_per_step): - output_dataset = self.inference( - model=model, - dataset=input_dataset, - max_new_tokens=token_per_step, - temperature=temperature, - ) - - new_append_text = output_dataset.to_dict()["instances"][0]["text"] - new_append_text = rstrip_partial_utf8(new_append_text) - response += new_append_text - - input_dict = input_dataset.to_dict() - input_dict["instances"][0]["text"] += new_append_text - - input_dataset = input_dataset.from_dict(input_dict) - - flag_break = False - try: - index = response.index(end_string) - flag_break = True - except ValueError: - response += end_string - index = response.index(end_string) - - response = response[:index] - - yield response, flag_break diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/wrappers.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/wrappers.py deleted file mode 100644 index 8aebf67bf52355a513f21756ee74fe510902d075..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/wrappers.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -r"""Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/wrappers.py # noqa: E501 - -Wrap some nn modules to support empty tensor input. Currently, these wrappers -are mainly used in mask heads like fcn_mask_head and maskiou_heads since mask -heads are trained on only positive RoIs. 
-""" -import math - -import torch -import torch.nn as nn -from torch.nn.modules.utils import _pair, _triple - -from .registry import CONV_LAYERS, UPSAMPLE_LAYERS - -if torch.__version__ == 'parrots': - TORCH_VERSION = torch.__version__ -else: - # torch.__version__ could be 1.3.1+cu92, we only need the first two - # for comparison - TORCH_VERSION = tuple(int(x) for x in torch.__version__.split('.')[:2]) - - -def obsolete_torch_version(torch_version, version_threshold): - return torch_version == 'parrots' or torch_version <= version_threshold - - -class NewEmptyTensorOp(torch.autograd.Function): - - @staticmethod - def forward(ctx, x, new_shape): - ctx.shape = x.shape - return x.new_empty(new_shape) - - @staticmethod - def backward(ctx, grad): - shape = ctx.shape - return NewEmptyTensorOp.apply(grad, shape), None - - -@CONV_LAYERS.register_module('Conv', force=True) -class Conv2d(nn.Conv2d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d in zip(x.shape[-2:], self.kernel_size, - self.padding, self.stride, self.dilation): - o = (i + 2 * p - (d * (k - 1) + 1)) // s + 1 - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. - dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -@CONV_LAYERS.register_module('Conv3d', force=True) -class Conv3d(nn.Conv3d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d in zip(x.shape[-3:], self.kernel_size, - self.padding, self.stride, self.dilation): - o = (i + 2 * p - (d * (k - 1) + 1)) // s + 1 - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. - dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -@CONV_LAYERS.register_module() -@CONV_LAYERS.register_module('deconv') -@UPSAMPLE_LAYERS.register_module('deconv', force=True) -class ConvTranspose2d(nn.ConvTranspose2d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d, op in zip(x.shape[-2:], self.kernel_size, - self.padding, self.stride, - self.dilation, self.output_padding): - out_shape.append((i - 1) * s - 2 * p + (d * (k - 1) + 1) + op) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. - dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -@CONV_LAYERS.register_module() -@CONV_LAYERS.register_module('deconv3d') -@UPSAMPLE_LAYERS.register_module('deconv3d', force=True) -class ConvTranspose3d(nn.ConvTranspose3d): - - def forward(self, x): - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)): - out_shape = [x.shape[0], self.out_channels] - for i, k, p, s, d, op in zip(x.shape[-3:], self.kernel_size, - self.padding, self.stride, - self.dilation, self.output_padding): - out_shape.append((i - 1) * s - 2 * p + (d * (k - 1) + 1) + op) - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. 
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) - - -class MaxPool2d(nn.MaxPool2d): - - def forward(self, x): - # PyTorch 1.9 does not support empty tensor inference yet - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)): - out_shape = list(x.shape[:2]) - for i, k, p, s, d in zip(x.shape[-2:], _pair(self.kernel_size), - _pair(self.padding), _pair(self.stride), - _pair(self.dilation)): - o = (i + 2 * p - (d * (k - 1) + 1)) / s + 1 - o = math.ceil(o) if self.ceil_mode else math.floor(o) - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - return empty - - return super().forward(x) - - -class MaxPool3d(nn.MaxPool3d): - - def forward(self, x): - # PyTorch 1.9 does not support empty tensor inference yet - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)): - out_shape = list(x.shape[:2]) - for i, k, p, s, d in zip(x.shape[-3:], _triple(self.kernel_size), - _triple(self.padding), - _triple(self.stride), - _triple(self.dilation)): - o = (i + 2 * p - (d * (k - 1) + 1)) / s + 1 - o = math.ceil(o) if self.ceil_mode else math.floor(o) - out_shape.append(o) - empty = NewEmptyTensorOp.apply(x, out_shape) - return empty - - return super().forward(x) - - -class Linear(torch.nn.Linear): - - def forward(self, x): - # empty tensor forward of Linear layer is supported in Pytorch 1.6 - if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 5)): - out_shape = [x.shape[0], self.out_features] - empty = NewEmptyTensorOp.apply(x, out_shape) - if self.training: - # produce dummy gradient to avoid DDP warning. - dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + dummy - else: - return empty - - return super().forward(x) diff --git a/spaces/PKUWilliamYang/StyleGANEX/utils/inference_utils.py b/spaces/PKUWilliamYang/StyleGANEX/utils/inference_utils.py deleted file mode 100644 index 4e993cac404d3e0d6749cad54005179a7b375a10..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/utils/inference_utils.py +++ /dev/null @@ -1,182 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -from PIL import Image -import cv2 -import random -import math -import argparse -import torch -from torch.utils import data -from torch.nn import functional as F -from torch import autograd -from torch.nn import init -import torchvision.transforms as transforms -from scripts.align_all_parallel import get_landmark - -def visualize(img_arr, dpi): - plt.figure(figsize=(10,10),dpi=dpi) - plt.imshow(((img_arr.detach().cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8)) - plt.axis('off') - plt.show() - -def save_image(img, filename): - tmp = ((img.detach().cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8) - cv2.imwrite(filename, cv2.cvtColor(tmp, cv2.COLOR_RGB2BGR)) - -def load_image(filename): - transform = transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]), - ]) - - img = Image.open(filename) - img = transform(img) - return img.unsqueeze(dim=0) - -def get_video_crop_parameter(filepath, predictor, padding=[256,256,256,256]): - if type(filepath) == str: - img = dlib.load_rgb_image(filepath) - else: - img = filepath - lm = get_landmark(img, predictor) - if lm is None: - return None - lm_chin = lm[0 : 17] # left-right - lm_eyebrow_left = lm[17 : 22] # left-right - lm_eyebrow_right = lm[22 : 27] # left-right - lm_nose = lm[27 : 31] # top-down - lm_nostrils = 
lm[31 : 36] # top-down - lm_eye_left = lm[36 : 42] # left-clockwise - lm_eye_right = lm[42 : 48] # left-clockwise - lm_mouth_outer = lm[48 : 60] # left-clockwise - lm_mouth_inner = lm[60 : 68] # left-clockwise - - scale = 64. / (np.mean(lm_eye_right[:,0])-np.mean(lm_eye_left[:,0])) - center = ((np.mean(lm_eye_right, axis=0)+np.mean(lm_eye_left, axis=0)) / 2) * scale - h, w = round(img.shape[0] * scale), round(img.shape[1] * scale) - left = max(round(center[0] - padding[0]), 0) // 8 * 8 - right = min(round(center[0] + padding[1]), w) // 8 * 8 - top = max(round(center[1] - padding[2]), 0) // 8 * 8 - bottom = min(round(center[1] + padding[3]), h) // 8 * 8 - return h,w,top,bottom,left,right,scale - -def tensor2cv2(img): - tmp = ((img.cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8) - return cv2.cvtColor(tmp, cv2.COLOR_RGB2BGR) - -def noise_regularize(noises): - loss = 0 - - for noise in noises: - size = noise.shape[2] - - while True: - loss = ( - loss - + (noise * torch.roll(noise, shifts=1, dims=3)).mean().pow(2) - + (noise * torch.roll(noise, shifts=1, dims=2)).mean().pow(2) - ) - - if size <= 8: - break - - #noise = noise.reshape([-1, 1, size // 2, 2, size // 2, 2]) - #noise = noise.mean([3, 5]) - noise = F.interpolate(noise, scale_factor=0.5, mode='bilinear') - size //= 2 - - return loss - - -def noise_normalize_(noises): - for noise in noises: - mean = noise.mean() - std = noise.std() - - noise.data.add_(-mean).div_(std) - - -def get_lr(t, initial_lr, rampdown=0.25, rampup=0.05): - lr_ramp = min(1, (1 - t) / rampdown) - lr_ramp = 0.5 - 0.5 * math.cos(lr_ramp * math.pi) - lr_ramp = lr_ramp * min(1, t / rampup) - - return initial_lr * lr_ramp - - -def latent_noise(latent, strength): - noise = torch.randn_like(latent) * strength - - return latent + noise - - -def make_image(tensor): - return ( - tensor.detach() - .clamp_(min=-1, max=1) - .add(1) - .div_(2) - .mul(255) - .type(torch.uint8) - .permute(0, 2, 3, 1) - .to("cpu") - .numpy() - ) - - -# from pix2pixeHD -# Converts a one-hot tensor into a colorful label map -def tensor2label(label_tensor, n_label, imtype=np.uint8): - if n_label == 0: - return tensor2im(label_tensor, imtype) - label_tensor = label_tensor.cpu().float() - if label_tensor.size()[0] > 1: - label_tensor = label_tensor.max(0, keepdim=True)[1] - label_tensor = Colorize(n_label)(label_tensor) - label_numpy = np.transpose(label_tensor.numpy(), (1, 2, 0)) - return label_numpy.astype(imtype) - -def uint82bin(n, count=8): - """returns the binary of integer n, count refers to amount of bits""" - return ''.join([str((n >> y) & 1) for y in range(count-1, -1, -1)]) - -def labelcolormap(N): - if N == 35: # cityscape - cmap = np.array([( 0, 0, 0), ( 0, 0, 0), ( 0, 0, 0), ( 0, 0, 0), ( 0, 0, 0), (111, 74, 0), ( 81, 0, 81), - (128, 64,128), (244, 35,232), (250,170,160), (230,150,140), ( 70, 70, 70), (102,102,156), (190,153,153), - (180,165,180), (150,100,100), (150,120, 90), (153,153,153), (153,153,153), (250,170, 30), (220,220, 0), - (107,142, 35), (152,251,152), ( 70,130,180), (220, 20, 60), (255, 0, 0), ( 0, 0,142), ( 0, 0, 70), - ( 0, 60,100), ( 0, 0, 90), ( 0, 0,110), ( 0, 80,100), ( 0, 0,230), (119, 11, 32), ( 0, 0,142)], - dtype=np.uint8) - else: - cmap = np.zeros((N, 3), dtype=np.uint8) - for i in range(N): - r, g, b = 0, 0, 0 - id = i - for j in range(7): - str_id = uint82bin(id) - r = r ^ (np.uint8(str_id[-1]) << (7-j)) - g = g ^ (np.uint8(str_id[-2]) << (7-j)) - b = b ^ (np.uint8(str_id[-3]) << (7-j)) - id = id >> 3 - cmap[i, 0] = r - cmap[i, 1] = g - cmap[i, 
2] = b - return cmap - -class Colorize(object): - def __init__(self, n=35): - self.cmap = labelcolormap(n) - self.cmap = torch.from_numpy(self.cmap[:n]) - - def __call__(self, gray_image): - size = gray_image.size() - color_image = torch.ByteTensor(3, size[1], size[2]).fill_(0) - - for label in range(0, len(self.cmap)): - mask = (label == gray_image[0]).cpu() - color_image[0][mask] = self.cmap[label][0] - color_image[1][mask] = self.cmap[label][1] - color_image[2][mask] = self.cmap[label][2] - - return color_image \ No newline at end of file diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/write_tests.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/write_tests.py deleted file mode 100644 index 35a086536c9d05d520a84b15ead49f775eacdcc9..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/write_tests.py +++ /dev/null @@ -1,31 +0,0 @@ -"""A module that contains a function to generate test cases for the submitted code.""" -from __future__ import annotations - -import json - -from autogpt.llm_utils import call_ai_function - - -def write_tests(code: str, focus: list[str]) -> str: - """ - A function that takes in code and focus topics and returns a response from create - chat completion api call. - - Parameters: - focus (list): A list of suggestions around what needs to be improved. - code (str): Code for test cases to be generated against. - Returns: - A result string from create chat completion. Test cases for the submitted code - in response. - """ - - function_string = ( - "def create_test_cases(code: str, focus: Optional[str] = None) -> str:" - ) - args = [code, json.dumps(focus)] - description_string = ( - "Generates test cases for the existing code, focusing on" - " specific areas if required." - ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/datasets/cityscapes.py b/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/datasets/cityscapes.py deleted file mode 100644 index f21867c63e1835f6fceb61f066e802fd8fd2a735..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/datasets/cityscapes.py +++ /dev/null @@ -1,54 +0,0 @@ -# dataset settings -dataset_type = 'CityscapesDataset' -data_root = 'data/cityscapes/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (512, 1024) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 1024), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/train', - 
ann_dir='gtFine/train', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/val', - ann_dir='gtFine/val', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/val', - ann_dir='gtFine/val', - pipeline=test_pipeline)) diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/utils/misc.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/utils/misc.py deleted file mode 100644 index 2c58d0d7fee9fe3d4519270ad8c1e998d0d8a18c..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/utils/misc.py +++ /dev/null @@ -1,377 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections.abc -import functools -import itertools -import subprocess -import warnings -from collections import abc -from importlib import import_module -from inspect import getfullargspec -from itertools import repeat - - -# From PyTorch internals -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple - - -def is_str(x): - """Whether the input is an string instance. - - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def import_modules_from_strings(imports, allow_failed_imports=False): - """Import modules from the given list of strings. - - Args: - imports (list | str | None): The given module names to be imported. - allow_failed_imports (bool): If True, the failed imports will return - None. Otherwise, an ImportError is raise. Default: False. - - Returns: - list[module] | module | None: The imported modules. - - Examples: - >>> osp, sys = import_modules_from_strings( - ... ['os.path', 'sys']) - >>> import os.path as osp_ - >>> import sys as sys_ - >>> assert osp == osp_ - >>> assert sys == sys_ - """ - if not imports: - return - single_import = False - if isinstance(imports, str): - single_import = True - imports = [imports] - if not isinstance(imports, list): - raise TypeError( - f'custom_imports must be a list but got type {type(imports)}') - imported = [] - for imp in imports: - if not isinstance(imp, str): - raise TypeError( - f'{imp} is of type {type(imp)} and cannot be imported.') - try: - imported_tmp = import_module(imp) - except ImportError: - if allow_failed_imports: - warnings.warn(f'{imp} failed to import and is ignored.', - UserWarning) - imported_tmp = None - else: - raise ImportError - imported.append(imported_tmp) - if single_import: - imported = imported[0] - return imported - - -def iter_cast(inputs, dst_type, return_type=None): - """Cast elements of an iterable object into some type. - - Args: - inputs (Iterable): The input object. - dst_type (type): Destination type. - return_type (type, optional): If specified, the output object will be - converted to this type, otherwise an iterator. - - Returns: - iterator or specified type: The converted object. 
- """ - if not isinstance(inputs, abc.Iterable): - raise TypeError('inputs must be an iterable object') - if not isinstance(dst_type, type): - raise TypeError('"dst_type" must be a valid type') - - out_iterable = map(dst_type, inputs) - - if return_type is None: - return out_iterable - else: - return return_type(out_iterable) - - -def list_cast(inputs, dst_type): - """Cast elements of an iterable object into a list of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=list) - - -def tuple_cast(inputs, dst_type): - """Cast elements of an iterable object into a tuple of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=tuple) - - -def is_seq_of(seq, expected_type, seq_type=None): - """Check whether it is a sequence of some type. - - Args: - seq (Sequence): The sequence to be checked. - expected_type (type): Expected type of sequence items. - seq_type (type, optional): Expected sequence type. - - Returns: - bool: Whether the sequence is valid. - """ - if seq_type is None: - exp_seq_type = abc.Sequence - else: - assert isinstance(seq_type, type) - exp_seq_type = seq_type - if not isinstance(seq, exp_seq_type): - return False - for item in seq: - if not isinstance(item, expected_type): - return False - return True - - -def is_list_of(seq, expected_type): - """Check whether it is a list of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=list) - - -def is_tuple_of(seq, expected_type): - """Check whether it is a tuple of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=tuple) - - -def slice_list(in_list, lens): - """Slice a list into several sub lists by a list of given length. - - Args: - in_list (list): The list to be sliced. - lens(int or list): The expected length of each out list. - - Returns: - list: A list of sliced list. - """ - if isinstance(lens, int): - assert len(in_list) % lens == 0 - lens = [lens] * int(len(in_list) / lens) - if not isinstance(lens, list): - raise TypeError('"indices" must be an integer or a list of integers') - elif sum(lens) != len(in_list): - raise ValueError('sum of lens and list length does not ' - f'match: {sum(lens)} != {len(in_list)}') - out_list = [] - idx = 0 - for i in range(len(lens)): - out_list.append(in_list[idx:idx + lens[i]]) - idx += lens[i] - return out_list - - -def concat_list(in_list): - """Concatenate a list of list into a single list. - - Args: - in_list (list): The list of list to be merged. - - Returns: - list: The concatenated flat list. - """ - return list(itertools.chain(*in_list)) - - -def check_prerequisites( - prerequisites, - checker, - msg_tmpl='Prerequisites "{}" are required in method "{}" but not ' - 'found, please install them first.'): # yapf: disable - """A decorator factory to check if prerequisites are satisfied. - - Args: - prerequisites (str of list[str]): Prerequisites to be checked. - checker (callable): The checker method that returns True if a - prerequisite is meet, False otherwise. - msg_tmpl (str): The message template with two variables. - - Returns: - decorator: A specific decorator. 
- """ - - def wrap(func): - - @functools.wraps(func) - def wrapped_func(*args, **kwargs): - requirements = [prerequisites] if isinstance( - prerequisites, str) else prerequisites - missing = [] - for item in requirements: - if not checker(item): - missing.append(item) - if missing: - print(msg_tmpl.format(', '.join(missing), func.__name__)) - raise RuntimeError('Prerequisites not meet.') - else: - return func(*args, **kwargs) - - return wrapped_func - - return wrap - - -def _check_py_package(package): - try: - import_module(package) - except ImportError: - return False - else: - return True - - -def _check_executable(cmd): - if subprocess.call(f'which {cmd}', shell=True) != 0: - return False - else: - return True - - -def requires_package(prerequisites): - """A decorator to check if some python packages are installed. - - Example: - >>> @requires_package('numpy') - >>> func(arg1, args): - >>> return numpy.zeros(1) - array([0.]) - >>> @requires_package(['numpy', 'non_package']) - >>> func(arg1, args): - >>> return numpy.zeros(1) - ImportError - """ - return check_prerequisites(prerequisites, checker=_check_py_package) - - -def requires_executable(prerequisites): - """A decorator to check if some executable files are installed. - - Example: - >>> @requires_executable('ffmpeg') - >>> func(arg1, args): - >>> print(1) - 1 - """ - return check_prerequisites(prerequisites, checker=_check_executable) - - -def deprecated_api_warning(name_dict, cls_name=None): - """A decorator to check if some arguments are deprecate and try to replace - deprecate src_arg_name to dst_arg_name. - - Args: - name_dict(dict): - key (str): Deprecate argument names. - val (str): Expected argument names. - - Returns: - func: New function. - """ - - def api_warning_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get name of the function - func_name = old_func.__name__ - if cls_name is not None: - func_name = f'{cls_name}.{func_name}' - if args: - arg_names = args_info.args[:len(args)] - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in arg_names: - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - arg_names[arg_names.index(src_arg_name)] = dst_arg_name - if kwargs: - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in kwargs: - - assert dst_arg_name not in kwargs, ( - f'The expected behavior is to replace ' - f'the deprecated key `{src_arg_name}` to ' - f'new key `{dst_arg_name}`, but got them ' - f'in the arguments at the same time, which ' - f'is confusing. `{src_arg_name} will be ' - f'deprecated in the future, please ' - f'use `{dst_arg_name}` instead.') - - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - kwargs[dst_arg_name] = kwargs.pop(src_arg_name) - - # apply converted arguments to the decorated method - output = old_func(*args, **kwargs) - return output - - return new_func - - return api_warning_wrapper - - -def is_method_overridden(method, base_class, derived_class): - """Check if a method of base class is overridden in derived class. - - Args: - method (str): the method name to check. - base_class (type): the class of the base class. - derived_class (type | Any): the class or instance of the derived class. 
- """ - assert isinstance(base_class, type), \ - "base_class doesn't accept instance, Please pass class instead." - - if not isinstance(derived_class, type): - derived_class = derived_class.__class__ - - base_method = getattr(base_class, method) - derived_method = getattr(derived_class, method) - return derived_method != base_method - - -def has_method(obj: object, method: str) -> bool: - """Check whether the object has a method. - - Args: - method (str): The method name to check. - obj (object): The object to check. - - Returns: - bool: True if the object has the method else False. - """ - return hasattr(obj, method) and callable(getattr(obj, method)) diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/data/test_audio_utils.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/data/test_audio_utils.py deleted file mode 100644 index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/tests/data/test_audio_utils.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import julius -import torch -import pytest - -from audiocraft.data.audio_utils import ( - _clip_wav, - convert_audio_channels, - convert_audio, - normalize_audio -) -from ..common_utils import get_batch_white_noise - - -class TestConvertAudioChannels: - - def test_convert_audio_channels_downmix(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=2) - assert list(mixed.shape) == [b, 2, t] - - def test_convert_audio_channels_nochange(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=c) - assert list(mixed.shape) == list(audio.shape) - - def test_convert_audio_channels_upmix(self): - b, c, t = 2, 1, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=3) - assert list(mixed.shape) == [b, 3, t] - - def test_convert_audio_channels_upmix_error(self): - b, c, t = 2, 2, 100 - audio = get_batch_white_noise(b, c, t) - with pytest.raises(ValueError): - convert_audio_channels(audio, channels=3) - - -class TestConvertAudio: - - def test_convert_audio_channels_downmix(self): - b, c, dur = 2, 3, 4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2) - assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]] - - def test_convert_audio_channels_upmix(self): - b, c, dur = 2, 1, 4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3) - assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]] - - def test_convert_audio_upsample(self): - b, c, dur = 2, 1, 4. - sr = 2 - new_sr = 3 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - def test_convert_audio_resample(self): - b, c, dur = 2, 1, 4. 
- sr = 3 - new_sr = 2 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - -class TestNormalizeAudio: - - def test_clip_wav(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - _clip_wav(audio) - assert audio.abs().max() <= 1 - - def test_normalize_audio_clip(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='clip') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_rms(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='rms') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_peak(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='peak') - assert norm_audio.abs().max() <= 1 diff --git a/spaces/RMXK/RVC_HFF/infer/lib/audio.py b/spaces/RMXK/RVC_HFF/infer/lib/audio.py deleted file mode 100644 index 9ad4ff74218957cf18782fa71add40a734b47e78..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/infer/lib/audio.py +++ /dev/null @@ -1,197 +0,0 @@ -import librosa -import numpy as np -import av -from io import BytesIO -import ffmpeg -import os -import sys - -import random -from infer.lib.csvutil import CSVutil -#import csv - -platform_stft_mapping = { - 'linux': 'stftpitchshift', - 'darwin': 'stftpitchshift', - 'win32': 'stftpitchshift.exe', -} - -stft = platform_stft_mapping.get(sys.platform) - -def wav2(i, o, format): - inp = av.open(i, 'rb') - if format == "m4a": format = "mp4" - out = av.open(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - if format == "mp4": format = "aac" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - for p in ostream.encode(None): out.mux(p) - - out.close() - inp.close() - -def audio2(i, o, format, sr): - inp = av.open(i, 'rb') - out = av.open(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - if format == "f32le": format = "pcm_f32le" - - ostream = out.add_stream(format, channels=1) - ostream.sample_rate = sr - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - out.close() - inp.close() - -def load_audion(file, sr): - try: - file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - with open(file, "rb") as f: - with BytesIO() as out: - audio2(f, out, "f32le", sr) - return np.frombuffer(out.getvalue(), np.float32).flatten() - - except AttributeError: - audio = file[1] / 32768.0 - if len(audio.shape) == 2: - audio = np.mean(audio, -1) - return librosa.resample(audio, orig_sr=file[0], target_sr=16000) - - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - - - -def load_audio(file, sr, DoFormant=False, Quefrency=1.0, Timbre=1.0): - converted = False - DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting") - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. 
- file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - file_formanted = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - - # print(f"dofor={bool(DoFormant)} timbr={Timbre} quef={Quefrency}\n") - - if ( - lambda DoFormant: True - if DoFormant.lower() == "true" - else (False if DoFormant.lower() == "false" else DoFormant) - )(DoFormant): - numerator = round(random.uniform(1, 4), 4) - # os.system(f"stftpitchshift -i {file} -q {Quefrency} -t {Timbre} -o {file_formanted}") - # print('stftpitchshift -i "%s" -p 1.0 --rms -w 128 -v 8 -q %s -t %s -o "%s"' % (file, Quefrency, Timbre, file_formanted)) - - if not file.endswith(".wav"): - if not os.path.isfile(f"{file_formanted}.wav"): - converted = True - # print(f"\nfile = {file}\n") - # print(f"\nfile_formanted = {file_formanted}\n") - converting = ( - ffmpeg.input(file_formanted, threads=0) - .output(f"{file_formanted}.wav") - .run( - cmd=["ffmpeg", "-nostdin"], - capture_stdout=True, - capture_stderr=True, - ) - ) - else: - pass - - file_formanted = ( - f"{file_formanted}.wav" - if not file_formanted.endswith(".wav") - else file_formanted - ) - - print(f" · Formanting {file_formanted}...\n") - - os.system( - '%s -i "%s" -q "%s" -t "%s" -o "%sFORMANTED_%s.wav"' - % ( - stft, - file_formanted, - Quefrency, - Timbre, - file_formanted, - str(numerator), - ) - ) - - print(f" · Formanted {file_formanted}!\n") - - # filepraat = (os.path.abspath(os.getcwd()) + '\\' + file).replace('/','\\') - # file_formantedpraat = ('"' + os.path.abspath(os.getcwd()) + '/' + 'formanted'.join(file_formanted) + '"').replace('/','\\') - # print("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - - out, _ = ( - ffmpeg.input( - "%sFORMANTED_%s.wav" % (file_formanted, str(numerator)), threads=0 - ) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - - try: - os.remove("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - except Exception: - pass - print("couldn't remove formanted type of file") - - else: - out, _ = ( - ffmpeg.input(file, threads=0) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - if converted: - try: - os.remove(file_formanted) - except Exception: - pass - print("couldn't remove converted type of file") - converted = False - - return np.frombuffer(out, np.float32).flatten() - - -def check_audio_duration(file): - try: - file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - - probe = ffmpeg.probe(file) - - duration = float(probe['streams'][0]['duration']) - - if duration < 0.76: - print( - f"\n------------\n" - f"Audio file, {file.split('/')[-1]}, under ~0.76s detected - file is too short. Target at least 1-2s for best results." 
- f"\n------------\n\n" - ) - return False - - return True - except Exception as e: - raise RuntimeError(f"Failed to check audio duration: {e}") \ No newline at end of file diff --git a/spaces/Ricdeq/optimaldesign/chnges/optimal.py b/spaces/Ricdeq/optimaldesign/chnges/optimal.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/samplers/ohem_sampler.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/samplers/ohem_sampler.py deleted file mode 100644 index 8b99f60ef0176f1b7a56665fb0f59272f65b84cd..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/samplers/ohem_sampler.py +++ /dev/null @@ -1,107 +0,0 @@ -import torch - -from ..builder import BBOX_SAMPLERS -from ..transforms import bbox2roi -from .base_sampler import BaseSampler - - -@BBOX_SAMPLERS.register_module() -class OHEMSampler(BaseSampler): - r"""Online Hard Example Mining Sampler described in `Training Region-based - Object Detectors with Online Hard Example Mining - `_. - """ - - def __init__(self, - num, - pos_fraction, - context, - neg_pos_ub=-1, - add_gt_as_proposals=True, - **kwargs): - super(OHEMSampler, self).__init__(num, pos_fraction, neg_pos_ub, - add_gt_as_proposals) - self.context = context - if not hasattr(self.context, 'num_stages'): - self.bbox_head = self.context.bbox_head - else: - self.bbox_head = self.context.bbox_head[self.context.current_stage] - - def hard_mining(self, inds, num_expected, bboxes, labels, feats): - with torch.no_grad(): - rois = bbox2roi([bboxes]) - if not hasattr(self.context, 'num_stages'): - bbox_results = self.context._bbox_forward(feats, rois) - else: - bbox_results = self.context._bbox_forward( - self.context.current_stage, feats, rois) - cls_score = bbox_results['cls_score'] - loss = self.bbox_head.loss( - cls_score=cls_score, - bbox_pred=None, - rois=rois, - labels=labels, - label_weights=cls_score.new_ones(cls_score.size(0)), - bbox_targets=None, - bbox_weights=None, - reduction_override='none')['loss_cls'] - _, topk_loss_inds = loss.topk(num_expected) - return inds[topk_loss_inds] - - def _sample_pos(self, - assign_result, - num_expected, - bboxes=None, - feats=None, - **kwargs): - """Sample positive boxes. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - num_expected (int): Number of expected positive samples - bboxes (torch.Tensor, optional): Boxes. Defaults to None. - feats (list[torch.Tensor], optional): Multi-level features. - Defaults to None. - - Returns: - torch.Tensor: Indices of positive samples - """ - # Sample some hard positive samples - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.hard_mining(pos_inds, num_expected, bboxes[pos_inds], - assign_result.labels[pos_inds], feats) - - def _sample_neg(self, - assign_result, - num_expected, - bboxes=None, - feats=None, - **kwargs): - """Sample negative boxes. - - Args: - assign_result (:obj:`AssignResult`): Assigned results - num_expected (int): Number of expected negative samples - bboxes (torch.Tensor, optional): Boxes. Defaults to None. - feats (list[torch.Tensor], optional): Multi-level features. - Defaults to None. 
- - Returns: - torch.Tensor: Indices of negative samples - """ - # Sample some hard negative samples - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - neg_labels = assign_result.labels.new_empty( - neg_inds.size(0)).fill_(self.bbox_head.num_classes) - return self.hard_mining(neg_inds, num_expected, bboxes[neg_inds], - neg_labels, feats) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/compose.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/compose.py deleted file mode 100644 index ca48f1c935755c486edc2744e1713e2b5ba3cdc8..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/pipelines/compose.py +++ /dev/null @@ -1,51 +0,0 @@ -import collections - -from mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose(object): - """Compose multiple transforms sequentially. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. - """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - format_string += '\n' - format_string += f' {t}' - format_string += '\n)' - return format_string diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/nasfcos_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/nasfcos_head.py deleted file mode 100644 index 994ce0455e1982110f237b3958a81394c319bb47..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/nasfcos_head.py +++ /dev/null @@ -1,75 +0,0 @@ -import copy - -import torch.nn as nn -from mmcv.cnn import (ConvModule, Scale, bias_init_with_prob, - caffe2_xavier_init, normal_init) - -from mmdet.models.dense_heads.fcos_head import FCOSHead -from ..builder import HEADS - - -@HEADS.register_module() -class NASFCOSHead(FCOSHead): - """Anchor-free head used in `NASFCOS `_. - - It is quite similar with FCOS head, except for the searched structure of - classification branch and bbox regression branch, where a structure of - "dconv3x3, conv3x3, dconv3x3, conv1x1" is utilized instead. 
- """ - - def _init_layers(self): - """Initialize layers of the head.""" - dconv3x3_config = dict( - type='DCNv2', - kernel_size=3, - use_bias=True, - deform_groups=2, - padding=1) - conv3x3_config = dict(type='Conv', kernel_size=3, padding=1) - conv1x1_config = dict(type='Conv', kernel_size=1) - - self.arch_config = [ - dconv3x3_config, conv3x3_config, dconv3x3_config, conv1x1_config - ] - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i, op_ in enumerate(self.arch_config): - op = copy.deepcopy(op_) - chn = self.in_channels if i == 0 else self.feat_channels - assert isinstance(op, dict) - use_bias = op.pop('use_bias', False) - padding = op.pop('padding', 0) - kernel_size = op.pop('kernel_size') - module = ConvModule( - chn, - self.feat_channels, - kernel_size, - stride=1, - padding=padding, - norm_cfg=self.norm_cfg, - bias=use_bias, - conv_cfg=op) - - self.cls_convs.append(copy.deepcopy(module)) - self.reg_convs.append(copy.deepcopy(module)) - - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - self.conv_centerness = nn.Conv2d(self.feat_channels, 1, 3, padding=1) - - self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides]) - - def init_weights(self): - """Initialize weights of the head.""" - # retinanet_bias_init - bias_cls = bias_init_with_prob(0.01) - normal_init(self.conv_reg, std=0.01) - normal_init(self.conv_centerness, std=0.01) - normal_init(self.conv_cls, std=0.01, bias=bias_cls) - - for branch in [self.cls_convs, self.reg_convs]: - for module in branch.modules(): - if isinstance(module, ConvModule) \ - and isinstance(module.conv, nn.Conv2d): - caffe2_xavier_init(module.conv) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/registry.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/registry.py deleted file mode 100644 index 39eabc58db4b5954478a2ac1ab91cea5e45ab055..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/bricks/registry.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from annotator.uniformer.mmcv.utils import Registry - -CONV_LAYERS = Registry('conv layer') -NORM_LAYERS = Registry('norm layer') -ACTIVATION_LAYERS = Registry('activation layer') -PADDING_LAYERS = Registry('padding layer') -UPSAMPLE_LAYERS = Registry('upsample layer') -PLUGIN_LAYERS = Registry('plugin layer') - -DROPOUT_LAYERS = Registry('drop out layers') -POSITIONAL_ENCODING = Registry('position encoding') -ATTENTION = Registry('attention') -FEEDFORWARD_NETWORK = Registry('feed-forward Network') -TRANSFORMER_LAYER = Registry('transformerLayer') -TRANSFORMER_LAYER_SEQUENCE = Registry('transformer-layers sequence') diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/image/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/image/__init__.py deleted file mode 100644 index d0051d609d3de4e7562e3fe638335c66617c4d91..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/image/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .colorspace import (bgr2gray, bgr2hls, bgr2hsv, bgr2rgb, bgr2ycbcr, - gray2bgr, gray2rgb, hls2bgr, hsv2bgr, imconvert, - rgb2bgr, rgb2gray, rgb2ycbcr, ycbcr2bgr, ycbcr2rgb) -from .geometric import (cutout, imcrop, imflip, imflip_, impad, - impad_to_multiple, imrescale, imresize, imresize_like, - imresize_to_multiple, imrotate, imshear, imtranslate, - rescale_size) -from .io import imfrombytes, imread, imwrite, supported_backends, use_backend -from .misc import tensor2imgs -from .photometric import (adjust_brightness, adjust_color, adjust_contrast, - adjust_lighting, adjust_sharpness, auto_contrast, - clahe, imdenormalize, imequalize, iminvert, - imnormalize, imnormalize_, lut_transform, posterize, - solarize) - -__all__ = [ - 'bgr2gray', 'bgr2hls', 'bgr2hsv', 'bgr2rgb', 'gray2bgr', 'gray2rgb', - 'hls2bgr', 'hsv2bgr', 'imconvert', 'rgb2bgr', 'rgb2gray', 'imrescale', - 'imresize', 'imresize_like', 'imresize_to_multiple', 'rescale_size', - 'imcrop', 'imflip', 'imflip_', 'impad', 'impad_to_multiple', 'imrotate', - 'imfrombytes', 'imread', 'imwrite', 'supported_backends', 'use_backend', - 'imdenormalize', 'imnormalize', 'imnormalize_', 'iminvert', 'posterize', - 'solarize', 'rgb2ycbcr', 'bgr2ycbcr', 'ycbcr2rgb', 'ycbcr2bgr', - 'tensor2imgs', 'imshear', 'imtranslate', 'adjust_color', 'imequalize', - 'adjust_brightness', 'adjust_contrast', 'lut_transform', 'clahe', - 'adjust_sharpness', 'auto_contrast', 'cutout', 'adjust_lighting' -] diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/models/__init__.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/models/__init__.py deleted file mode 100644 index caeb363ed8ade72ac2bd3214fcbba62313efc262..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/Text2Human/Text2Human/models/__init__.py +++ /dev/null @@ -1,42 +0,0 @@ -import glob -import importlib -import logging -import os.path as osp - -# automatically scan and import model modules -# scan all the files under the 'models' folder and collect files ending with -# '_model.py' -model_folder = osp.dirname(osp.abspath(__file__)) -model_filenames = [ - osp.splitext(osp.basename(v))[0] - for v in glob.glob(f'{model_folder}/*_model.py') -] -# import all the model modules -_model_modules = [ - importlib.import_module(f'models.{file_name}') - for file_name in model_filenames -] - - -def create_model(opt): - """Create model. - - Args: - opt (dict): Configuration. It constains: - model_type (str): Model type. 
- """ - model_type = opt['model_type'] - - # dynamically instantiation - for module in _model_modules: - model_cls = getattr(module, model_type, None) - if model_cls is not None: - break - if model_cls is None: - raise ValueError(f'Model {model_type} is not found.') - - model = model_cls(opt) - - logger = logging.getLogger('base') - logger.info(f'Model [{model.__class__.__name__}] is created.') - return model diff --git a/spaces/STEM-academie/Kennismaking_AI_Foto_Herkennen/app.py b/spaces/STEM-academie/Kennismaking_AI_Foto_Herkennen/app.py deleted file mode 100644 index 6df801556d79414df68d33913ecaf3b128d87e2a..0000000000000000000000000000000000000000 --- a/spaces/STEM-academie/Kennismaking_AI_Foto_Herkennen/app.py +++ /dev/null @@ -1,52 +0,0 @@ -from keras.models import load_model -from PIL import Image, ImageOps #Install pillow instead of PIL -import numpy as np - - -def foto_classificatie(foto) : - # Disable scientific notation for clarity - np.set_printoptions(suppress=True) - - # Load the model - model = load_model('keras_model.h5', compile=False) - - # Load the labels - class_names = open('labels.txt', 'r').readlines() - - # Create the array of the right shape to feed into the keras model - # The 'length' or number of images you can put into the array is - # determined by the first position in the shape tuple, in this case 1. - data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32) - - # Replace this with the path to your image - image = Image.open(foto).convert('RGB') - - #resize the image to a 224x224 with the same strategy as in TM2: - #resizing the image to be at least 224x224 and then cropping from the center - size = (224, 224) - image = ImageOps.fit(image, size, Image.Resampling.LANCZOS) - - #turn the image into a numpy array - image_array = np.asarray(image) - - # Normalize the image - normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1 - - # Load the image into the array - data[0] = normalized_image_array - - # run the inference - prediction = model.predict(data) - index = np.argmax(prediction) - class_name = class_names[index] - confidence_score = prediction[0][index] - - return 'Voorspelling : ' + class_name + ' met een zekerheid van ' + str(int(confidence_score*100)) + ' %.' 
- -import gradio as gr - -demo = gr.Interface(fn=foto_classificatie, - inputs=gr.Image(source="webcam", type="pil"), - outputs=gr.Text() - ) -demo.launch() \ No newline at end of file diff --git a/spaces/Saralesjak123/open-reverse-proxy/server.js b/spaces/Saralesjak123/open-reverse-proxy/server.js deleted file mode 100644 index e627351ebb2efc83cd46d755c6112cb305d2297b..0000000000000000000000000000000000000000 --- a/spaces/Saralesjak123/open-reverse-proxy/server.js +++ /dev/null @@ -1,32 +0,0 @@ -const express = require('express'); -const proxy = require('express-http-proxy'); -const app = express(); -const targetUrl = 'https://api.openai.com'; -const openaiKey = process.env.OPENAI_KEY -const port = 7860; -const baseUrl = getExternalUrl(process.env.SPACE_ID); - -app.use('/api', proxy(targetUrl, { - proxyReqOptDecorator: (proxyReqOpts, srcReq) => { - // Modify the request headers if necessary - proxyReqOpts.headers['Authorization'] = 'Bearer '+openaiKey; - return proxyReqOpts; - }, -})); - -app.get("/", (req, res) => { - res.send(`This is your OpenAI Reverse Proxy URL: ${baseUrl}`); -}); - -function getExternalUrl(spaceId) { - try { - const [username, spacename] = spaceId.split("/"); - return `https://${username}-${spacename.replace(/_/g, "-")}.hf.space/api/v1`; - } catch (e) { - return ""; - } -} - -app.listen(port, () => { - console.log(`Reverse proxy server running on ${baseUrl}`); -}); \ No newline at end of file diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/bovine respiratory disease.md b/spaces/SarthakSidhant/Go-Cattle/diseases/bovine respiratory disease.md deleted file mode 100644 index e8cdc51c13f6a5f22dfa00b9314e9488728f99fe..0000000000000000000000000000000000000000 --- a/spaces/SarthakSidhant/Go-Cattle/diseases/bovine respiratory disease.md +++ /dev/null @@ -1,48 +0,0 @@ -## Bovine respiratory disease (BRD) - -**Information:** Bovine respiratory disease (BRD) is a complex respiratory illness of cattle that can be caused by a variety of bacteria, viruses, and parasites. BRD is a major cause of death and illness in cattle, and it can have a significant economic impact on the cattle industry. - -**Symptoms:** - -* Fever -* Coughing -* Nasal discharge -* Difficulty breathing -* Weight loss -* Decreased milk production -* Death - -**Remedies:** - -* Treatment for BRD depends on the underlying cause of the infection. -* Antibiotics may be used to treat bacterial infections. -* Other treatments may include supportive care, such as fluids and oxygen. - -**Causes:** - -* BRD is caused by a variety of bacteria, viruses, and parasites. -* Some of the most common causes of BRD include: - * Pasteurella multocida - * Mannheimia haemolytica - * Mycoplasma bovis - * Bovine viral diarrhea virus (BVDV) - * Respiratory syncytial virus (RSV) - * Parainfluenza-3 virus (PI3) - -**Prevention:** - -* The best way to prevent BRD is to keep cattle healthy and well-vaccinated. -* Vaccinations are available for some of the most common causes of BRD. 
-* Other preventive measures include: - * Maintaining good herd health practices - * Providing adequate ventilation and bedding - * Isolating sick animals - * Practicing biosecurity measures - -**Other preventive measures:** - -* Avoid overcrowding animals -* Provide clean, fresh water -* Monitor animals for signs of illness -* Dispose of dead animals properly -* Vaccinate animals according to the manufacturer's instructions diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/liver fluke infection.md b/spaces/SarthakSidhant/Go-Cattle/diseases/liver fluke infection.md deleted file mode 100644 index 3bb62fa50f8c78d280e4ae011fc0d031ede3c4a3..0000000000000000000000000000000000000000 --- a/spaces/SarthakSidhant/Go-Cattle/diseases/liver fluke infection.md +++ /dev/null @@ -1,37 +0,0 @@ -## Liver fluke infection - -**Information:** Liver fluke infection, also known as **Fascioliasis**, is a parasitic infection that affects cattle. It is caused by a flatworm called **Fasciola hepatica**. - -**Symptoms:** - -* Weight loss -* Poor growth -* Anorexia -* Jaundice -* Abdominal pain -* Diarrhea -* Coughing -* Fever -* Death - -**Remedies:** - -* There is no specific cure for liver fluke infection. -* Treatment is usually supportive and may include: - * Deworming with anthelmintics - * Providing fluids and electrolytes - * Treating other underlying conditions - -**Causes:** - -* Liver fluke infection is caused by a flatworm called **Fasciola hepatica**. -* These parasites live in the liver of cattle and can cause damage to the liver tissue. -* Liver fluke infection is more common in cattle that graze in wet and marshy areas. -* Liver fluke infection can also be spread through contact with infected cattle or their feces. - -**Prevention:** - -* The best way to prevent liver fluke infection is to control the snail population. -* Cattle should be dewormed regularly with anthelmintics. -* Cattle should not be grazed in wet and marshy areas. -* Cattle should not be fed hay that has been cut from wet and marshy areas. diff --git a/spaces/Sonnt/Fracture_Webapp/mLogsFunctions/fx.py b/spaces/Sonnt/Fracture_Webapp/mLogsFunctions/fx.py deleted file mode 100644 index e88f277b7145f684f05bf8ff840fc63f47e6dffa..0000000000000000000000000000000000000000 --- a/spaces/Sonnt/Fracture_Webapp/mLogsFunctions/fx.py +++ /dev/null @@ -1,360 +0,0 @@ -import pandas as pd -import numpy as np - -# import matplotlib.pyplot as plt -import seaborn as sns -import plotly.express as px - -import altair as alt - -import streamlit as st -import streamlit_nested_layout -from streamlit_vega_lite import altair_component - -from mLogsFunctions import * - -#LOADING DATA------------------------------------------------------------------------------------------ -def upload_csv(): - df = None - uploaded_file = st.file_uploader(label='Upload *csv file from your drive! 
Choose a file:', type='csv') - if uploaded_file is not None: - df = pd.read_csv(uploaded_file, na_values=-9999) - st.success("Loading finished!") - st.write('---') - return df - -#PLOTTING------------------------------------------------------------------------------------------ -# Store the initial value of widgets in session state -def selection_info(df, method, option_w, option_x, option_y, option_c): - if "method" not in st.session_state: - st.session_state.method:str = "Single Well" - st.session_state.option_w:str = "15-1-SNN-3P" - st.session_state.option_x:str = "RHOB" - st.session_state.option_y:str = "DTC" - st.session_state.option_c:str = "WELL" - well_names = np.sort(df.WELL.unique()) - st.radio("", - key=method, - options=["All Wells", "Single Well"],) - st.radio( - "WELL", - key=option_w, - options=well_names,) - st.selectbox( - "X Axis", - key=option_x, - options=(df.columns.sort_values().str.upper().drop(["WELL", "DEPTH"])),) - st.selectbox( - "Y Axis", - key=option_y, - options=(df.columns.sort_values().str.upper().drop(["WELL", "DEPTH"])),) - st.selectbox( - "Color Axis", - key=option_c, - options=df.columns.sort_values().str.upper()) - return st.session_state - -#Interactive Charts----------------------------------------------------------------------- -@st.cache_resource -def interval_define(): - return alt.selection_interval() - -@st.cache_resource -def make_selection(df, _interval, option_x, option_y, option_c): - def c_(df, _interval, option_x, option_y, x_log:str="linear", y_log:str="linear"): - return alt.Chart(df, - title="Crossplot "+option_x+" vs "+option_y+"", - ).mark_point().encode( - x = alt.X(option_x.upper(), - axis=alt.Axis(title=option_x), - scale= alt.Scale(zero=False, type=x_log - ) - ), - y = alt.Y(option_y.upper(), - axis=alt.Axis(title=option_y), - scale=alt.Scale(zero=False,type=y_log - ) - ), - color=alt.condition(_interval, option_c, alt.value('lightgray')), - ).properties( - selection=_interval, - height=570, - width=600)#.transform_regression(option_x.upper(), option_y.upper()).mark_line() - - if option_x in ["LLD", "LLS"]: - x_log = "log" - else: - x_log = "linear" - - if option_y in ["LLD", "LLS"]: - y_log = "log" - else: - y_log = "linear" - return c_(df, _interval, option_x, option_y, x_log, y_log) - -#Histogram----------------------------------------------------------------------- -def bar_plot(data, option_x): - def c_(data, option_x, _log): - return alt.Chart(title="Histogram of "+option_x+"", - data=data - ).mark_bar().encode( - x = alt.X(option_x.upper(), - bin=alt.Bin(maxbins=30), - axis=alt.Axis(title=option_x), - scale=alt.Scale(zero=False) - ), - y = alt.Y('count()', - axis=alt.Axis(title='Number of Values'), - scale=alt.Scale(zero=False, type=_log), - ), - color = alt.Color('WELL', legend=None - ) - ).properties( - height=250, - width=250 - ) - if option_x in ["LLD", "LLS"]: - return c_(data, option_x, "symlog") - else: - return c_(data, option_x, "linear") - -#Curve View----------------------------------------------------------------------- -def curve_plot(data,filted_data, x_column): - def c_(data,filted_data, x_column, _log): - color_codes = {"GR":"lime", - "LLD":"red", - "LLS":"dodgerblue", - "NPHI":"blue", - "RHOB":"red", - "DTC":"red", - "DTS":"magenta", - "FRACTURE_ZONE":"lightcoral", - "FRACTURE_ZONE_PRED":"lightgreen" - } - if x_column in color_codes.keys(): - color_ = color_codes[x_column] - else: - color_ = "blue" - return alt.Chart(data - ).mark_line(size=1, - orient='horizontal', - color=color_, - 
point=alt.OverlayMarkDef(color="", size=1) #Show raw points - ).encode( - x=alt.X(x_column.upper(), - scale=alt.Scale(zero=False, type=_log), - axis=alt.Axis(title=x_column.upper(), - titleAnchor='middle', - orient='top', - labelAngle=0, - titleColor=color_, - labelColor=color_, - tickColor=color_, - ) - ), - y=alt.Y('DEPTH', - scale=alt.Scale(zero=False, - reverse=True, - ), - axis=alt.Axis(title=None, - labelColor=color_, - tickColor=color_, - ) - ) - ).properties(height=500, - width=129 - ) - - - if x_column in ["LLD", "LLS"]: - curve = c_(data,filted_data, x_column, "log") - else: - curve = c_(data,filted_data, x_column, "linear") - - if filted_data is not None: - point_plot = alt.Chart(filted_data).mark_circle(size=20, - color='red', - opacity=1 - ).encode( - x=x_column, - y='DEPTH' - ) - return curve + point_plot - else: - return curve -# import altair as alt -# def curve_plot(data, filted_data, x_column): -# def c_(data, filted_data, x_column, _log): -# color_codes = { -# "GR": "lime", -# "LLD": "red", -# "LLS": "dodgerblue", -# "NPHI": "blue", -# "RHOB": "red", -# "DTC": "red", -# "DTS": "magenta", -# "FRACTURE_ZONE": "lightcoral", -# "FRACTURE_ZONE_PRED": "lightgreen" -# } -# if x_column in color_codes.keys(): -# color_ = color_codes[x_column] -# else: -# color_ = "blue" -# return alt.Chart(data).mark_line(size=1, orient='horizontal', color=color_, point=alt.OverlayMarkDef(color="", size=1)).encode( -# y=alt.X(x_column.upper(), -# scale=alt.Scale(zero=False, type=_log), -# axis=alt.Axis(title=x_column.upper(), -# titleAnchor='middle', -# orient='top', -# labelAngle=0, -# titleColor=color_, -# labelColor=color_, -# tickColor=color_, -# ) -# ), -# x=alt.Y('DEPTH', -# scale=alt.Scale(zero=False, reverse=True), -# axis=alt.Axis(title=None, labelColor=color_, tickColor=color_)) -# ).properties( -# height=500, -# width=700 -# ) - -# if x_column in ["LLD", "LLS"]: -# curve = c_(data, filted_data, x_column, "log") -# else: -# curve = c_(data, filted_data, x_column, "linear") - -# if filted_data is not None: -# point_plot = alt.Chart(filted_data).mark_circle(size=20, color='red', opacity=1).encode( -# y=alt.X(x_column, scale=alt.Scale(zero=False)), -# x=alt.Y('DEPTH', scale=alt.Scale(zero=False, reverse=True)) -# ) -# return (curve + point_plot).resolve_scale(y='shared') -# else: -# return curve - - -#MissingBar----------------------------------------------------------------------- -def missing_bar(data, x_title): - return alt.Chart(data).mark_bar().encode( - x=alt.X('Columns', sort='-y', title=x_title), - y='Count missing (%)', - color=alt.condition( - alt.datum['Count missing (%)'] >10, # If count missing is > 10%, returns True, - alt.value('orange'), # which sets the bar orange. - alt.value('steelblue') # And if it's not true it sets the bar steelblue. 
- ) - ).properties( - width=500, - height=250 - ).configure_axis( - grid=False - ) -#BoxPLot----------------------------------------------------------------------- -def missing_box(data, curve): - if curve in ["LLD", "LLS"]: - return alt.Chart(data).mark_boxplot(extent='min-max').encode( - x=alt.X('WELL:O', title=None, - ), - y=alt.Y(f'{curve}:Q', title=curve,scale=alt.Scale(zero=False, type="log") - ), - color='WELL:N' - ).properties( - width=500, - height=300 - ) - else: - return alt.Chart(data).mark_boxplot(extent='min-max').encode( - x=alt.X('WELL:O', title=None - ), - y=alt.Y(f'{curve}:Q', title=curve,scale=alt.Scale(zero=False) - ), - color='WELL:N' - ).properties( - width=500, - height=300 - ) -#Histogram Line----------------------------------------------------------------------- -def hist_line_plot(data, curve): - st.caption(f"Histogram of {curve}") - if curve in ["LLD", "LLS"]: - fig = sns.displot(data, x=curve, hue="WELL", kind="kde", height=5,aspect=1.2, log_scale=True) - fig.set(ylabel="Values") - st.pyplot(fig) - else: - fig = sns.displot(data, x=curve, hue="WELL", kind="kde", height=5,aspect=1.2) - fig.set(ylabel="Values") - st.pyplot(fig) -#CrossPlot----------------------------------------------------------------------- -def crossplot(data, x_curve, y_curve): - fig = sns.jointplot(data=data, x=x_curve, y=y_curve, hue="WELL") - if x_curve in ["LLD", "LLS"]: - fig.ax_joint.set_xscale('log') - fig.ax_marg_x.set_xscale('log') - if y_curve in ["LLD", "LLS"]: - fig.ax_joint.set_yscale('log') - fig.ax_marg_y.set_yscale('log') - st.pyplot(fig) -#PairPlot----------------------------------------------------------------------- -def pairplot(data, rows, cols,color_): - return alt.Chart(data).mark_circle().encode( - alt.X(alt.repeat("column"), type='quantitative', scale=alt.Scale(zero=False)), - alt.Y(alt.repeat("row"), type='quantitative', scale=alt.Scale(zero=False)), - color=color_ - ).properties( - width=100, - height=100 - ).repeat( - row = rows, - column = cols - ).configure_axis( - grid=False - ) -#Heatmap---------------------------------------------------------------- -def heatmap(df): - fig = sns.heatmap(df, annot=True) - st.pyplot(fig) -#Heatmap---------------------------------------------------------------- -def plotly_3d(data, x, y, z, color, size, symbol, log_x, log_y, log_z): - #Data slicer - curvs_ = columns_list(data, no_well=True) - def slicer_(data, sli_key, val_key,): - slicer1_, slicer2_ = st.columns([4, 6]) - # sli=curvs_[0] - with slicer1_: - sli = st.selectbox("Data slicer", key=sli_key, options=curvs_) - with slicer2_: - values = st.slider('Select a range of values', - min_value = float(data[sli].min()), - max_value = float(data[sli].max()), - value=(float(data[sli].min()), float(data[sli].max())), - key=val_key, - ) - data = data.query(f"{sli} >= {values[0]} and {sli} <= {values[1]}") - return data - c1, c2, c3 = st.columns(3) - with c1: - data = slicer_(data, "slicer_1", "sli1_value") - with c2: - data = slicer_(data, "slicer_2", "sli2_value") - with c3: - data = slicer_(data, "slicer_3", "sli3_value") - - fig = px.scatter_3d(data, x=x, - y=y, - z=z, - color=color, - size=size, - size_max=18, - symbol=symbol, - opacity=0.7, - log_x=log_x, - log_y=log_y, - log_z = log_z, - width=1000, height=700, - color_continuous_scale="blugrn") - fig.update_layout(margin=dict(l=0, r=0, b=0, t=0), #tight layout - # paper_bgcolor="LightSteelBlue" - template="none") - st.plotly_chart(fig) \ No newline at end of file diff --git 
a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/JpegPresets.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/JpegPresets.py deleted file mode 100644 index a678e248e9ab2465738ea79f7f5c4bbc260c1919..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/JpegPresets.py +++ /dev/null @@ -1,240 +0,0 @@ -""" -JPEG quality settings equivalent to the Photoshop settings. -Can be used when saving JPEG files. - -The following presets are available by default: -``web_low``, ``web_medium``, ``web_high``, ``web_very_high``, ``web_maximum``, -``low``, ``medium``, ``high``, ``maximum``. -More presets can be added to the :py:data:`presets` dict if needed. - -To apply the preset, specify:: - - quality="preset_name" - -To apply only the quantization table:: - - qtables="preset_name" - -To apply only the subsampling setting:: - - subsampling="preset_name" - -Example:: - - im.save("image_name.jpg", quality="web_high") - -Subsampling ------------ - -Subsampling is the practice of encoding images by implementing less resolution -for chroma information than for luma information. -(ref.: https://en.wikipedia.org/wiki/Chroma_subsampling) - -Possible subsampling values are 0, 1 and 2 that correspond to 4:4:4, 4:2:2 and -4:2:0. - -You can get the subsampling of a JPEG with the -:func:`.JpegImagePlugin.get_sampling` function. - -In JPEG compressed data a JPEG marker is used instead of an EXIF tag. -(ref.: https://exiv2.org/tags.html) - - -Quantization tables -------------------- - -They are values use by the DCT (Discrete cosine transform) to remove -*unnecessary* information from the image (the lossy part of the compression). -(ref.: https://en.wikipedia.org/wiki/Quantization_matrix#Quantization_matrices, -https://en.wikipedia.org/wiki/JPEG#Quantization) - -You can get the quantization tables of a JPEG with:: - - im.quantization - -This will return a dict with a number of lists. You can pass this dict -directly as the qtables argument when saving a JPEG. - -The quantization table format in presets is a list with sublists. These formats -are interchangeable. 
- -Libjpeg ref.: -https://web.archive.org/web/20120328125543/http://www.jpegcameras.com/libjpeg/libjpeg-3.html - -""" - -# fmt: off -presets = { - 'web_low': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [20, 16, 25, 39, 50, 46, 62, 68, - 16, 18, 23, 38, 38, 53, 65, 68, - 25, 23, 31, 38, 53, 65, 68, 68, - 39, 38, 38, 53, 65, 68, 68, 68, - 50, 38, 53, 65, 68, 68, 68, 68, - 46, 53, 65, 68, 68, 68, 68, 68, - 62, 65, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68], - [21, 25, 32, 38, 54, 68, 68, 68, - 25, 28, 24, 38, 54, 68, 68, 68, - 32, 24, 32, 43, 66, 68, 68, 68, - 38, 38, 43, 53, 68, 68, 68, 68, - 54, 54, 66, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68] - ]}, - 'web_medium': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [16, 11, 11, 16, 23, 27, 31, 30, - 11, 12, 12, 15, 20, 23, 23, 30, - 11, 12, 13, 16, 23, 26, 35, 47, - 16, 15, 16, 23, 26, 37, 47, 64, - 23, 20, 23, 26, 39, 51, 64, 64, - 27, 23, 26, 37, 51, 64, 64, 64, - 31, 23, 35, 47, 64, 64, 64, 64, - 30, 30, 47, 64, 64, 64, 64, 64], - [17, 15, 17, 21, 20, 26, 38, 48, - 15, 19, 18, 17, 20, 26, 35, 43, - 17, 18, 20, 22, 26, 30, 46, 53, - 21, 17, 22, 28, 30, 39, 53, 64, - 20, 20, 26, 30, 39, 48, 64, 64, - 26, 26, 30, 39, 48, 63, 64, 64, - 38, 35, 46, 53, 64, 64, 64, 64, - 48, 43, 53, 64, 64, 64, 64, 64] - ]}, - 'web_high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [6, 4, 4, 6, 9, 11, 12, 16, - 4, 5, 5, 6, 8, 10, 12, 12, - 4, 5, 5, 6, 10, 12, 14, 19, - 6, 6, 6, 11, 12, 15, 19, 28, - 9, 8, 10, 12, 16, 20, 27, 31, - 11, 10, 12, 15, 20, 27, 31, 31, - 12, 12, 14, 19, 27, 31, 31, 31, - 16, 12, 19, 28, 31, 31, 31, 31], - [7, 7, 13, 24, 26, 31, 31, 31, - 7, 12, 16, 21, 31, 31, 31, 31, - 13, 16, 17, 31, 31, 31, 31, 31, - 24, 21, 31, 31, 31, 31, 31, 31, - 26, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 31] - ]}, - 'web_very_high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 4, 5, 7, 9, - 2, 2, 2, 4, 5, 7, 9, 12, - 3, 3, 4, 5, 8, 10, 12, 12, - 4, 4, 5, 7, 10, 12, 12, 12, - 5, 5, 7, 9, 12, 12, 12, 12, - 6, 6, 9, 12, 12, 12, 12, 12], - [3, 3, 5, 9, 13, 15, 15, 15, - 3, 4, 6, 11, 14, 12, 12, 12, - 5, 6, 9, 14, 12, 12, 12, 12, - 9, 11, 14, 12, 12, 12, 12, 12, - 13, 14, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'web_maximum': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 2, - 1, 1, 1, 1, 1, 1, 2, 2, - 1, 1, 1, 1, 1, 2, 2, 3, - 1, 1, 1, 1, 2, 2, 3, 3, - 1, 1, 1, 2, 2, 3, 3, 3, - 1, 1, 2, 2, 3, 3, 3, 3], - [1, 1, 1, 2, 2, 3, 3, 3, - 1, 1, 1, 2, 3, 3, 3, 3, - 1, 1, 1, 3, 3, 3, 3, 3, - 2, 2, 3, 3, 3, 3, 3, 3, - 2, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3] - ]}, - 'low': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [18, 14, 14, 21, 30, 35, 34, 17, - 14, 16, 16, 19, 26, 23, 12, 12, - 14, 16, 17, 21, 23, 12, 12, 12, - 21, 19, 21, 23, 12, 12, 12, 12, - 30, 26, 23, 12, 12, 12, 12, 12, - 35, 23, 12, 12, 12, 12, 12, 12, - 34, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12], - [20, 19, 22, 27, 20, 20, 17, 17, - 19, 25, 23, 14, 14, 12, 12, 12, - 22, 23, 14, 14, 12, 12, 12, 12, - 27, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 
12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'medium': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [12, 8, 8, 12, 17, 21, 24, 17, - 8, 9, 9, 11, 15, 19, 12, 12, - 8, 9, 10, 12, 19, 12, 12, 12, - 12, 11, 12, 21, 12, 12, 12, 12, - 17, 15, 19, 12, 12, 12, 12, 12, - 21, 19, 12, 12, 12, 12, 12, 12, - 24, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12], - [13, 11, 13, 16, 20, 20, 17, 17, - 11, 14, 14, 14, 14, 12, 12, 12, - 13, 14, 14, 14, 12, 12, 12, 12, - 16, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [6, 4, 4, 6, 9, 11, 12, 16, - 4, 5, 5, 6, 8, 10, 12, 12, - 4, 5, 5, 6, 10, 12, 12, 12, - 6, 6, 6, 11, 12, 12, 12, 12, - 9, 8, 10, 12, 12, 12, 12, 12, - 11, 10, 12, 12, 12, 12, 12, 12, - 12, 12, 12, 12, 12, 12, 12, 12, - 16, 12, 12, 12, 12, 12, 12, 12], - [7, 7, 13, 24, 20, 20, 17, 17, - 7, 12, 16, 14, 14, 12, 12, 12, - 13, 16, 14, 14, 12, 12, 12, 12, - 24, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'maximum': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 4, 5, 7, 9, - 2, 2, 2, 4, 5, 7, 9, 12, - 3, 3, 4, 5, 8, 10, 12, 12, - 4, 4, 5, 7, 10, 12, 12, 12, - 5, 5, 7, 9, 12, 12, 12, 12, - 6, 6, 9, 12, 12, 12, 12, 12], - [3, 3, 5, 9, 13, 15, 15, 15, - 3, 4, 6, 10, 14, 12, 12, 12, - 5, 6, 9, 14, 12, 12, 12, 12, - 9, 10, 14, 12, 12, 12, 12, 12, - 13, 14, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12] - ]}, -} -# fmt: on diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_log.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_log.py deleted file mode 100644 index bc6e3b5a8a280347d606e91374517fef223fa441..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_log.py +++ /dev/null @@ -1,208 +0,0 @@ -import datetime -import functools -import logging -import os -import re -from collections import namedtuple -from typing import Any, Callable, Dict, Iterable, List, Tuple # noqa - -from .abc import AbstractAccessLogger -from .web_request import BaseRequest -from .web_response import StreamResponse - -KeyMethod = namedtuple("KeyMethod", "key method") - - -class AccessLogger(AbstractAccessLogger): - """Helper object to log access. 
- - Usage: - log = logging.getLogger("spam") - log_format = "%a %{User-Agent}i" - access_logger = AccessLogger(log, log_format) - access_logger.log(request, response, time) - - Format: - %% The percent sign - %a Remote IP-address (IP-address of proxy if using reverse proxy) - %t Time when the request was started to process - %P The process ID of the child that serviced the request - %r First line of request - %s Response status code - %b Size of response in bytes, including HTTP headers - %T Time taken to serve the request, in seconds - %Tf Time taken to serve the request, in seconds with floating fraction - in .06f format - %D Time taken to serve the request, in microseconds - %{FOO}i request.headers['FOO'] - %{FOO}o response.headers['FOO'] - %{FOO}e os.environ['FOO'] - - """ - - LOG_FORMAT_MAP = { - "a": "remote_address", - "t": "request_start_time", - "P": "process_id", - "r": "first_request_line", - "s": "response_status", - "b": "response_size", - "T": "request_time", - "Tf": "request_time_frac", - "D": "request_time_micro", - "i": "request_header", - "o": "response_header", - } - - LOG_FORMAT = '%a %t "%r" %s %b "%{Referer}i" "%{User-Agent}i"' - FORMAT_RE = re.compile(r"%(\{([A-Za-z0-9\-_]+)\}([ioe])|[atPrsbOD]|Tf?)") - CLEANUP_RE = re.compile(r"(%[^s])") - _FORMAT_CACHE: Dict[str, Tuple[str, List[KeyMethod]]] = {} - - def __init__(self, logger: logging.Logger, log_format: str = LOG_FORMAT) -> None: - """Initialise the logger. - - logger is a logger object to be used for logging. - log_format is a string with apache compatible log format description. - - """ - super().__init__(logger, log_format=log_format) - - _compiled_format = AccessLogger._FORMAT_CACHE.get(log_format) - if not _compiled_format: - _compiled_format = self.compile_format(log_format) - AccessLogger._FORMAT_CACHE[log_format] = _compiled_format - - self._log_format, self._methods = _compiled_format - - def compile_format(self, log_format: str) -> Tuple[str, List[KeyMethod]]: - """Translate log_format into form usable by modulo formatting - - All known atoms will be replaced with %s - Also methods for formatting of those atoms will be added to - _methods in appropriate order - - For example we have log_format = "%a %t" - This format will be translated to "%s %s" - Also contents of _methods will be - [self._format_a, self._format_t] - These method will be called and results will be passed - to translated string format. 
- - Each _format_* method receive 'args' which is list of arguments - given to self.log - - Exceptions are _format_e, _format_i and _format_o methods which - also receive key name (by functools.partial) - - """ - # list of (key, method) tuples, we don't use an OrderedDict as users - # can repeat the same key more than once - methods = list() - - for atom in self.FORMAT_RE.findall(log_format): - if atom[1] == "": - format_key1 = self.LOG_FORMAT_MAP[atom[0]] - m = getattr(AccessLogger, "_format_%s" % atom[0]) - key_method = KeyMethod(format_key1, m) - else: - format_key2 = (self.LOG_FORMAT_MAP[atom[2]], atom[1]) - m = getattr(AccessLogger, "_format_%s" % atom[2]) - key_method = KeyMethod(format_key2, functools.partial(m, atom[1])) - - methods.append(key_method) - - log_format = self.FORMAT_RE.sub(r"%s", log_format) - log_format = self.CLEANUP_RE.sub(r"%\1", log_format) - return log_format, methods - - @staticmethod - def _format_i( - key: str, request: BaseRequest, response: StreamResponse, time: float - ) -> str: - if request is None: - return "(no headers)" - - # suboptimal, make istr(key) once - return request.headers.get(key, "-") - - @staticmethod - def _format_o( - key: str, request: BaseRequest, response: StreamResponse, time: float - ) -> str: - # suboptimal, make istr(key) once - return response.headers.get(key, "-") - - @staticmethod - def _format_a(request: BaseRequest, response: StreamResponse, time: float) -> str: - if request is None: - return "-" - ip = request.remote - return ip if ip is not None else "-" - - @staticmethod - def _format_t(request: BaseRequest, response: StreamResponse, time: float) -> str: - now = datetime.datetime.utcnow() - start_time = now - datetime.timedelta(seconds=time) - return start_time.strftime("[%d/%b/%Y:%H:%M:%S +0000]") - - @staticmethod - def _format_P(request: BaseRequest, response: StreamResponse, time: float) -> str: - return "<%s>" % os.getpid() - - @staticmethod - def _format_r(request: BaseRequest, response: StreamResponse, time: float) -> str: - if request is None: - return "-" - return "{} {} HTTP/{}.{}".format( - request.method, - request.path_qs, - request.version.major, - request.version.minor, - ) - - @staticmethod - def _format_s(request: BaseRequest, response: StreamResponse, time: float) -> int: - return response.status - - @staticmethod - def _format_b(request: BaseRequest, response: StreamResponse, time: float) -> int: - return response.body_length - - @staticmethod - def _format_T(request: BaseRequest, response: StreamResponse, time: float) -> str: - return str(round(time)) - - @staticmethod - def _format_Tf(request: BaseRequest, response: StreamResponse, time: float) -> str: - return "%06f" % time - - @staticmethod - def _format_D(request: BaseRequest, response: StreamResponse, time: float) -> str: - return str(round(time * 1000000)) - - def _format_line( - self, request: BaseRequest, response: StreamResponse, time: float - ) -> Iterable[Tuple[str, Callable[[BaseRequest, StreamResponse, float], str]]]: - return [(key, method(request, response, time)) for key, method in self._methods] - - def log(self, request: BaseRequest, response: StreamResponse, time: float) -> None: - try: - fmt_info = self._format_line(request, response, time) - - values = list() - extra = dict() - for key, value in fmt_info: - values.append(value) - - if key.__class__ is str: - extra[key] = value - else: - k1, k2 = key # type: ignore[misc] - dct = extra.get(k1, {}) # type: ignore[var-annotated,has-type] - dct[k2] = value # type: ignore[index,has-type] - 
extra[k1] = dct # type: ignore[has-type,assignment] - - self.logger.info(self._log_format % tuple(values), extra=extra) - except Exception: - self.logger.exception("Error in logging") diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/array/doc_vec/doc_vec.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/array/doc_vec/doc_vec.py deleted file mode 100644 index e27f6882fe9b4db0c2d5204e2ee77d302824ec91..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/array/doc_vec/doc_vec.py +++ /dev/null @@ -1,630 +0,0 @@ -from collections import ChainMap -from typing import ( - TYPE_CHECKING, - Any, - Dict, - Iterable, - List, - MutableSequence, - Optional, - Sequence, - Tuple, - Type, - TypeVar, - Union, - cast, - no_type_check, - overload, -) - -from pydantic import BaseConfig, parse_obj_as - -from docarray.array.any_array import AnyDocArray -from docarray.array.doc_list.doc_list import DocList -from docarray.array.doc_vec.column_storage import ColumnStorage, ColumnStorageView -from docarray.array.list_advance_indexing import ListAdvancedIndexing -from docarray.base_doc import AnyDoc, BaseDoc -from docarray.base_doc.mixins.io import _type_to_protobuf -from docarray.typing import NdArray -from docarray.typing.tensor.abstract_tensor import AbstractTensor -from docarray.utils._internal._typing import is_tensor_union -from docarray.utils._internal.misc import is_tf_available, is_torch_available - -if TYPE_CHECKING: - from pydantic.fields import ModelField - - from docarray.proto import DocVecProto - -torch_available = is_torch_available() -if torch_available: - from docarray.typing import TorchTensor -else: - TorchTensor = None # type: ignore - -tf_available = is_tf_available() -if tf_available: - import tensorflow as tf # type: ignore - - from docarray.typing import TensorFlowTensor # noqa: F401 -else: - TensorFlowTensor = None # type: ignore - -T_doc = TypeVar('T_doc', bound=BaseDoc) -T = TypeVar('T', bound='DocVec') -IndexIterType = Union[slice, Iterable[int], Iterable[bool], None] - - -class DocVec(AnyDocArray[T_doc]): - """ - DocVec is a container of Documents appropriates to perform - computation that require batches of data (ex: matrix multiplication, distance - calculation, deep learning forward pass) - - A DocVec has a similar interface as [`DocList`][docarray.array.DocList] - but with an underlying implementation that is column based instead of row based. - Each field of the schema of the `DocVec` (the `.doc_type` which is a - [`BaseDoc`][docarray.BaseDoc]) will be stored in a column. - - If the field is a tensor, the data from all Documents will be stored as a single - (torch/np/tf) tensor. - - If the tensor field is `AnyTensor` or a Union of tensor types, the - `.tensor_type` will be used to determine the type of the column. - - If the field is another [`BaseDoc`][docarray.BaseDoc] the column will be another - `DocVec` that follows the schema of the nested Document. - - If the field is a [`DocList`][docarray.DocList] or `DocVec` then the column will - be a list of `DocVec`. - - For any other type the column is a Python list. - - Every `Document` inside a `DocVec` is a view into the data columns stored at the - `DocVec` level. The `BaseDoc` does not hold any data itself. The behavior of - this Document "view" is similar to the behavior of `view = tensor[i]` in - numpy/PyTorch. - - !!! note - DocVec supports optional fields. Nevertheless if a field is optional it needs to - be homogeneous. 
This means that if the first document has a None value all of the - other documents should have a None value as well. - !!! note - If one field is Optional the column will be stored - * as None if the first doc is as the field as None - * as a normal column otherwise that cannot contain None value - - :param docs: a homogeneous sequence of `BaseDoc` - :param tensor_type: Tensor Class used to wrap the doc_vec tensors. This is useful - if the BaseDoc of this DocVec has some undefined tensor type like - AnyTensor or Union of NdArray and TorchTensor - """ - - doc_type: Type[T_doc] - - def __init__( - self: T, - docs: Sequence[T_doc], - tensor_type: Type['AbstractTensor'] = NdArray, - ): - - if not hasattr(self, 'doc_type') or self.doc_type == AnyDoc: - raise TypeError( - f'{self.__class__.__name__} does not precise a doc_type. You probably should do' - f'docs = DocVec[MyDoc](docs) instead of DocVec(docs)' - ) - self.tensor_type = tensor_type - - tensor_columns: Dict[str, Optional[AbstractTensor]] = dict() - doc_columns: Dict[str, Optional['DocVec']] = dict() - docs_vec_columns: Dict[str, Optional[ListAdvancedIndexing['DocVec']]] = dict() - any_columns: Dict[str, ListAdvancedIndexing] = dict() - - if len(docs) == 0: - raise ValueError(f'docs {docs}: should not be empty') - docs = ( - docs - if isinstance(docs, DocList) - else DocList.__class_getitem__(self.doc_type)(docs) - ) - - for field_name, field in self.doc_type.__fields__.items(): - # here we iterate over the field of the docs schema, and we collect the data - # from each document and put them in the corresponding column - field_type = self.doc_type._get_field_type(field_name) - - is_field_required = self.doc_type.__fields__[field_name].required - - first_doc_is_none = getattr(docs[0], field_name) is None - - def _verify_optional_field_of_docs(docs): - - if is_field_required: - if first_doc_is_none: - raise ValueError( - f'Field {field_name} is None for {docs[0]} even though it is required' - ) - - if first_doc_is_none: - for i, doc in enumerate(docs): - if getattr(doc, field_name) is not None: - raise ValueError( - f'Field {field_name} is put to None for the first doc. This mean that all of the other docs should have this field set to None as well. This is not the case for {doc} at index {i}' - ) - - def _check_doc_field_not_none(field_name, doc): - if getattr(doc, field_name) is None: - raise ValueError( - f'Field {field_name} is None for {doc} even though it is not None for the first doc' - ) - - if is_tensor_union(field_type): - field_type = tensor_type - - if isinstance(field_type, type): - if tf_available and issubclass(field_type, TensorFlowTensor): - # tf.Tensor does not allow item assignment, therefore the - # optimized way - # of initializing an empty array and assigning values to it - # iteratively - # does not work here, therefore handle separately. 
- - if first_doc_is_none: - _verify_optional_field_of_docs(docs) - tensor_columns[field_name] = None - else: - tf_stack = [] - for i, doc in enumerate(docs): - val = getattr(doc, field_name) - _check_doc_field_not_none(field_name, doc) - tf_stack.append(val.tensor) - - stacked: tf.Tensor = tf.stack(tf_stack) - tensor_columns[field_name] = TensorFlowTensor(stacked) - - elif issubclass(field_type, AbstractTensor): - if first_doc_is_none: - _verify_optional_field_of_docs(docs) - tensor_columns[field_name] = None - else: - tensor = getattr(docs[0], field_name) - column_shape = ( - (len(docs), *tensor.shape) - if tensor is not None - else (len(docs),) - ) - tensor_columns[field_name] = field_type._docarray_from_native( - field_type.get_comp_backend().empty( - column_shape, - dtype=tensor.dtype - if hasattr(tensor, 'dtype') - else None, - device=tensor.device - if hasattr(tensor, 'device') - else None, - ) - ) - - for i, doc in enumerate(docs): - _check_doc_field_not_none(field_name, doc) - val = getattr(doc, field_name) - cast(AbstractTensor, tensor_columns[field_name])[i] = val - - elif issubclass(field_type, BaseDoc): - if first_doc_is_none: - _verify_optional_field_of_docs(docs) - doc_columns[field_name] = None - else: - if is_field_required: - doc_columns[field_name] = getattr( - docs, field_name - ).to_doc_vec(tensor_type=self.tensor_type) - else: - doc_columns[field_name] = DocList.__class_getitem__( - field_type - )(getattr(docs, field_name)).to_doc_vec( - tensor_type=self.tensor_type - ) - - elif issubclass(field_type, AnyDocArray): - if first_doc_is_none: - _verify_optional_field_of_docs(docs) - doc_columns[field_name] = None - else: - docs_list = list() - for doc in docs: - docs_nested = getattr(doc, field_name) - _check_doc_field_not_none(field_name, doc) - if isinstance(docs_nested, DocList): - docs_nested = docs_nested.to_doc_vec( - tensor_type=self.tensor_type - ) - docs_list.append(docs_nested) - docs_vec_columns[field_name] = ListAdvancedIndexing(docs_list) - else: - any_columns[field_name] = ListAdvancedIndexing( - getattr(docs, field_name) - ) - else: - any_columns[field_name] = ListAdvancedIndexing( - getattr(docs, field_name) - ) - - self._storage = ColumnStorage( - tensor_columns, - doc_columns, - docs_vec_columns, - any_columns, - tensor_type, - ) - - @classmethod - def from_columns_storage(cls: Type[T], storage: ColumnStorage) -> T: - """ - Create a DocVec directly from a storage object - :param storage: the underlying storage. 
- :return: a DocVec - """ - docs = cls.__new__(cls) - docs.tensor_type = storage.tensor_type - docs._storage = storage - return docs - - @classmethod - def validate( - cls: Type[T], - value: Union[T, Iterable[T_doc]], - field: 'ModelField', - config: 'BaseConfig', - ) -> T: - if isinstance(value, cls): - return value - elif isinstance(value, DocList.__class_getitem__(cls.doc_type)): - return cast(T, value.to_doc_vec()) - elif isinstance(value, Sequence): - return cls(value) - elif isinstance(value, Iterable): - return cls(list(value)) - else: - raise TypeError(f'Expecting an Iterable of {cls.doc_type}') - - def to(self: T, device: str) -> T: - """Move all tensors of this DocVec to the given device - - :param device: the device to move the data to - """ - for field, col_tens in self._storage.tensor_columns.items(): - if col_tens is not None: - self._storage.tensor_columns[ - field - ] = col_tens.get_comp_backend().to_device(col_tens, device) - - for field, col_doc in self._storage.doc_columns.items(): - if col_doc is not None: - self._storage.doc_columns[field] = col_doc.to(device) - for _, col_da in self._storage.docs_vec_columns.items(): - if col_da is not None: - for docs in col_da: - docs.to(device) - - return self - - ################################################ - # Accessing data : Indexing / Getitem related # - ################################################ - - @overload - def __getitem__(self: T, item: int) -> T_doc: - ... - - @overload - def __getitem__(self: T, item: IndexIterType) -> T: - ... - - def __getitem__(self: T, item: Union[int, IndexIterType]) -> Union[T_doc, T]: - if item is None: - return self # PyTorch behaviour - # multiple docs case - if isinstance(item, (slice, Iterable)): - return self.__class__.from_columns_storage(self._storage[item]) - # single doc case - return self.doc_type.from_view(ColumnStorageView(item, self._storage)) - - def _get_data_column( - self: T, - field: str, - ) -> Union[MutableSequence, 'DocVec', AbstractTensor, None]: - """Return one column of the data - - :param field: name of the fields to extract - :return: Returns a list of the field value for each document - in the array like container - """ - if field in self._storage.any_columns.keys(): - return self._storage.any_columns[field] - elif field in self._storage.docs_vec_columns.keys(): - return self._storage.docs_vec_columns[field] - elif field in self._storage.columns.keys(): - return self._storage.columns[field] - else: - raise ValueError(f'{field} does not exist in {self}') - - #################################### - # Updating data : Setitem related # - #################################### - - @overload - def __setitem__(self: T, key: int, value: T_doc): - ... - - @overload - def __setitem__(self: T, key: IndexIterType, value: T): - ... - - @no_type_check - def __setitem__(self: T, key, value): - # single doc case - if not isinstance(key, (slice, Iterable)): - if not isinstance(value, self.doc_type): - raise ValueError(f'{value} is not a {self.doc_type}') - - for field, value in value.dict().items(): - self._storage.columns[field][key] = value # todo we might want to - # define a safety mechanism in someone put a wrong value - else: - # multiple docs case - self._set_data_and_columns(key, value) - - def _set_data_and_columns( - self: T, - index_item: Union[Tuple, Iterable, slice], - value: Union[T, DocList[T_doc]], - ) -> None: - """Delegates the setting to the data and the columns. - - :param index_item: the key used as index. 
Needs to be a valid index for both - DocList (data) and column types (torch/tensorflow/numpy tensors) - :value: the value to set at the `key` location - """ - if isinstance(index_item, tuple): - index_item = list(index_item) - - # set data and prepare columns - processed_value: T - if isinstance(value, DocList): - if not issubclass(value.doc_type, self.doc_type): - raise TypeError( - f'{value} schema : {value.doc_type} is not compatible with ' - f'this DocVec schema : {self.doc_type}' - ) - processed_value = cast( - T, value.to_doc_vec(tensor_type=self.tensor_type) - ) # we need to copy data here - - elif isinstance(value, DocVec): - if not issubclass(value.doc_type, self.doc_type): - raise TypeError( - f'{value} schema : {value.doc_type} is not compatible with ' - f'this DocVec schema : {self.doc_type}' - ) - processed_value = value - else: - raise TypeError(f'Can not set a DocVec with {type(value)}') - - for field, col in self._storage.columns.items(): - col[index_item] = processed_value._storage.columns[field] - - def _set_data_column( - self: T, - field: str, - values: Union[ - Sequence[DocList[T_doc]], - Sequence[Any], - T, - DocList, - AbstractTensor, - None, - ], - ) -> None: - """Set all Documents in this DocList using the passed values - - :param field: name of the fields to set - :values: the values to set at the DocList level - """ - if values is None: - if field in self._storage.tensor_columns.keys(): - self._storage.tensor_columns[field] = values - elif field in self._storage.doc_columns.keys(): - self._storage.doc_columns[field] = values - elif field in self._storage.docs_vec_columns.keys(): - self._storage.docs_vec_columns[field] = values - elif field in self._storage.any_columns.keys(): - raise ValueError( - f'column {field} cannot be set to None, try to pass ' - f'a list of None instead' - ) - else: - raise ValueError(f'{field} does not exist in {self}') - - else: - if len(values) != len(self._storage): - raise ValueError( - f'{values} has not the right length, expected ' - f'{len(self._storage)} , got {len(values)}' - ) - if field in self._storage.tensor_columns.keys(): - - col = self._storage.tensor_columns[field] - if col is not None: - validation_class = col.__unparametrizedcls__ or col.__class__ - else: - validation_class = self.doc_type.__fields__[field].type_ - - # TODO shape check should be handle by the tensor validation - - values = parse_obj_as(validation_class, values) - self._storage.tensor_columns[field] = values - - elif field in self._storage.doc_columns.keys(): - values_ = parse_obj_as( - DocVec.__class_getitem__(self.doc_type._get_field_type(field)), - values, - ) - self._storage.doc_columns[field] = values_ - - elif field in self._storage.docs_vec_columns.keys(): - values_ = cast(Sequence[DocList[T_doc]], values) - # TODO here we should actually check if this is correct - self._storage.docs_vec_columns[field] = values_ - elif field in self._storage.any_columns.keys(): - # TODO here we should actually check if this is correct - values_ = cast(Sequence, values) - self._storage.any_columns[field] = values_ - else: - raise KeyError(f'{field} is not a valid field for this DocList') - - #################### - # Deleting data # - #################### - - def __delitem__(self, key: Union[int, IndexIterType]) -> None: - raise NotImplementedError( - f'{self.__class__.__name__} does not implement ' - f'__del_item__. You are trying to delete an element' - f'from {self.__class__.__name__} which is not ' - f'designed for this operation. 
Please `unstack`' - f' before doing the deletion' - ) - - #################### - # Sequence related # - #################### - def __iter__(self): - for i in range(len(self)): - yield self[i] - - def __len__(self): - return len(self._storage) - - #################### - # IO related # - #################### - - @classmethod - def from_protobuf(cls: Type[T], pb_msg: 'DocVecProto') -> T: - """create a Document from a protobuf message""" - storage = ColumnStorage( - pb_msg.tensor_columns, - pb_msg.doc_columns, - pb_msg.docs_vec_columns, - pb_msg.any_columns, - ) - - return cls.from_columns_storage(storage) - - def to_protobuf(self) -> 'DocVecProto': - """Convert DocVec into a Protobuf message""" - from docarray.proto import ( - DocListProto, - DocVecProto, - ListOfAnyProto, - ListOfDocArrayProto, - NdArrayProto, - ) - - da_proto = DocListProto() - for doc in self: - da_proto.docs.append(doc.to_protobuf()) - - doc_columns_proto: Dict[str, DocVecProto] = dict() - tensor_columns_proto: Dict[str, NdArrayProto] = dict() - da_columns_proto: Dict[str, ListOfDocArrayProto] = dict() - any_columns_proto: Dict[str, ListOfAnyProto] = dict() - - for field, col_doc in self._storage.doc_columns.items(): - doc_columns_proto[field] = ( - col_doc.to_protobuf() if col_doc is not None else None - ) - for field, col_tens in self._storage.tensor_columns.items(): - tensor_columns_proto[field] = ( - col_tens.to_protobuf() if col_tens is not None else None - ) - for field, col_da in self._storage.docs_vec_columns.items(): - list_proto = ListOfDocArrayProto() - if col_da: - for docs in col_da: - list_proto.data.append(docs.to_protobuf()) - da_columns_proto[field] = list_proto - for field, col_any in self._storage.any_columns.items(): - list_proto = ListOfAnyProto() - for data in col_any: - list_proto.data.append(_type_to_protobuf(data)) - any_columns_proto[field] = list_proto - - return DocVecProto( - doc_columns=doc_columns_proto, - tensor_columns=tensor_columns_proto, - docs_vec_columns=da_columns_proto, - any_columns=any_columns_proto, - ) - - def to_doc_list(self: T) -> DocList[T_doc]: - """Convert DocVec into a DocList. 
- - Note this destroys the arguments and returns a new DocList - """ - - unstacked_doc_column: Dict[str, Optional[DocList]] = dict() - unstacked_da_column: Dict[str, Optional[List[DocList]]] = dict() - unstacked_tensor_column: Dict[str, Optional[List[AbstractTensor]]] = dict() - unstacked_any_column = self._storage.any_columns - - for field, doc_col in self._storage.doc_columns.items(): - unstacked_doc_column[field] = doc_col.to_doc_list() if doc_col else None - - for field, da_col in self._storage.docs_vec_columns.items(): - - unstacked_da_column[field] = ( - [docs.to_doc_list() for docs in da_col] if da_col else None - ) - - for field, tensor_col in list(self._storage.tensor_columns.items()): - # list is needed here otherwise we cannot delete the column - if tensor_col is not None: - tensors = list() - for tensor in tensor_col: - tensor_copy = tensor.get_comp_backend().copy(tensor) - tensors.append(tensor_copy) - - unstacked_tensor_column[field] = tensors - del self._storage.tensor_columns[field] - - unstacked_column = ChainMap( # type: ignore - unstacked_any_column, # type: ignore - unstacked_tensor_column, # type: ignore - unstacked_da_column, # type: ignore - unstacked_doc_column, # type: ignore - ) # type: ignore - - docs = [] - - for i in range(len(self)): - data = {field: col[i] for field, col in unstacked_column.items()} - docs.append(self.doc_type.construct(**data)) - - del self._storage - - return DocList.__class_getitem__(self.doc_type).construct(docs) - - def traverse_flat( - self, - access_path: str, - ) -> Union[List[Any], 'TorchTensor', 'NdArray']: - nodes = list(AnyDocArray._traverse(node=self, access_path=access_path)) - flattened = AnyDocArray._flatten_one_level(nodes) - - cls_to_check = (NdArray, TorchTensor) if TorchTensor is not None else (NdArray,) - - if len(flattened) == 1 and isinstance(flattened[0], cls_to_check): - return flattened[0] - else: - return flattened diff --git a/spaces/Sup3r/Image-Upscaling-Playground/README.md b/spaces/Sup3r/Image-Upscaling-Playground/README.md deleted file mode 100644 index 1f50c61d45b587526bf15f6a71d29dea53aaab7a..0000000000000000000000000000000000000000 --- a/spaces/Sup3r/Image-Upscaling-Playground/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Image Upscaling Playground -emoji: 🦆 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: bookbot/Image-Upscaling-Playground ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Superlang/ImageProcessor/annotator/mediapipe_face/mediapipe_face_common.py b/spaces/Superlang/ImageProcessor/annotator/mediapipe_face/mediapipe_face_common.py deleted file mode 100644 index 0f7d3701dc40eee88977f17a877fa800d0ae328d..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/mediapipe_face/mediapipe_face_common.py +++ /dev/null @@ -1,155 +0,0 @@ -from typing import Mapping - -import mediapipe as mp -import numpy - - -mp_drawing = mp.solutions.drawing_utils -mp_drawing_styles = mp.solutions.drawing_styles -mp_face_detection = mp.solutions.face_detection # Only for counting faces. 
-mp_face_mesh = mp.solutions.face_mesh -mp_face_connections = mp.solutions.face_mesh_connections.FACEMESH_TESSELATION -mp_hand_connections = mp.solutions.hands_connections.HAND_CONNECTIONS -mp_body_connections = mp.solutions.pose_connections.POSE_CONNECTIONS - -DrawingSpec = mp.solutions.drawing_styles.DrawingSpec -PoseLandmark = mp.solutions.drawing_styles.PoseLandmark - -min_face_size_pixels: int = 64 -f_thick = 2 -f_rad = 1 -right_iris_draw = DrawingSpec(color=(10, 200, 250), thickness=f_thick, circle_radius=f_rad) -right_eye_draw = DrawingSpec(color=(10, 200, 180), thickness=f_thick, circle_radius=f_rad) -right_eyebrow_draw = DrawingSpec(color=(10, 220, 180), thickness=f_thick, circle_radius=f_rad) -left_iris_draw = DrawingSpec(color=(250, 200, 10), thickness=f_thick, circle_radius=f_rad) -left_eye_draw = DrawingSpec(color=(180, 200, 10), thickness=f_thick, circle_radius=f_rad) -left_eyebrow_draw = DrawingSpec(color=(180, 220, 10), thickness=f_thick, circle_radius=f_rad) -mouth_draw = DrawingSpec(color=(10, 180, 10), thickness=f_thick, circle_radius=f_rad) -head_draw = DrawingSpec(color=(10, 200, 10), thickness=f_thick, circle_radius=f_rad) - -# mp_face_mesh.FACEMESH_CONTOURS has all the items we care about. -face_connection_spec = {} -for edge in mp_face_mesh.FACEMESH_FACE_OVAL: - face_connection_spec[edge] = head_draw -for edge in mp_face_mesh.FACEMESH_LEFT_EYE: - face_connection_spec[edge] = left_eye_draw -for edge in mp_face_mesh.FACEMESH_LEFT_EYEBROW: - face_connection_spec[edge] = left_eyebrow_draw -# for edge in mp_face_mesh.FACEMESH_LEFT_IRIS: -# face_connection_spec[edge] = left_iris_draw -for edge in mp_face_mesh.FACEMESH_RIGHT_EYE: - face_connection_spec[edge] = right_eye_draw -for edge in mp_face_mesh.FACEMESH_RIGHT_EYEBROW: - face_connection_spec[edge] = right_eyebrow_draw -# for edge in mp_face_mesh.FACEMESH_RIGHT_IRIS: -# face_connection_spec[edge] = right_iris_draw -for edge in mp_face_mesh.FACEMESH_LIPS: - face_connection_spec[edge] = mouth_draw -iris_landmark_spec = {468: right_iris_draw, 473: left_iris_draw} - - -def draw_pupils(image, landmark_list, drawing_spec, halfwidth: int = 2): - """We have a custom function to draw the pupils because the mp.draw_landmarks method requires a parameter for all - landmarks. Until our PR is merged into mediapipe, we need this separate method.""" - if len(image.shape) != 3: - raise ValueError("Input image must be H,W,C.") - image_rows, image_cols, image_channels = image.shape - if image_channels != 3: # BGR channels - raise ValueError('Input image must contain three channel bgr data.') - for idx, landmark in enumerate(landmark_list.landmark): - if ( - (landmark.HasField('visibility') and landmark.visibility < 0.9) or - (landmark.HasField('presence') and landmark.presence < 0.5) - ): - continue - if landmark.x >= 1.0 or landmark.x < 0 or landmark.y >= 1.0 or landmark.y < 0: - continue - image_x = int(image_cols*landmark.x) - image_y = int(image_rows*landmark.y) - draw_color = None - if isinstance(drawing_spec, Mapping): - if drawing_spec.get(idx) is None: - continue - else: - draw_color = drawing_spec[idx].color - elif isinstance(drawing_spec, DrawingSpec): - draw_color = drawing_spec.color - image[image_y-halfwidth:image_y+halfwidth, image_x-halfwidth:image_x+halfwidth, :] = draw_color - - -def reverse_channels(image): - """Given a numpy array in RGB form, convert to BGR. Will also convert from BGR to RGB.""" - # im[:,:,::-1] is a neat hack to convert BGR to RGB by reversing the indexing order. 
- # im[:,:,::[2,1,0]] would also work but makes a copy of the data. - return image[:, :, ::-1] - - -def generate_annotation( - img_rgb, - max_faces: int, - min_confidence: float -): - """ - Find up to 'max_faces' inside the provided input image. - If min_face_size_pixels is provided and nonzero it will be used to filter faces that occupy less than this many - pixels in the image. - """ - with mp_face_mesh.FaceMesh( - static_image_mode=True, - max_num_faces=max_faces, - refine_landmarks=True, - min_detection_confidence=min_confidence, - ) as facemesh: - img_height, img_width, img_channels = img_rgb.shape - assert(img_channels == 3) - - results = facemesh.process(img_rgb).multi_face_landmarks - - if results is None: - print("No faces detected in controlnet image for Mediapipe face annotator.") - return numpy.zeros_like(img_rgb) - - # Filter faces that are too small - filtered_landmarks = [] - for lm in results: - landmarks = lm.landmark - face_rect = [ - landmarks[0].x, - landmarks[0].y, - landmarks[0].x, - landmarks[0].y, - ] # Left, up, right, down. - for i in range(len(landmarks)): - face_rect[0] = min(face_rect[0], landmarks[i].x) - face_rect[1] = min(face_rect[1], landmarks[i].y) - face_rect[2] = max(face_rect[2], landmarks[i].x) - face_rect[3] = max(face_rect[3], landmarks[i].y) - if min_face_size_pixels > 0: - face_width = abs(face_rect[2] - face_rect[0]) - face_height = abs(face_rect[3] - face_rect[1]) - face_width_pixels = face_width * img_width - face_height_pixels = face_height * img_height - face_size = min(face_width_pixels, face_height_pixels) - if face_size >= min_face_size_pixels: - filtered_landmarks.append(lm) - else: - filtered_landmarks.append(lm) - - # Annotations are drawn in BGR for some reason, but we don't need to flip a zero-filled image at the start. - empty = numpy.zeros_like(img_rgb) - - # Draw detected faces: - for face_landmarks in filtered_landmarks: - mp_drawing.draw_landmarks( - empty, - face_landmarks, - connections=face_connection_spec.keys(), - landmark_drawing_spec=None, - connection_drawing_spec=face_connection_spec - ) - draw_pupils(empty, face_landmarks, iris_landmark_spec, 2) - - # Flip BGR back to RGB. - empty = reverse_channels(empty).copy() - - return empty diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/modules/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/modules/__init__.py deleted file mode 100644 index 6fdbf03359958f3d67ab00f879bf6b61a6c8f06a..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/modules/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# ------------------------------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -# ------------------------------------------------------------------------------------------------ - -# Copyright (c) Facebook, Inc. and its affiliates. 
-# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR - -from .ms_deform_attn import MSDeformAttn diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/__init__.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/__init__.py deleted file mode 100644 index 8339983905fb5d20bae42ba6f76fea75d278b1aa..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -from .cgnet import CGNet -# from .fast_scnn import FastSCNN -from .hrnet import HRNet -from .mobilenet_v2 import MobileNetV2 -from .mobilenet_v3 import MobileNetV3 -from .resnest import ResNeSt -from .resnet import ResNet, ResNetV1c, ResNetV1d -from .resnext import ResNeXt -from .unet import UNet -from .vit import VisionTransformer -from .uniformer import UniFormer - -__all__ = [ - 'ResNet', 'ResNetV1c', 'ResNetV1d', 'ResNeXt', 'HRNet', - 'ResNeSt', 'MobileNetV2', 'UNet', 'CGNet', 'MobileNetV3', - 'VisionTransformer', 'UniFormer' -] diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/utils/__init__.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/utils/__init__.py deleted file mode 100644 index 3d3bdd349b9f2ae499a2fcb2ac1d2e3c77befebe..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/utils/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -from .drop import DropPath -from .inverted_residual import InvertedResidual, InvertedResidualV3 -from .make_divisible import make_divisible -from .res_layer import ResLayer -from .se_layer import SELayer -from .self_attention_block import SelfAttentionBlock -from .up_conv_block import UpConvBlock -from .weight_init import trunc_normal_ - -__all__ = [ - 'ResLayer', 'SelfAttentionBlock', 'make_divisible', 'InvertedResidual', - 'UpConvBlock', 'InvertedResidualV3', 'SELayer', 'DropPath', 'trunc_normal_' -] diff --git a/spaces/TandCAcceptMe/face-swap-docker/chain_img_processor/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/chain_img_processor/__init__.py deleted file mode 100644 index f8841b3954c11071f2596b9851fa3edfac4413d0..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/chain_img_processor/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .image import ChainImgProcessor, ChainImgPlugin, get_single_image_processor, version -from .video import ChainVideoProcessor, get_single_video_processor -from .batchimage import ChainBatchImageProcessor -from .ffmpeg_writer import FFMPEG_VideoWriter \ No newline at end of file diff --git a/spaces/Tej3/ECG_Classification/app.py b/spaces/Tej3/ECG_Classification/app.py deleted file mode 100644 index 832bfe9f5d260929e2e0de91df178e5fd6bf20e2..0000000000000000000000000000000000000000 --- a/spaces/Tej3/ECG_Classification/app.py +++ /dev/null @@ -1,114 +0,0 @@ -import os -import shutil -import gradio as gr -import numpy as np -import wfdb -import torch -from wfdb.plot.plot import plot_wfdb -from wfdb.io.record import Record, rdrecord - -from models.CNN import CNN, MMCNN_CAT -from models.RNN import MMRNN -from utils.helper_functions import predict - -import matplotlib -matplotlib.use('Agg') -import matplotlib.pyplot as plt - -from transformers import AutoTokenizer, AutoModel -from langdetect import detect - -# edit this before Running -CWD = os.getcwd() -#CKPT paths -MMCNN_CAT_ckpt_path = f"{CWD}/demo_data/model_MMCNN_CAT_epoch_30_acc_84.pt" 
-MMRNN_ckpt_path = f"{CWD}/demo_data/model_MMRNN_undersampled_augmented_rn_epoch_20_acc_84.pt" - -# Define clinical models and tokenizers -en_clin_bert = 'emilyalsentzer/Bio_ClinicalBERT' -ger_clin_bert = 'smanjil/German-MedBERT' - -en_tokenizer = AutoTokenizer.from_pretrained(en_clin_bert) -en_model = AutoModel.from_pretrained(en_clin_bert) - -g_tokenizer = AutoTokenizer.from_pretrained(ger_clin_bert) -g_model = AutoModel.from_pretrained(ger_clin_bert) - -def preprocess(data_file_path): - data = [wfdb.rdsamp(data_file_path)] - data = np.array([signal for signal, meta in data]) - return data - -def embed(notes): - if detect(notes) == 'en': - tokens = en_tokenizer(notes, return_tensors='pt') - outputs = en_model(**tokens) - else: - tokens = g_tokenizer(notes, return_tensors='pt') - outputs = g_model(**tokens) - - embeddings = outputs.last_hidden_state - embedding = torch.mean(embeddings, dim=1).squeeze(0) - - return embedding - # return torch.load(f'{"./data/embeddings/"}1.pt') -def plot_ecg(path): - record100 = rdrecord(path) - return plot_wfdb(record=record100, title='ECG Signal Graph', figsize=(12,10), return_fig=True) - -def infer(model,data, notes): - embed_notes = embed(notes).unsqueeze(0) - data= torch.tensor(data) - if model == "CNN": - model = MMCNN_CAT() - checkpoint = torch.load(MMCNN_CAT_ckpt_path, map_location="cpu") - model.load_state_dict(checkpoint['model_state_dict']) - data = data.transpose(1,2).float() - - elif model == "RNN": - model = MMRNN(device='cpu') - model.load_state_dict(torch.load(MMRNN_ckpt_path, map_location="cpu")['model_state_dict']) - data = data.float() - model.eval() - outputs, predicted = predict(model, data, embed_notes, device='cpu') - outputs = torch.sigmoid(outputs)[0] - return {'Conduction Disturbance':round(outputs[0].item(),2), 'Hypertrophy':round(outputs[1].item(),2), 'Myocardial Infarction':round(outputs[2].item(),2), 'Normal ECG':round(outputs[3].item(),2), 'ST/T Change':round(outputs[4].item(),2)} - -def run(model_name, header_file, data_file, notes): - demo_dir = f"{CWD}/demo_data" - hdr_dirname, hdr_basename = os.path.split(header_file.name) - data_dirname, data_basename = os.path.split(data_file.name) - shutil.copyfile(data_file.name, f"{demo_dir}/{data_basename}") - shutil.copyfile(header_file.name, f"{demo_dir}/{hdr_basename}") - data = preprocess(f"{demo_dir}/{hdr_basename.split('.')[0]}") - ECG_graph = plot_ecg(f"{demo_dir}/{hdr_basename.split('.')[0]}") - os.remove(f"{demo_dir}/{data_basename}") - os.remove(f"{demo_dir}/{hdr_basename}") - output = infer(model_name, data, notes) - return output, ECG_graph - -with gr.Blocks() as demo: - with gr.Row(): - model = gr.Radio(['CNN', 'RNN'], label= "Select Model") - with gr.Row(): - with gr.Column(scale=1): - header_file = gr.File(label = "header_file", file_types=[".hea"]) - data_file = gr.File(label = "data_file", file_types=[".dat"]) - notes = gr.Textbox(label = "Clinical Notes") - with gr.Column(scale=1): - output_prob = gr.Label({'Normal ECG':0, 'Myocardial Infarction':0, 'ST/T Change':0, 'Conduction Disturbance':0, 'Hypertrophy':0}, show_label=False) - with gr.Row(): - ecg_graph = gr.Plot(label = "ECG Signal Visualisation") - with gr.Row(): - predict_btn = gr.Button("Predict Class") - predict_btn.click(fn= run, inputs = [model, header_file, data_file, notes], outputs=[output_prob, ecg_graph]) - with gr.Row(): - gr.Examples(examples=[[f"{CWD}/demo_data/test/00001_lr.hea", f"{CWD}/demo_data/test/00001_lr.dat", "sinusrhythmus periphere niederspannung"],\ - 
[f"{CWD}/demo_data/test/00008_lr.hea", f"{CWD}/demo_data/test/00008_lr.dat", "sinusrhythmus linkstyp qrs(t) abnormal inferiorer infarkt alter unbest."], \ - [f"{CWD}/demo_data/test/00045_lr.hea", f"{CWD}/demo_data/test/00045_lr.dat", "sinusrhythmus unvollstÄndiger rechtsschenkelblock sonst normales ekg"],\ - [f"{CWD}/demo_data/test/00257_lr.hea", f"{CWD}/demo_data/test/00257_lr.dat", "premature atrial contraction(s). sinus rhythm. left atrial enlargement. qs complexes in v2. st segments are slightly elevated in v2,3. st segments are depressed in i, avl. t waves are low or flat in i, v5,6 and inverted in avl. consistent with ischaemic h"],\ - ], - inputs = [header_file, data_file, notes]) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/Tetel/secondbing/public/index.html b/spaces/Tetel/secondbing/public/index.html deleted file mode 100644 index 306e077820d4be6e0bc3325db6b8a622958d1789..0000000000000000000000000000000000000000 --- a/spaces/Tetel/secondbing/public/index.html +++ /dev/null @@ -1,860 +0,0 @@ - - - - - - ChatSydney - - - - -
- - - - - - diff --git a/spaces/Theivaprakasham/yolov6/tools/eval.py b/spaces/Theivaprakasham/yolov6/tools/eval.py deleted file mode 100644 index 79861ac2200a87fde94c9a8902dbf29f1ba421a3..0000000000000000000000000000000000000000 --- a/spaces/Theivaprakasham/yolov6/tools/eval.py +++ /dev/null @@ -1,86 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -import argparse -import os -import sys -import torch - -ROOT = os.getcwd() -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) - -from yolov6.core.evaler import Evaler -from yolov6.utils.events import LOGGER - - -def get_args_parser(add_help=True): - parser = argparse.ArgumentParser(description='YOLOv6 PyTorch Evalating', add_help=add_help) - parser.add_argument('--data', type=str, default='./data/coco.yaml', help='dataset.yaml path') - parser.add_argument('--weights', type=str, default='./weights/yolov6s.pt', help='model.pt path(s)') - parser.add_argument('--batch-size', type=int, default=32, help='batch size') - parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--conf-thres', type=float, default=0.001, help='confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.65, help='NMS IoU threshold') - parser.add_argument('--task', default='val', help='val, or speed') - parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--half', default=False, action='store_true', help='whether to use fp16 infer') - parser.add_argument('--save_dir', type=str, default='runs/val/exp', help='evaluation save dir') - args = parser.parse_args() - LOGGER.info(args) - return args - - -@torch.no_grad() -def run(data, - weights=None, - batch_size=32, - img_size=640, - conf_thres=0.001, - iou_thres=0.65, - task='val', - device='', - half=False, - model=None, - dataloader=None, - save_dir='', - ): - """ Run the evaluation process - - This function is the main process of evalutaion, supporting image file and dir containing images. - It has tasks of 'val', 'train' and 'speed'. Task 'train' processes the evaluation during training phase. - Task 'val' processes the evaluation purely and return the mAP of model.pt. Task 'speed' precesses the - evaluation of inference speed of model.pt. 
- - """ - - # task - Evaler.check_task(task) - if not os.path.exists(save_dir): - os.makedirs(save_dir) - - # reload thres/device/half/data according task - conf_thres, iou_thres = Evaler.reload_thres(conf_thres, iou_thres, task) - device = Evaler.reload_device(device, model, task) - half = device.type != 'cpu' and half - data = Evaler.reload_dataset(data) if isinstance(data, str) else data - - # init - val = Evaler(data, batch_size, img_size, conf_thres, \ - iou_thres, device, half, save_dir) - model = val.init_model(model, weights, task) - dataloader = val.init_data(dataloader, task) - - # eval - model.eval() - pred_result = val.predict_model(model, dataloader, task) - eval_result = val.eval_model(pred_result, model, dataloader, task) - return eval_result - - -def main(args): - run(**vars(args)) - - -if __name__ == "__main__": - args = get_args_parser() - main(args) diff --git a/spaces/TinkerFrank/AppleClassifier/app_serial.py b/spaces/TinkerFrank/AppleClassifier/app_serial.py deleted file mode 100644 index 31643c49e3c7b4270070100f3396fe16ea62bf5b..0000000000000000000000000000000000000000 --- a/spaces/TinkerFrank/AppleClassifier/app_serial.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch -import gradio as gr -from torch import nn -import cv2 -from torchvision.transforms import ToTensor -from torchvision.datasets import ImageFolder -import numpy as np -from PIL import Image - -device = 'cuda' if torch.cuda.is_available() else 'cpu' - -# RuntimeError: Attempting to deserialize object on a CUDA device but -# torch.cuda.is_available() is False. If you are running on a CPU-only machine, -# please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU. - -model = torch.load('apple_resnet_classifier.pt', map_location=torch.device('cpu')) -model.to(device) -model.eval() - - -def predict(image): - img = image.resize((224, 224)) - img = ToTensor()(img).unsqueeze(0).to(device) - #img = transforms.ToTensor()(img).unsqueeze(0) - with torch.no_grad(): - out = model(img) - _, predicted = torch.max(out.data, 1) - probabilities = torch.nn.functional.softmax(out, dim=1)[0] - #probabilities = torch.nn.functional.softmax(out.data, dim=1) - if predicted.item() == 1: - apple = 'Class I Apple' - else: - apple = 'Bad Apple' - - class_labels = ['Bad Apple', 'Normal Apple', 'Rot Apple', 'Scab Apple'] - #result = [] - - #from pictionary tutorial - values, indices = torch.topk(probabilities, 4) - confidences = {class_labels[i]: v.item() for i, v in zip(indices, values)} - return confidences - - # for idx, probability in enumerate(probabilities): - # result.append(f"{class_labels[idx]} - Probability: {probability.item():.2f}") - - - #return result - - -def answer_question(question): - interpreter = gr.Interface('huggingface/gpt2') - response = interpreter.process(question) - return response.result - - -description = """ -
-Classifier for Apples, based on a finetuned RESNET101 model
-[image: Bad Apple Silhouette]
-""" - -confidences_classifier = gr.Interface(fn=predict, - inputs=gr.Image(type="pil"), - outputs="json", - description=description, - examples=["myapple_1.jpg", "myapple_2.jpg", "myapple_3.jpg", "myapple_4.jpg"]) - -answer_generator = gr.Interface(fn=answer_question, - inputs="text", - outputs="text") - - -# #Open Generative QA: The model generates free text directly based on the context. -# #You can learn more about the Text Generation task in its page. -# ## So far only GPT-3.5 was able to NLP correctly over the dictionary - -# probabilityinterpreter = gr.Interface('huggingface/gpt2') - -gr.Series(confidences_classifier,answer_generator).launch() - diff --git a/spaces/TusharNautiyal/Dynamic-Movie-Recommender-With-Sentiment-Analysis/app.py b/spaces/TusharNautiyal/Dynamic-Movie-Recommender-With-Sentiment-Analysis/app.py deleted file mode 100644 index 5aeba791d5bd52f0262bbe1bc7a73860a01f5a32..0000000000000000000000000000000000000000 --- a/spaces/TusharNautiyal/Dynamic-Movie-Recommender-With-Sentiment-Analysis/app.py +++ /dev/null @@ -1,290 +0,0 @@ -import re -import pandas as pd -import streamlit as st -import requests -import random -import nltk -nltk.download('all') -from nltk.corpus import stopwords -import cloudpickle -import pickle -from urllib.request import urlopen - - -final_df = pd.read_csv('final_df.csv') -global similarity_bert -global similarity_bag_of_Words -global similarity_with_tf_idf_word_2_vec -global similarity_with_word_2_vec -global tf_idf_similarities -similarity = {'similarity_bert':'', - 'similarity_bag_of_Words':'', - 'tf_idf_similarities':'', - 'similarity_with_word_2_vec':'', - 'similarity_with_tf_idf_word_2_vec':''} - -model_sentiment = pickle.load(open('model_sentiment.pkl','rb')) -tf_idf_vectorizer = pickle.load(open('tf_idf_vectorizer.pkl','rb')) - - -def find_closest(text): - text = text.strip() - new_text = text.split(' ') - for word in new_text[:]: - if word in stopwords.words('english'): - new_text.remove(word) - index = random.randint(0,len(new_text)-1) - text = new_text[index] - spliteer = final_df['title_y'].str.split(' ') - i = 0 - for val in spliteer.to_list(): - if text in val: - break - i+=1 - return i - -def recommend(movie,model): - result = [] - movie = movie.lower() - titles = final_df['title_y'].str.lower().to_list() - - if movie in titles: - index = final_df.loc[final_df['title_y'].str.lower() == movie].index[0] - else: - index = find_closest(movie) - if index==4800: - raise ValueError('Please Enter a correct movie name so that we can recommend properly Please recheck') - if(model == 'bert'): - similarity_bert = similarity['similarity_bert'] - distances = sorted(list(enumerate(similarity_bert[index])),reverse=True,key = lambda x: x[1]) - - elif(model=='bag_of_words'): - st.write(similarity['similarity_bag_of_Words']) - similarity_bag_of_Words = similarity['similarity_bag_of_Words'] - distances = sorted(list(enumerate(similarity_bag_of_Words[index])), reverse=True,key = lambda x: x[1]) - - elif(model=='tf-idf'): - tf_idf_similarities = similarity['tf_idf_similarities'] - distances = sorted(list(enumerate(tf_idf_similarities[index])), reverse=True,key = lambda x: x[1]) - - elif(model=='word2vec'): - similarity_with_word_2_vec = similarity['similarity_with_word_2_vec'] - distances = sorted(list(enumerate(similarity_with_word_2_vec[index])), reverse=True,key = lambda x: x[1]) - - elif(model=='tf-idf+word2vec'): - similarity_with_tf_idf_word_2_vec = similarity['similarity_with_tf_idf_word_2_vec'] - distances = 
sorted(list(enumerate(similarity_with_tf_idf_word_2_vec[index])), reverse=True,key = lambda x: x[1]) - - for i in distances[0:6]: - result.append([final_df.iloc[i[0]].id, final_df.iloc[i[0]].title_y]) - return result - -def main(): - st.set_page_config(layout="wide") - html_footer = """ - - - - """ - st.markdown(html_footer,unsafe_allow_html=True) - hide_footer_style = """ - - """ - st.markdown(hide_footer_style, unsafe_allow_html=True) - - #We will use session states. This will help in saving models once loaded so that for one instance you don't have to do downloads again. - - - with st.form("my_form"): - st.title('Movie Recommeder System') - st.text('You Can Switch Between models to see the performance of recommendation') - st.markdown('We have used Bag of Words ,**BERT** specifically **(multi-qa-MiniLM-L6-cos-v1)** , **TF-IDF**, and implemented **TF-IDF + Word2Vec** Model Check repo to understand better. By Default the Flow is in BERT if you want to swith select a model from below. This recommender system is a content base recommendation system.') - model = st.selectbox('Select A Model Procedure',('Bert', 'Bag of Words','TF-IDF','Word2Vec','TF-IDF + Word2Vec')) - query = st.text_input('Enter Any Movie Name or something related to that movie') - submitted = st.form_submit_button("RECOMMEND") - - if st.session_state.get('button') != True: - st.session_state['button'] = submitted # Saved the state - - - if st.session_state['button'] == True: - if(model=='Bert'): - model= 'bert' - if 'similarity_bert' not in st.session_state: - with st.spinner('Wait Model is Loading.....Till Then How much you like movies'): - st.session_state.similarity_bert = '' - similarity['similarity_bert'] = cloudpickle.load(urlopen('https://drive.google.com/uc?export=download&id=131DguHzk9ZF6AGNozHRawwdFupgycqUT')) - st.session_state.similarity_bert = similarity['similarity_bert'] - else: - similarity['similarity_bert'] = st.session_state['similarity_bert'] - st.success(f'Done!') - - - elif(model == 'Bag of Words'): - model = 'bag_of_words' - - if 'similarity_bag_of_Words' not in st.session_state: - with st.spinner('Wait Model is Loading.....Till Then How much you like movies'): - st.session_state.similarity_bag_of_Words = '' - similarity['similarity_bag_of_Words'] = cloudpickle.load(urlopen('https://drive.google.com/uc?export=download&id=1o7pWZfaku_43do0beNfOI6Pz9JeAM6n3&confirm=t&uuid=ccc39f37-f727-49fb-8a30-c9214b30e5f3')) - st.session_state['similarity_bag_of_Words'] = similarity['similarity_bag_of_Words'] - else: - similarity['similarity_bag_of_Words'] = st.session_state['similarity_bag_of_Words'] - st.success(f'Done!') - - - elif(model == 'TF-IDF'): - model = 'tf-idf' - - if 'tf_idf_similarities' not in st.session_state: - with st.spinner('Wait Model is Loading.....Till Then How much you like movies'): - st.session_state.tf_idf_similarities = '' - similarity['tf_idf_similarities'] = cloudpickle.load(urlopen("https://drive.google.com/uc?export=download&id=1ZcL60svASwVrLoAgBnj43tM8i9IkES_E&confirm=t&uuid=e38591c2-777b-490e-a574-700b33ea642e")) - st.session_state.tf_idf_similarities = similarity['tf_idf_similarities'] - else: - similarity['tf_idf_similarities'] = st.session_state.tf_idf_similarities - st.success(f'Done!') - - elif(model == 'TF-IDF + Word2Vec'): - model = 'tf-idf+word2vec' - if 'similarity_with_tf_idf_word_2_vec' not in st.session_state: - with st.spinner('Wait Model is Loading.....Till Then How much you like movies?'): - st.session_state.similarity_with_tf_idf_word_2_vec = '' - 
similarity['similarity_with_tf_idf_word_2_vec'] = cloudpickle.load(urlopen("https://drive.google.com/uc?export=download&id=1Ykoqty6n9uXn1oBXRCjuFr6mnqUuIVq6&confirm=t&uuid=12d331b1-5ff7-4c7b-92eb-884dfd7525ab")) - st.session_state.similarity_with_tf_idf_word_2_vec = similarity['similarity_with_tf_idf_word_2_vec'] - else: - similarity['similarity_with_tf_idf_word_2_vec'] = st.session_state.similarity_with_tf_idf_word_2_vec - st.success(f'Done!') - - - elif(model == 'Word2Vec'): - model = 'word2vec' - if 'similarity_with_word_2_vec' not in st.session_state: - with st.spinner('Wait Model is Loading.....Till Then How much you like movies'): - st.session_state.similarity_with_word_2_vec = '' - similarity['similarity_with_word_2_vec'] = cloudpickle.load(urlopen("https://drive.google.com/uc?export=download&id=1dpWQotH3TEPVyJTaBonwTILY3DCtTodb")) - st.session_state.similarity_with_word_2_vec = similarity['similarity_with_word_2_vec'] - else: - similarity['similarity_with_word_2_vec'] = st.session_state.similarity_with_word_2_vec - st.success(f'Done!') - - output_images = [] - output_names = [] - if query!=None: - res = recommend(query,model) - if len(res)<=1: - raise TypeError("Hi Looks Like The Query You have entered iam not able to find Please Try Again dont add spaces during the start of the text or Don't Add special characters like @ - # etc") - for ele in res: - image = requests.get(f'https://api.themoviedb.org/3/movie/{ele[0]}/images?api_key=81428e7817728a742c8e842120989817') - data = image.json() - data = data['backdrops'][0]['file_path'] - output_images.append('http://image.tmdb.org/t/p/w500/'+data) - output_names.append(ele[1]) - - col1, col2, col3 = st.columns(3) - with col1: - st.image(output_images[0]) - st.markdown(output_names[0].upper()) - review = st.text_input(f"How much you liked the movie {output_names[0]}",key='review1') - btn0 = st.button('submit',key = 'btn0') - if btn0: - review = re.sub('[^a-zA-Z0-9 ]','',review) - review = tf_idf_vectorizer.transform([review]) - ans = model_sentiment.predict(review) - if ans == 0: - review = 'Thanks for your positive review' - else: - review = 'Sorry for your negative review' - st.write(review) - with col2: - st.image(output_images[1]) - st.markdown(output_names[1].upper()) - review = st.text_input(f"How much you liked the movie {output_names[1]}",key='review2') - btn1 = st.button('submit',key = 'btn1') - if btn1: - review = re.sub('[^a-zA-Z0-9 ]','',review) - review = tf_idf_vectorizer.transform([review]) - ans = model_sentiment.predict(review) - if ans == 0: - review = 'Thanks for your positive review' - else: - review = 'Sorry for your negative review' - st.write(review) - - with col3: - st.image(output_images[2]) - st.markdown(output_names[2].upper()) - review = st.text_input(f"How much you liked the movie {output_names[2]}",key='review3') - btn2 = st.button('submit',key = 'btn2') - if btn2: - review = re.sub('[^a-zA-Z0-9 ]','',review) - review = tf_idf_vectorizer.transform([review]) - ans = model_sentiment.predict(review) - if ans == 0: - review = 'Thanks for your positive review' - else: - review = 'Sorry for your negative review' - st.write(review) - - col4, col5, col6 = st.columns(3) - - with col4: - st.image(output_images[3]) - st.markdown(output_names[3].upper()) - review = st.text_input(f"How much you liked the movie {output_names[3]}",key='review4') - if st.button('submit',key='btn3'): - review = re.sub('[^a-zA-Z0-9 ]','',review) - review = tf_idf_vectorizer.transform([review]) - ans = model_sentiment.predict(review) - if ans 
== 0: - review = 'Thanks for your positive review' - else: - review = 'Sorry for your negative review' - st.write(review) - - with col5: - st.image(output_images[4]) - st.markdown(output_names[4].upper()) - review = st.text_input(f"How much you liked the movie {output_names[4]}",key='review5') - if st.button('submit',key='btn4'): - review = re.sub('[^a-zA-Z0-9 ]','',review) - review = tf_idf_vectorizer.transform([review]) - ans = model_sentiment.predict(review) - if ans == 0: - review = 'Thanks for your positive review' - else: - review = 'Sorry for your negative review' - st.write(review) - - with col6: - st.image(output_images[5]) - st.markdown(output_names[5].upper()) - review = st.text_input(f"How much you liked the movie {output_names[5]}",key='review6') - if st.button('submit',key = 'btn5'): - review = re.sub('[^a-zA-Z0-9 ]','',review) - review = tf_idf_vectorizer.transform([review]) - ans = model_sentiment.predict(review) - if ans == 0: - review = 'Thanks for your positive review' - else: - review = 'Sorry for your negative review' - st.write(review) - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/VickyKira/NASAGPT/client/css/buttons.css b/spaces/VickyKira/NASAGPT/client/css/buttons.css deleted file mode 100644 index e13f52d9a0414daaa80518bd205913a645a29563..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/client/css/buttons.css +++ /dev/null @@ -1,4 +0,0 @@ -.buttons { - display: flex; - justify-content: left; -} diff --git a/spaces/VietVuiVe/PhanLoaiTraiCay/app.py b/spaces/VietVuiVe/PhanLoaiTraiCay/app.py deleted file mode 100644 index 4f86cb4638878fccb1a42d3c8d24c514b84ce91a..0000000000000000000000000000000000000000 --- a/spaces/VietVuiVe/PhanLoaiTraiCay/app.py +++ /dev/null @@ -1,78 +0,0 @@ -### 1. Imports and class names setup ### -import gradio as gr -import os -import torch - -from model import create_effnetb2_model -from timeit import default_timer as timer -from typing import Tuple, Dict - -# Setup class names -class_names = ["Táo (Apple)", "Bơ (Avocado)", "Chuối (Banana)", "Hồng Xiêm (Sapoche)", "Quýt (Clementine)", "Dừa (Coconut)", "Thanh Long (Dragonfruit)", "Sầu Riêng (Durian)", "Nho (Grape)", "Bưởi (Jackfruit)", "Chanh (Lime)", "Nhãn (Longan)", "Vải (Lychee)", "Cam (Orange)", "Đu Đủ (Papaya)", "Dứa (Pineapple)", "Lựu (Pomegranate)", "Dâu (Strawberry)", "Dưa Hấu (Watermelon)"] - -### 2. Model and transforms preparation ### - -# Create EffNetB2 model -effnetb2, effnetb2_transforms = create_effnetb2_model( - num_classes=19, # len(class_names) would also work -) - -# Load saved weights -effnetb2.load_state_dict( - torch.load( - f="pretrained_effnetb2_feature_extractor_.pth", - map_location=torch.device("cpu"), # load to CPU - ) -) - -### 3. Predict function ### - -# Create predict function -def predict(img) -> Tuple[Dict, float]: - """Transforms and performs a prediction on img and returns prediction and time taken. 
- """ - # Start the timer - start_time = timer() - - # Transform the target image and add a batch dimension - img = effnetb2_transforms(img).unsqueeze(0) - - # Put model into evaluation mode and turn on inference mode - effnetb2.eval() - with torch.inference_mode(): - # Pass the transformed image through the model and turn the prediction logits into prediction probabilities - pred_probs = torch.softmax(effnetb2(img), dim=1) - - # Create a prediction label and prediction probability dictionary for each prediction class (this is the required format for Gradio's output parameter) - pred_labels_and_probs = {class_names[i]: float(pred_probs[0][i]) for i in range(len(class_names))} - - # Calculate the prediction time - pred_time = round(timer() - start_time, 5) - - # Return the prediction dictionary and prediction time - return pred_labels_and_probs, pred_time - -### 4. Gradio app ### - -# Create title, description and article strings -title = "Phân loại trái cây qua hình ảnh 🍓🍉🍌🥑🍏" -description = "Phân loại trái cây qua hình ảnh dùng EfficientNetB0 feature extractor computer vision model. Hiện tại đã phân loại được 19 loại trái cây Việt Nam gồm: Táo, Bơ, Chuối, Hồng Xiêm, Quýt, Dừa, Thanh long, Sầu riêng, Nho, Bưởi, Chanh, Nhãn, Vải, Cam, Đu Đủ, Dứa, Lựu, Dâu, Dưa hấu với tỉ lệ chính xác hơn 91%." -article = "Created by team 9: Xử lý ảnh và ứng dụng - CS406.N11. Public Source Code: https://github.com/19522515/PhanLoaiTraiCay " - -# Create examples list from "examples/" directory -example_list = [["examples/" + example] for example in os.listdir("examples")] - -# Create the Gradio demo -demo = gr.Interface(fn=predict, # mapping function from input to output - inputs=gr.Image(type="pil"), # what are the inputs? - outputs=[gr.Label(num_top_classes=19, label="Predictions"), # what are the outputs? - gr.Number(label="Prediction time (s)")], # our fn has two outputs, therefore we have two outputs - theme='darkhuggingface', - # Create examples list from "examples/" directory - examples=example_list, - title=title, - description=description, - article=article) - -# Launch the demo! -demo.launch() diff --git a/spaces/XzJosh/Bekki-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/XzJosh/Bekki-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md deleted file mode 100644 index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Bekki-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -language: -- zh -tags: -- bert -license: "apache-2.0" ---- - -# Please use 'Bert' related functions to load this model! - -## Chinese BERT with Whole Word Masking -For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. 
- -**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** -Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu - -This repository is developed based on:https://github.com/google-research/bert - -You may also interested in, -- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm -- Chinese MacBERT: https://github.com/ymcui/MacBERT -- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA -- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet -- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer - -More resources by HFL: https://github.com/ymcui/HFL-Anthology - -## Citation -If you find the technical report or resource is useful, please cite the following technical report in your paper. -- Primary: https://arxiv.org/abs/2004.13922 -``` -@inproceedings{cui-etal-2020-revisiting, - title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", - author = "Cui, Yiming and - Che, Wanxiang and - Liu, Ting and - Qin, Bing and - Wang, Shijin and - Hu, Guoping", - booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", - month = nov, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", - pages = "657--668", -} -``` -- Secondary: https://arxiv.org/abs/1906.08101 -``` -@article{chinese-bert-wwm, - title={Pre-Training with Whole Word Masking for Chinese BERT}, - author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, - journal={arXiv preprint arXiv:1906.08101}, - year={2019} - } -``` \ No newline at end of file diff --git a/spaces/XzJosh/Carol-Bert-VITS2/losses.py b/spaces/XzJosh/Carol-Bert-VITS2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Carol-Bert-VITS2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. 
* logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/YuAnthony/Audio-Caption/data_handling/collate_fn.py b/spaces/YuAnthony/Audio-Caption/data_handling/collate_fn.py deleted file mode 100644 index 5164c16fcc25cb3d3cc8fc31b54e192e3fb85d5d..0000000000000000000000000000000000000000 --- a/spaces/YuAnthony/Audio-Caption/data_handling/collate_fn.py +++ /dev/null @@ -1,162 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -from typing import MutableSequence, Union, Tuple, AnyStr -from numpy import ndarray -import torch -from torch import cat as pt_cat, zeros as pt_zeros, \ - ones as pt_ones, from_numpy, Tensor -from hparams import hparams as hp -from data_augmentation.SpecAugment import spec_augment - -__author__ = 'Konstantinos Drossos -- Tampere University' -__docformat__ = 'reStructuredText' -__all__ = ['clotho_collate_fn'] - - -def clotho_collate_fn(batch: MutableSequence[ndarray], - nb_t_steps: Union[AnyStr, Tuple[int, int]], - input_pad_at: str, - output_pad_at: str) \ - -> Tuple[Tensor, Tensor]: - """Pads data. - - :param batch: Batch data. - :type batch: list[numpy.ndarray] - :param nb_t_steps: Number of time steps to\ - pad/truncate to. Cab use\ - 'max', 'min', or exact number\ - e.g. (1024, 10). - :type nb_t_steps: str|(int, int) - :param input_pad_at: Pad input at the start or\ - at the end? - :type input_pad_at: str - :param output_pad_at: Pad output at the start or\ - at the end? - :type output_pad_at: str - :return: Padded data. - :rtype: torch.Tensor, torch.Tensor - """ - if type(nb_t_steps) == str: - truncate_fn = max if nb_t_steps.lower() == 'max' else min - in_t_steps = truncate_fn([i[0].shape[0] for i in batch]) - out_t_steps = truncate_fn([i[1].shape[0] for i in batch]) - else: - in_t_steps, out_t_steps = nb_t_steps - - in_dim = batch[0][0].shape[-1] - eos_token = batch[0][1][-1] - PAD = 4367 - - input_tensor, output_tensor = [], [] - - for in_b, out_b in batch: - if in_t_steps >= in_b.shape[0]: - padding = pt_zeros(in_t_steps - in_b.shape[0], in_dim).float() - data = [from_numpy(in_b).float()] - if input_pad_at.lower() == 'start': - data.insert(0, padding) - else: - data.append(padding) - tmp_in: Tensor = pt_cat(data) - else: - tmp_in: Tensor = from_numpy(in_b[:in_t_steps, :]).float() - input_tensor.append(tmp_in.unsqueeze_(0)) - - if out_t_steps >= out_b.shape[0]: - padding = pt_ones(out_t_steps - len(out_b)).mul(PAD).long() - data = [from_numpy(out_b).long()] - if output_pad_at.lower() == 'start': - data.insert(0, padding) - else: - data.append(padding) - - tmp_out: Tensor = pt_cat(data) - else: - tmp_out: Tensor = from_numpy(out_b[:out_t_steps]).long() - output_tensor.append(tmp_out.unsqueeze_(0)) - - input_tensor = pt_cat(input_tensor) - output_tensor = pt_cat(output_tensor) - - return input_tensor, output_tensor - - -def clotho_collate_fn_eval(batch: MutableSequence[ndarray], - nb_t_steps: Union[AnyStr, Tuple[int, int]], - input_pad_at: str, - output_pad_at: str, - split: str, - augment:bool) \ - -> Tuple[Tensor, Tensor, Tensor, list]: - """Pads data. - - :param batch: Batch data. - :type batch: list[numpy.ndarray] - :param nb_t_steps: Number of time steps to\ - pad/truncate to. Cab use\ - 'max', 'min', or exact number\ - e.g. (1024, 10). - :type nb_t_steps: str|(int, int) - :param input_pad_at: Pad input at the start or\ - at the end? - :type input_pad_at: str - :param output_pad_at: Pad output at the start or\ - at the end? - :type output_pad_at: str - :return: Padded data. 
- :rtype: torch.Tensor, torch.Tensor - """ - if type(nb_t_steps) == str: - truncate_fn = max if nb_t_steps.lower() == 'max' else min - in_t_steps = truncate_fn([i[0].shape[0] for i in batch]) - out_t_steps = truncate_fn([i[1].shape[0] for i in batch]) - else: - in_t_steps, out_t_steps = nb_t_steps - - in_dim = batch[0][0].shape[-1] - eos_token = batch[0][1][-1] - batch = sorted(batch, key=lambda x: x[-1],reverse=True) - PAD = 4367 - input_tensor, output_tensor = [], [] - - for in_b, out_b, ref, filename,out_len in batch: - if in_t_steps >= in_b.shape[0]: - padding = pt_zeros(in_t_steps - in_b.shape[0], in_dim).float() - data = [from_numpy(in_b).float()] - if input_pad_at.lower() == 'start': - data.insert(0, padding) - else: - data.append(padding) - tmp_in: Tensor = pt_cat(data) - else: - tmp_in: Tensor = from_numpy(in_b[:in_t_steps, :]).float() - input_tensor.append(tmp_in.unsqueeze_(0)) - - if out_t_steps >= out_b.shape[0]: - padding = pt_ones(out_t_steps - len(out_b)).mul(PAD).long() - data = [from_numpy(out_b).long()] - if output_pad_at.lower() == 'start': - data.insert(0, padding) - else: - data.append(padding) - - tmp_out: Tensor = pt_cat(data) - else: - tmp_out: Tensor = from_numpy(out_b[:out_t_steps]).long() - output_tensor.append(tmp_out.unsqueeze_(0)) - - input_tensor = pt_cat(input_tensor) - - if augment: - input_tensor = spec_augment(input_tensor) - - output_tensor = pt_cat(output_tensor) - all_ref = [i[2] for i in batch] - filename = [i[3] for i in batch] - *_, target_len = zip(*batch) - target_len = torch.LongTensor(target_len) - - return input_tensor, output_tensor,target_len, all_ref - -# EOF diff --git a/spaces/Yukki-Yui/moe-tts/text/__init__.py b/spaces/Yukki-Yui/moe-tts/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/Yukki-Yui/moe-tts/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/Yuliang/ECON/lib/dataset/EvalDataset.py b/spaces/Yuliang/ECON/lib/dataset/EvalDataset.py deleted file mode 100644 index 9ef6426285b103b04db4f0c827c6755056bcc01b..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/dataset/EvalDataset.py +++ /dev/null @@ -1,298 +0,0 @@ -# -*- coding: utf-8 -*- - -# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. 
-# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2019 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute -# for Intelligent Systems. All rights reserved. -# -# Contact: ps-license@tuebingen.mpg.de - -import os -import os.path as osp - -import cv2 -import numpy as np -import torch -import torch.nn.functional as F -import torchvision.transforms as transforms -import trimesh -from PIL import Image - -from lib.common.render import Render -from lib.dataset.mesh_util import SMPLX, HoppeMesh, projection, rescale_smpl - -cape_gender = { - "male": - ['00032', '00096', '00122', '00127', '00145', '00215', '02474', '03284', '03375', - '03394'], "female": ['00134', '00159', '03223', '03331', '03383'] -} - - -class EvalDataset: - def __init__(self, cfg, device): - - self.root = cfg.root - self.bsize = cfg.batch_size - - self.opt = cfg.dataset - self.datasets = self.opt.types - self.input_size = self.opt.input_size - self.scales = self.opt.scales - self.vol_res = cfg.vol_res - - # [(feat_name, channel_num),...] - self.in_geo = [item[0] for item in cfg.net.in_geo] - self.in_nml = [item[0] for item in cfg.net.in_nml] - - self.in_geo_dim = [item[1] for item in cfg.net.in_geo] - self.in_nml_dim = [item[1] for item in cfg.net.in_nml] - - self.in_total = self.in_geo + self.in_nml - self.in_total_dim = self.in_geo_dim + self.in_nml_dim - - self.rotations = range(0, 360, 120) - - self.datasets_dict = {} - - for dataset_id, dataset in enumerate(self.datasets): - - dataset_dir = osp.join(self.root, dataset) - - mesh_dir = osp.join(dataset_dir, "scans") - smplx_dir = osp.join(dataset_dir, "smplx") - smpl_dir = osp.join(dataset_dir, "smpl") - - self.datasets_dict[dataset] = { - "smplx_dir": smplx_dir, - "smpl_dir": smpl_dir, - "mesh_dir": mesh_dir, - "scale": self.scales[dataset_id], - } - - self.datasets_dict[dataset].update({ - "subjects": - np.loadtxt(osp.join(dataset_dir, "all.txt"), dtype=str) - }) - - self.subject_list = self.get_subject_list() - self.smplx = SMPLX() - - # PIL to tensor - self.image_to_tensor = transforms.Compose([ - transforms.Resize(self.input_size), - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), - ]) - - # PIL to tensor - self.mask_to_tensor = transforms.Compose([ - transforms.Resize(self.input_size), - transforms.ToTensor(), - transforms.Normalize((0.0, ), (1.0, )), - ]) - - self.device = device - self.render = Render(size=512, device=self.device) - - def render_normal(self, verts, faces): - - # render optimized mesh (normal, T_normal, image [-1,1]) - self.render.load_meshes(verts, faces) - return self.render.get_image() - - def get_subject_list(self): - - subject_list = [] - - for dataset in self.datasets: - - split_txt = "" - - if dataset == 'renderpeople': - split_txt = osp.join(self.root, dataset, "loose.txt") - elif dataset == 'cape': - split_txt = osp.join(self.root, dataset, "pose.txt") - - if osp.exists(split_txt) and osp.getsize(split_txt) > 0: - print(f"load from {split_txt}") - subject_list += np.loadtxt(split_txt, dtype=str).tolist() - - return subject_list - - def __len__(self): - return len(self.subject_list) * len(self.rotations) - - def __getitem__(self, index): - - rid = index % len(self.rotations) - mid = index // len(self.rotations) - - rotation = self.rotations[rid] - subject = self.subject_list[mid].split("/")[1] - dataset = self.subject_list[mid].split("/")[0] - render_folder = "/".join([dataset + 
f"_{self.opt.rotation_num}views", subject]) - - if not osp.exists(osp.join(self.root, render_folder)): - render_folder = "/".join([dataset + "_36views", subject]) - - # setup paths - data_dict = { - "dataset": dataset, - "subject": subject, - "rotation": rotation, - "scale": self.datasets_dict[dataset]["scale"], - "calib_path": osp.join(self.root, render_folder, "calib", f"{rotation:03d}.txt"), - "image_path": osp.join(self.root, render_folder, "render", f"{rotation:03d}.png"), - } - - if dataset == "cape": - data_dict.update({ - "mesh_path": - osp.join(self.datasets_dict[dataset]["mesh_dir"], f"{subject}.obj"), - "smpl_path": - osp.join(self.datasets_dict[dataset]["smpl_dir"], f"{subject}.obj"), - }) - else: - - data_dict.update({ - "mesh_path": - osp.join( - self.datasets_dict[dataset]["mesh_dir"], - f"{subject}.obj", - ), - "smplx_path": - osp.join(self.datasets_dict[dataset]["smplx_dir"], f"{subject}.obj"), - }) - - # load training data - data_dict.update(self.load_calib(data_dict)) - - # image/normal/depth loader - for name, channel in zip(self.in_total, self.in_total_dim): - - if f"{name}_path" not in data_dict.keys(): - data_dict.update({ - f"{name}_path": - osp.join(self.root, render_folder, name, f"{rotation:03d}.png") - }) - - # tensor update - if os.path.exists(data_dict[f"{name}_path"]): - data_dict.update({ - name: - self.imagepath2tensor(data_dict[f"{name}_path"], channel, inv=False) - }) - - data_dict.update(self.load_mesh(data_dict)) - data_dict.update(self.load_smpl(data_dict)) - - del data_dict["mesh"] - - return data_dict - - def imagepath2tensor(self, path, channel=3, inv=False): - - rgba = Image.open(path).convert("RGBA") - - # remove CAPE's noisy outliers using OpenCV's inpainting - if "cape" in path and "T_" not in path: - mask = cv2.imread(path.replace(path.split("/")[-2], "mask"), 0) > 1 - img = np.asarray(rgba)[:, :, :3] - fill_mask = ((mask & (img.sum(axis=2) == 0))).astype(np.uint8) - image = Image.fromarray( - cv2.inpaint(img * mask[..., None], fill_mask, 3, cv2.INPAINT_TELEA) - ) - mask = Image.fromarray(mask) - else: - mask = rgba.split()[-1] - image = rgba.convert("RGB") - - image = self.image_to_tensor(image) - mask = self.mask_to_tensor(mask) - image = (image * mask)[:channel] - - return (image * (0.5 - inv) * 2.0).float() - - def load_calib(self, data_dict): - calib_data = np.loadtxt(data_dict["calib_path"], dtype=float) - extrinsic = calib_data[:4, :4] - intrinsic = calib_data[4:8, :4] - calib_mat = np.matmul(intrinsic, extrinsic) - calib_mat = torch.from_numpy(calib_mat).float() - return {"calib": calib_mat} - - def load_mesh(self, data_dict): - - mesh_path = data_dict["mesh_path"] - scale = data_dict["scale"] - - scan_mesh = trimesh.load(mesh_path) - verts = scan_mesh.vertices - faces = scan_mesh.faces - - mesh = HoppeMesh(verts * scale, faces) - - return { - "mesh": mesh, - "verts": torch.as_tensor(verts * scale).float(), - "faces": torch.as_tensor(faces).long(), - } - - def load_smpl(self, data_dict): - - smpl_type = ("smplx" if ("smplx_path" in data_dict.keys()) else "smpl") - - smplx_verts = rescale_smpl(data_dict[f"{smpl_type}_path"], scale=100.0) - smplx_faces = torch.as_tensor(getattr(self.smplx, f"{smpl_type}_faces")).long() - smplx_verts = projection(smplx_verts, data_dict["calib"]).float() - - return_dict = { - "smpl_verts": smplx_verts, - "smpl_faces": smplx_faces, - } - - return return_dict - - def depth_to_voxel(self, data_dict): - - data_dict["depth_F"] = transforms.Resize(self.vol_res)(data_dict["depth_F"]) - data_dict["depth_B"] = 
transforms.Resize(self.vol_res)(data_dict["depth_B"]) - - depth_mask = (~torch.isnan(data_dict['depth_F'])) - depth_FB = torch.cat([data_dict['depth_F'], data_dict['depth_B']], dim=0) - depth_FB[:, ~depth_mask[0]] = 0. - - # Important: index_long = depth_value - 1 - index_z = (((depth_FB + 1.) * 0.5 * self.vol_res) - 1).clip(0, self.vol_res - - 1).permute(1, 2, 0) - index_z_ceil = torch.ceil(index_z).long() - index_z_floor = torch.floor(index_z).long() - index_z_frac = torch.frac(index_z) - - index_mask = index_z[..., 0] == torch.tensor(self.vol_res * 0.5 - 1).long() - voxels = F.one_hot(index_z_ceil[..., 0], self.vol_res) * index_z_frac[..., 0] + \ - F.one_hot(index_z_floor[..., 0], self.vol_res) * (1.0-index_z_frac[..., 0]) + \ - F.one_hot(index_z_ceil[..., 1], self.vol_res) * index_z_frac[..., 1]+ \ - F.one_hot(index_z_floor[..., 1], self.vol_res) * (1.0 - index_z_frac[..., 1]) - - voxels[index_mask] *= 0 - voxels = torch.flip(voxels, [2]).permute(2, 0, 1).float() #[x-2, y-0, z-1] - - return { - "depth_voxels": voxels.flip([ - 0, - ]).unsqueeze(0).to(self.device), - } - - def render_depth(self, verts, faces): - - # render optimized mesh (normal, T_normal, image [-1,1]) - self.render.load_meshes(verts, faces) - return self.render.get_image(type="depth") diff --git a/spaces/Yuliang/ECON/lib/pixielib/__init__.py b/spaces/Yuliang/ECON/lib/pixielib/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Yunshansongbai/SVC-Nahida/vdecoder/vdecoder/__init__.py b/spaces/Yunshansongbai/SVC-Nahida/vdecoder/vdecoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ZenXir/FreeVC/speaker_encoder/config.py b/spaces/ZenXir/FreeVC/speaker_encoder/config.py deleted file mode 100644 index 1c21312f3de971bfa008254c6035cebc09f05e4c..0000000000000000000000000000000000000000 --- a/spaces/ZenXir/FreeVC/speaker_encoder/config.py +++ /dev/null @@ -1,45 +0,0 @@ -librispeech_datasets = { - "train": { - "clean": ["LibriSpeech/train-clean-100", "LibriSpeech/train-clean-360"], - "other": ["LibriSpeech/train-other-500"] - }, - "test": { - "clean": ["LibriSpeech/test-clean"], - "other": ["LibriSpeech/test-other"] - }, - "dev": { - "clean": ["LibriSpeech/dev-clean"], - "other": ["LibriSpeech/dev-other"] - }, -} -libritts_datasets = { - "train": { - "clean": ["LibriTTS/train-clean-100", "LibriTTS/train-clean-360"], - "other": ["LibriTTS/train-other-500"] - }, - "test": { - "clean": ["LibriTTS/test-clean"], - "other": ["LibriTTS/test-other"] - }, - "dev": { - "clean": ["LibriTTS/dev-clean"], - "other": ["LibriTTS/dev-other"] - }, -} -voxceleb_datasets = { - "voxceleb1" : { - "train": ["VoxCeleb1/wav"], - "test": ["VoxCeleb1/test_wav"] - }, - "voxceleb2" : { - "train": ["VoxCeleb2/dev/aac"], - "test": ["VoxCeleb2/test_wav"] - } -} - -other_datasets = [ - "LJSpeech-1.1", - "VCTK-Corpus/wav48", -] - -anglophone_nationalites = ["australia", "canada", "ireland", "uk", "usa"] diff --git a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/models/common.py b/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/models/common.py deleted file mode 100644 index 7ac3a4a2967280a2ee9dbc1802c5f069615868ce..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/models/common.py +++ /dev/null @@ -1,779 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Common modules -""" - -import json -import math -import platform -import 
warnings -from collections import OrderedDict, namedtuple -from copy import copy -from pathlib import Path - -import cv2 -import numpy as np -import pandas as pd -import requests -import torch -import torch.nn as nn -from PIL import Image -from torch.cuda import amp - -from utils.dataloaders import exif_transpose, letterbox -from utils.general import (LOGGER, ROOT, Profile, check_requirements, check_suffix, check_version, colorstr, - increment_path, make_divisible, non_max_suppression, scale_coords, xywh2xyxy, xyxy2xywh, - yaml_load) -from utils.plots import Annotator, colors, save_one_box -from utils.torch_utils import copy_attr, smart_inference_mode - - -def autopad(k, p=None): # kernel, padding - # Pad to 'same' - if p is None: - p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad - return p - - -class Conv(nn.Module): - # Standard convolution - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def forward_fuse(self, x): - return self.act(self.conv(x)) - - -class DWConv(Conv): - # Depth-wise convolution class - def __init__(self, c1, c2, k=1, s=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act) - - -class DWConvTranspose2d(nn.ConvTranspose2d): - # Depth-wise transpose convolution class - def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0): # ch_in, ch_out, kernel, stride, padding, padding_out - super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2)) - - -class TransformerLayer(nn.Module): - # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance) - def __init__(self, c, num_heads): - super().__init__() - self.q = nn.Linear(c, c, bias=False) - self.k = nn.Linear(c, c, bias=False) - self.v = nn.Linear(c, c, bias=False) - self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads) - self.fc1 = nn.Linear(c, c, bias=False) - self.fc2 = nn.Linear(c, c, bias=False) - - def forward(self, x): - x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x - x = self.fc2(self.fc1(x)) + x - return x - - -class TransformerBlock(nn.Module): - # Vision Transformer https://arxiv.org/abs/2010.11929 - def __init__(self, c1, c2, num_heads, num_layers): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - self.linear = nn.Linear(c2, c2) # learnable position embedding - self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers))) - self.c2 = c2 - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - b, _, w, h = x.shape - p = x.flatten(2).permute(2, 0, 1) - return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h) - - -class Bottleneck(nn.Module): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2, 3, 1, g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class BottleneckCSP(nn.Module): - # CSP Bottleneck 
https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False) - self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False) - self.cv4 = Conv(2 * c_, c2, 1, 1) - self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3) - self.act = nn.SiLU() - self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1)))) - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class C3(nn.Module): - # CSP Bottleneck with 3 convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1) # optional act=FReLU(c2) - self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))) - - def forward(self, x): - return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1)) - - -class C3x(C3): - # C3 module with cross-convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = nn.Sequential(*(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n))) - - -class C3TR(C3): - # C3 module with TransformerBlock() - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = TransformerBlock(c_, c_, 4, n) - - -class C3SPP(C3): - # C3 module with SPP() - def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = SPP(c_, c_, k) - - -class C3Ghost(C3): - # C3 module with GhostBottleneck() - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n))) - - -class SPP(nn.Module): - # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729 - def __init__(self, c1, c2, k=(5, 9, 13)): - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - - def forward(self, x): - x = self.cv1(x) - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning - return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) - - -class SPPF(nn.Module): - # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher - def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13)) - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = 
Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * 4, c2, 1, 1) - self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) - - def forward(self, x): - x = self.cv1(x) - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning - y1 = self.m(x) - y2 = self.m(y1) - return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1)) - - -class Focus(nn.Module): - # Focus wh information into c-space - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = Conv(c1 * 4, c2, k, s, p, g, act) - # self.contract = Contract(gain=2) - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv(torch.cat((x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]), 1)) - # return self.conv(self.contract(x)) - - -class GhostConv(nn.Module): - # Ghost Convolution https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups - super().__init__() - c_ = c2 // 2 # hidden channels - self.cv1 = Conv(c1, c_, k, s, None, g, act) - self.cv2 = Conv(c_, c_, 5, 1, None, c_, act) - - def forward(self, x): - y = self.cv1(x) - return torch.cat((y, self.cv2(y)), 1) - - -class GhostBottleneck(nn.Module): - # Ghost Bottleneck https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride - super().__init__() - c_ = c2 // 2 - self.conv = nn.Sequential( - GhostConv(c1, c_, 1, 1), # pw - DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw - GhostConv(c_, c2, 1, 1, act=False)) # pw-linear - self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), Conv(c1, c2, 1, 1, - act=False)) if s == 2 else nn.Identity() - - def forward(self, x): - return self.conv(x) + self.shortcut(x) - - -class Contract(nn.Module): - # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - b, c, h, w = x.size() # assert (h / s == 0) and (W / s == 0), 'Indivisible gain' - s = self.gain - x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2) - x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40) - return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40) - - -class Expand(nn.Module): - # Expand channels into width-height, i.e. 
x(1,64,80,80) to x(1,16,160,160) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - b, c, h, w = x.size() # assert C / s ** 2 == 0, 'Indivisible gain' - s = self.gain - x = x.view(b, s, s, c // s ** 2, h, w) # x(1,2,2,16,80,80) - x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2) - return x.view(b, c // s ** 2, h * s, w * s) # x(1,16,160,160) - - -class Concat(nn.Module): - # Concatenate a list of tensors along dimension - def __init__(self, dimension=1): - super().__init__() - self.d = dimension - - def forward(self, x): - return torch.cat(x, self.d) - - -class DetectMultiBackend(nn.Module): - # YOLOv5 MultiBackend class for python inference on various backends - def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, data=None, fp16=False, fuse=True): - # Usage: - # PyTorch: weights = *.pt - # TorchScript: *.torchscript - # ONNX Runtime: *.onnx - # ONNX OpenCV DNN: *.onnx with --dnn - # OpenVINO: *.xml - # CoreML: *.mlmodel - # TensorRT: *.engine - # TensorFlow SavedModel: *_saved_model - # TensorFlow GraphDef: *.pb - # TensorFlow Lite: *.tflite - # TensorFlow Edge TPU: *_edgetpu.tflite - from models.experimental import attempt_download, attempt_load # scoped to avoid circular import - - super().__init__() - w = str(weights[0] if isinstance(weights, list) else weights) - pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs = self._model_type(w) # get backend - w = attempt_download(w) # download if not local - fp16 &= pt or jit or onnx or engine # FP16 - stride = 32 # default stride - - if pt: # PyTorch - model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse) - stride = max(int(model.stride.max()), 32) # model stride - names = model.module.names if hasattr(model, 'module') else model.names # get class names - model.half() if fp16 else model.float() - self.model = model # explicitly assign for to(), cpu(), cuda(), half() - elif jit: # TorchScript - LOGGER.info(f'Loading {w} for TorchScript inference...') - extra_files = {'config.txt': ''} # model metadata - model = torch.jit.load(w, _extra_files=extra_files) - model.half() if fp16 else model.float() - if extra_files['config.txt']: # load metadata dict - d = json.loads(extra_files['config.txt'], - object_hook=lambda d: {int(k) if k.isdigit() else k: v - for k, v in d.items()}) - stride, names = int(d['stride']), d['names'] - elif dnn: # ONNX OpenCV DNN - LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...') - check_requirements(('opencv-python>=4.5.4',)) - net = cv2.dnn.readNetFromONNX(w) - elif onnx: # ONNX Runtime - LOGGER.info(f'Loading {w} for ONNX Runtime inference...') - cuda = torch.cuda.is_available() and device.type != 'cpu' - check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime')) - import onnxruntime - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider'] - session = onnxruntime.InferenceSession(w, providers=providers) - output_names = [x.name for x in session.get_outputs()] - meta = session.get_modelmeta().custom_metadata_map # metadata - if 'stride' in meta: - stride, names = int(meta['stride']), eval(meta['names']) - elif xml: # OpenVINO - LOGGER.info(f'Loading {w} for OpenVINO inference...') - check_requirements(('openvino',)) # requires openvino-dev: https://pypi.org/project/openvino-dev/ - from openvino.runtime import Core, Layout, get_batch - ie = Core() - if not Path(w).is_file(): # if not *.xml 
- w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir - network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin')) - if network.get_parameters()[0].get_layout().empty: - network.get_parameters()[0].set_layout(Layout("NCHW")) - batch_dim = get_batch(network) - if batch_dim.is_static: - batch_size = batch_dim.get_length() - executable_network = ie.compile_model(network, device_name="CPU") # device_name="MYRIAD" for Intel NCS2 - output_layer = next(iter(executable_network.outputs)) - stride, names = self._load_metadata(Path(w).with_suffix('.yaml')) # load metadata - elif engine: # TensorRT - LOGGER.info(f'Loading {w} for TensorRT inference...') - import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download - check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=7.0.0 - if device.type == 'cpu': - device = torch.device('cuda:0') - Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr')) - logger = trt.Logger(trt.Logger.INFO) - with open(w, 'rb') as f, trt.Runtime(logger) as runtime: - model = runtime.deserialize_cuda_engine(f.read()) - context = model.create_execution_context() - bindings = OrderedDict() - fp16 = False # default updated below - dynamic = False - for index in range(model.num_bindings): - name = model.get_binding_name(index) - dtype = trt.nptype(model.get_binding_dtype(index)) - if model.binding_is_input(index): - if -1 in tuple(model.get_binding_shape(index)): # dynamic - dynamic = True - context.set_binding_shape(index, tuple(model.get_profile_shape(0, index)[2])) - if dtype == np.float16: - fp16 = True - shape = tuple(context.get_binding_shape(index)) - im = torch.from_numpy(np.empty(shape, dtype=dtype)).to(device) - bindings[name] = Binding(name, dtype, shape, im, int(im.data_ptr())) - binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items()) - batch_size = bindings['images'].shape[0] # if dynamic, this is instead max batch size - elif coreml: # CoreML - LOGGER.info(f'Loading {w} for CoreML inference...') - import coremltools as ct - model = ct.models.MLModel(w) - else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU) - if saved_model: # SavedModel - LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...') - import tensorflow as tf - keras = False # assume TF1 saved_model - model = tf.keras.models.load_model(w) if keras else tf.saved_model.load(w) - elif pb: # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt - LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...') - import tensorflow as tf - - def wrap_frozen_graph(gd, inputs, outputs): - x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), []) # wrapped - ge = x.graph.as_graph_element - return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs)) - - gd = tf.Graph().as_graph_def() # graph_def - with open(w, 'rb') as f: - gd.ParseFromString(f.read()) - frozen_func = wrap_frozen_graph(gd, inputs="x:0", outputs="Identity:0") - elif tflite or edgetpu: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python - try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu - from tflite_runtime.interpreter import Interpreter, load_delegate - except ImportError: - import tensorflow as tf - Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate, - if edgetpu: # Edge TPU https://coral.ai/software/#edgetpu-runtime - LOGGER.info(f'Loading {w} for 
TensorFlow Lite Edge TPU inference...') - delegate = { - 'Linux': 'libedgetpu.so.1', - 'Darwin': 'libedgetpu.1.dylib', - 'Windows': 'edgetpu.dll'}[platform.system()] - interpreter = Interpreter(model_path=w, experimental_delegates=[load_delegate(delegate)]) - else: # Lite - LOGGER.info(f'Loading {w} for TensorFlow Lite inference...') - interpreter = Interpreter(model_path=w) # load TFLite model - interpreter.allocate_tensors() # allocate - input_details = interpreter.get_input_details() # inputs - output_details = interpreter.get_output_details() # outputs - elif tfjs: - raise NotImplementedError('ERROR: YOLOv5 TF.js inference is not supported') - else: - raise NotImplementedError(f'ERROR: {w} is not a supported format') - - # class names - if 'names' not in locals(): - names = yaml_load(data)['names'] if data else {i: f'class{i}' for i in range(999)} - if names[0] == 'n01440764' and len(names) == 1000: # ImageNet - names = yaml_load(ROOT / 'data/ImageNet.yaml')['names'] # human-readable names - - self.__dict__.update(locals()) # assign all variables to self - - def forward(self, im, augment=False, visualize=False): - # YOLOv5 MultiBackend inference - b, ch, h, w = im.shape # batch, channel, height, width - if self.fp16 and im.dtype != torch.float16: - im = im.half() # to FP16 - - if self.pt: # PyTorch - y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im) - elif self.jit: # TorchScript - y = self.model(im) - elif self.dnn: # ONNX OpenCV DNN - im = im.cpu().numpy() # torch to numpy - self.net.setInput(im) - y = self.net.forward() - elif self.onnx: # ONNX Runtime - im = im.cpu().numpy() # torch to numpy - y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im}) - elif self.xml: # OpenVINO - im = im.cpu().numpy() # FP32 - y = self.executable_network([im])[self.output_layer] - elif self.engine: # TensorRT - if self.dynamic and im.shape != self.bindings['images'].shape: - i_in, i_out = (self.model.get_binding_index(x) for x in ('images', 'output')) - self.context.set_binding_shape(i_in, im.shape) # reshape if dynamic - self.bindings['images'] = self.bindings['images']._replace(shape=im.shape) - self.bindings['output'].data.resize_(tuple(self.context.get_binding_shape(i_out))) - s = self.bindings['images'].shape - assert im.shape == s, f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}" - self.binding_addrs['images'] = int(im.data_ptr()) - self.context.execute_v2(list(self.binding_addrs.values())) - y = self.bindings['output'].data - elif self.coreml: # CoreML - im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3) - im = Image.fromarray((im[0] * 255).astype('uint8')) - # im = im.resize((192, 320), Image.ANTIALIAS) - y = self.model.predict({'image': im}) # coordinates are xywh normalized - if 'confidence' in y: - box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels - conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float) - y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1) - else: - k = 'var_' + str(sorted(int(k.replace('var_', '')) for k in y)[-1]) # output key - y = y[k] # output - else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU) - im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3) - if self.saved_model: # SavedModel - y = (self.model(im, training=False) if self.keras else self.model(im)).numpy() - elif self.pb: # GraphDef - y = 
self.frozen_func(x=self.tf.constant(im)).numpy() - else: # Lite or Edge TPU - input, output = self.input_details[0], self.output_details[0] - int8 = input['dtype'] == np.uint8 # is TFLite quantized uint8 model - if int8: - scale, zero_point = input['quantization'] - im = (im / scale + zero_point).astype(np.uint8) # de-scale - self.interpreter.set_tensor(input['index'], im) - self.interpreter.invoke() - y = self.interpreter.get_tensor(output['index']) - if int8: - scale, zero_point = output['quantization'] - y = (y.astype(np.float32) - zero_point) * scale # re-scale - y[..., :4] *= [w, h, w, h] # xywh normalized to pixels - - if isinstance(y, (list, tuple)): - return self.from_numpy(y[0]) if len(y) == 1 else [self.from_numpy(x) for x in y] - else: - return self.from_numpy(y) - - def from_numpy(self, x): - return torch.from_numpy(x).to(self.device) if isinstance(x, np.ndarray) else x - - def warmup(self, imgsz=(1, 3, 640, 640)): - # Warmup model by running inference once - warmup_types = self.pt, self.jit, self.onnx, self.engine, self.saved_model, self.pb - if any(warmup_types) and self.device.type != 'cpu': - im = torch.empty(*imgsz, dtype=torch.half if self.fp16 else torch.float, device=self.device) # input - for _ in range(2 if self.jit else 1): # - self.forward(im) # warmup - - @staticmethod - def _model_type(p='path/to/model.pt'): - # Return model type from model path, i.e. path='path/to/model.onnx' -> type=onnx - from export import export_formats - suffixes = list(export_formats().Suffix) + ['.xml'] # export suffixes - check_suffix(p, suffixes) # checks - p = Path(p).name # eliminate trailing separators - pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, xml2 = (s in p for s in suffixes) - xml |= xml2 # *_openvino_model or *.xml - tflite &= not edgetpu # *.tflite - return pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs - - @staticmethod - def _load_metadata(f=Path('path/to/meta.yaml')): - # Load metadata from meta.yaml if it exists - if f.exists(): - d = yaml_load(f) - return d['stride'], d['names'] # assign stride, names - return None, None - - -class AutoShape(nn.Module): - # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS - conf = 0.25 # NMS confidence threshold - iou = 0.45 # NMS IoU threshold - agnostic = False # NMS class-agnostic - multi_label = False # NMS multiple labels per box - classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs - max_det = 1000 # maximum number of detections per image - amp = False # Automatic Mixed Precision (AMP) inference - - def __init__(self, model, verbose=True): - super().__init__() - if verbose: - LOGGER.info('Adding AutoShape... 
') - copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=()) # copy attributes - self.dmb = isinstance(model, DetectMultiBackend) # DetectMultiBackend() instance - self.pt = not self.dmb or model.pt # PyTorch model - self.model = model.eval() - if self.pt: - m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect() - m.inplace = False # Detect.inplace=False for safe multithread inference - - def _apply(self, fn): - # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - self = super()._apply(fn) - if self.pt: - m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect() - m.stride = fn(m.stride) - m.grid = list(map(fn, m.grid)) - if isinstance(m.anchor_grid, list): - m.anchor_grid = list(map(fn, m.anchor_grid)) - return self - - @smart_inference_mode() - def forward(self, ims, size=640, augment=False, profile=False): - # Inference from various sources. For size(height=640, width=1280), RGB images example inputs are: - # file: ims = 'data/images/zidane.jpg' # str or PosixPath - # URI: = 'https://ultralytics.com/images/zidane.jpg' - # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3) - # PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3) - # numpy: = np.zeros((640,1280,3)) # HWC - # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values) - # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images - - dt = (Profile(), Profile(), Profile()) - with dt[0]: - if isinstance(size, int): # expand - size = (size, size) - p = next(self.model.parameters()) if self.pt else torch.empty(1, device=self.model.device) # param - autocast = self.amp and (p.device.type != 'cpu') # Automatic Mixed Precision (AMP) inference - if isinstance(ims, torch.Tensor): # torch - with amp.autocast(autocast): - return self.model(ims.to(p.device).type_as(p), augment, profile) # inference - - # Pre-process - n, ims = (len(ims), list(ims)) if isinstance(ims, (list, tuple)) else (1, [ims]) # number, list of images - shape0, shape1, files = [], [], [] # image and inference shapes, filenames - for i, im in enumerate(ims): - f = f'image{i}' # filename - if isinstance(im, (str, Path)): # filename or uri - im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im - im = np.asarray(exif_transpose(im)) - elif isinstance(im, Image.Image): # PIL Image - im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f - files.append(Path(f).with_suffix('.jpg').name) - if im.shape[0] < 5: # image in CHW - im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) - im = im[..., :3] if im.ndim == 3 else cv2.cvtColor(im, cv2.COLOR_GRAY2BGR) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = max(size) / max(s) # gain - shape1.append([y * g for y in s]) - ims[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update - shape1 = [make_divisible(x, self.stride) for x in np.array(shape1).max(0)] if self.pt else size # inf shape - x = [letterbox(im, shape1, auto=False)[0] for im in ims] # pad - x = np.ascontiguousarray(np.array(x).transpose((0, 3, 1, 2))) # stack and BHWC to BCHW - x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32 - - with amp.autocast(autocast): - # Inference - with dt[1]: - y = self.model(x, augment, profile) # forward - - # Post-process - with dt[2]: - y = non_max_suppression(y if self.dmb else 
y[0], - self.conf, - self.iou, - self.classes, - self.agnostic, - self.multi_label, - max_det=self.max_det) # NMS - for i in range(n): - scale_coords(shape1, y[i][:, :4], shape0[i]) - - return Detections(ims, y, files, dt, self.names, x.shape) - - -class Detections: - # YOLOv5 detections class for inference results - def __init__(self, ims, pred, files, times=(0, 0, 0), names=None, shape=None): - super().__init__() - d = pred[0].device # device - gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in ims] # normalizations - self.ims = ims # list of images as numpy arrays - self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) - self.names = names # class names - self.files = files # image filenames - self.times = times # profiling times - self.xyxy = pred # xyxy pixels - self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels - self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized - self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized - self.n = len(self.pred) # number of images (batch size) - self.t = tuple(x.t / self.n * 1E3 for x in times) # timestamps (ms) - self.s = shape # inference BCHW shape - - def display(self, pprint=False, show=False, save=False, crop=False, render=False, labels=True, save_dir=Path('')): - crops = [] - for i, (im, pred) in enumerate(zip(self.ims, self.pred)): - s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string - if pred.shape[0]: - for c in pred[:, -1].unique(): - n = (pred[:, -1] == c).sum() # detections per class - s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string - if show or save or render or crop: - annotator = Annotator(im, example=str(self.names)) - for *box, conf, cls in reversed(pred): # xyxy, confidence, class - label = f'{self.names[int(cls)]} {conf:.2f}' - if crop: - file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None - crops.append({ - 'box': box, - 'conf': conf, - 'cls': cls, - 'label': label, - 'im': save_one_box(box, im, file=file, save=save)}) - else: # all others - annotator.box_label(box, label if labels else '', color=colors(cls)) - im = annotator.im - else: - s += '(no detections)' - - im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np - if pprint: - print(s.rstrip(', ')) - if show: - im.show(self.files[i]) # show - if save: - f = self.files[i] - im.save(save_dir / f) # save - if i == self.n - 1: - LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}") - if render: - self.ims[i] = np.asarray(im) - if crop: - if save: - LOGGER.info(f'Saved results to {save_dir}\n') - return crops - - def print(self): - self.display(pprint=True) # print results - print(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' % self.t) - - def show(self, labels=True): - self.display(show=True, labels=labels) # show results - - def save(self, labels=True, save_dir='runs/detect/exp'): - save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) # increment save_dir - self.display(save=True, labels=labels, save_dir=save_dir) # save results - - def crop(self, save=True, save_dir='runs/detect/exp'): - save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) if save else None - return self.display(crop=True, save=save, save_dir=save_dir) # crop results - - def render(self, labels=True): - self.display(render=True, labels=labels) # render results - return 
self.ims - - def pandas(self): - # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0]) - new = copy(self) # return copy - ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns - cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns - for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]): - a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update - setattr(new, k, [pd.DataFrame(x, columns=c) for x in a]) - return new - - def tolist(self): - # return a list of Detections objects, i.e. 'for result in results.tolist():' - r = range(self.n) # iterable - x = [Detections([self.ims[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r] - # for d in x: - # for k in ['ims', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']: - # setattr(d, k, getattr(d, k)[0]) # pop out of list - return x - - def __len__(self): - return self.n # override len(results) - - def __str__(self): - self.print() # override print(results) - return '' - - -class Classify(nn.Module): - # Classification head, i.e. x(b,c1,20,20) to x(b,c2) - def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - c_ = 1280 # efficientnet_b0 size - self.conv = Conv(c1, c_, k, s, autopad(k, p), g) - self.pool = nn.AdaptiveAvgPool2d(1) # to x(b,c_,1,1) - self.drop = nn.Dropout(p=0.0, inplace=True) - self.linear = nn.Linear(c_, c2) # to x(b,c2) - - def forward(self, x): - if isinstance(x, list): - x = torch.cat(x, 1) - return self.linear(self.drop(self.pool(self.conv(x)).flatten(1))) diff --git a/spaces/abdvl/datahub_qa_bot/docs/enrich-metadata.md b/spaces/abdvl/datahub_qa_bot/docs/enrich-metadata.md deleted file mode 100644 index fa2ad10ddfcb3d8c25cecdb56aaa834e0bd79def..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/enrich-metadata.md +++ /dev/null @@ -1,14 +0,0 @@ -# Enriching Metadata in DataHub - -Metadata Enrichment is a powerful way to annotate entities within DataHub, supercharging data discoverability and ensuring end-users have quick access to critical context for a given entity, such as: - -* **Ownership**: who is responsible/accountable? -* **Description**: what is the intended use case? What known caveats/edge cases exist? -* **Glossary Terms**: how is it relevant to core business metrics? -* **Domain**: how is it associated with organizational domains? - -This section contains detailed usage guides to help you begin enriching your data entities within DataHub. - -

- -
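
The guide above stays at the conceptual level, so here is a minimal sketch of how this kind of enrichment can also be applied programmatically rather than through the UI. It assumes the `acryl-datahub` Python package and a DataHub GMS reachable at `http://localhost:8080`; the dataset, user, and glossary-term URNs are placeholders for illustration, not values taken from this repository.

```python
from datahub.emitter.mce_builder import make_dataset_urn, make_user_urn
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.metadata.schema_classes import (
    AuditStampClass,
    GlossaryTermAssociationClass,
    GlossaryTermsClass,
    OwnerClass,
    OwnershipClass,
    OwnershipTypeClass,
)

# Assumed endpoint; point this at your own DataHub GMS instance.
emitter = DatahubRestEmitter(gms_server="http://localhost:8080")

# Placeholder dataset; platform/name/env are illustrative only.
dataset_urn = make_dataset_urn(platform="hive", name="fct_orders", env="PROD")

# Ownership: record who is responsible/accountable for the dataset.
ownership = OwnershipClass(
    owners=[OwnerClass(owner=make_user_urn("jdoe"), type=OwnershipTypeClass.DATAOWNER)]
)

# Glossary term: tie the dataset to a business concept (placeholder term URN).
terms = GlossaryTermsClass(
    terms=[GlossaryTermAssociationClass(urn="urn:li:glossaryTerm:Revenue")],
    auditStamp=AuditStampClass(time=0, actor=make_user_urn("jdoe")),
)

# Emit each aspect as an upsert against the dataset entity.
for aspect in (ownership, terms):
    emitter.emit(MetadataChangeProposalWrapper(entityUrn=dataset_urn, aspect=aspect))
```

Descriptions, tags, and domains follow the same pattern: build the corresponding aspect class and emit it against the entity URN.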

diff --git a/spaces/abhaskumarsinha/MinimalGPT-Ragdoll/subword/subword_nmt.py b/spaces/abhaskumarsinha/MinimalGPT-Ragdoll/subword/subword_nmt.py deleted file mode 100644 index 29104f4d8029524a80d6fa649b69a8acec0b8abc..0000000000000000000000000000000000000000 --- a/spaces/abhaskumarsinha/MinimalGPT-Ragdoll/subword/subword_nmt.py +++ /dev/null @@ -1,97 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -import io -import sys -import codecs -import argparse - -from .learn_bpe import learn_bpe -from .apply_bpe import BPE, read_vocabulary -from .get_vocab import get_vocab -from .learn_joint_bpe_and_vocab import learn_joint_bpe_and_vocab - -from .learn_bpe import create_parser as create_learn_bpe_parser -from .apply_bpe import create_parser as create_apply_bpe_parser -from .get_vocab import create_parser as create_get_vocab_parser -from .learn_joint_bpe_and_vocab import create_parser as create_learn_joint_bpe_and_vocab_parser - -# hack for python2/3 compatibility -argparse.open = io.open - -def main(): - parser = argparse.ArgumentParser( - formatter_class=argparse.RawTextHelpFormatter, - description="subword-nmt: unsupervised word segmentation for neural machine translation and text generation ") - subparsers = parser.add_subparsers(dest='command', - help="""command to run. Run one of the commands with '-h' for more info. - -learn-bpe: learn BPE merge operations on input text. -apply-bpe: apply given BPE operations to input text. -get-vocab: extract vocabulary and word frequencies from input text. -learn-joint-bpe-and-vocab: executes recommended workflow for joint BPE.""") - - learn_bpe_parser = create_learn_bpe_parser(subparsers) - apply_bpe_parser = create_apply_bpe_parser(subparsers) - get_vocab_parser = create_get_vocab_parser(subparsers) - learn_joint_bpe_and_vocab_parser = create_learn_joint_bpe_and_vocab_parser(subparsers) - - args = parser.parse_args() - - if args.command == 'learn-bpe': - # read/write files as UTF-8 - if args.input.name != '': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - - learn_bpe(args.input, args.output, args.symbols, args.min_frequency, args.verbose, - is_dict=args.dict_input, total_symbols=args.total_symbols) - elif args.command == 'apply-bpe': - # read/write files as UTF-8 - args.codes = codecs.open(args.codes.name, encoding='utf-8') - if args.input.name != '': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - if args.vocabulary: - args.vocabulary = codecs.open(args.vocabulary.name, encoding='utf-8') - - if args.vocabulary: - vocabulary = read_vocabulary(args.vocabulary, args.vocabulary_threshold) - else: - vocabulary = None - - if sys.version_info < (3, 0): - args.separator = args.separator.decode('UTF-8') - if args.glossaries: - args.glossaries = [g.decode('UTF-8') for g in args.glossaries] - - bpe = BPE(args.codes, args.merges, args.separator, vocabulary, args.glossaries) - - for line in args.input: - args.output.write(bpe.process_line(line, args.dropout)) - - elif args.command == 'get-vocab': - if args.input.name != '': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - get_vocab(args.input, args.output) - elif args.command == 'learn-joint-bpe-and-vocab': - learn_joint_bpe_and_vocab(args) - if sys.version_info < (3, 0): 
- args.separator = args.separator.decode('UTF-8') - else: - raise Exception('Invalid command provided') - - -# python 2/3 compatibility -if sys.version_info < (3, 0): - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin) -else: - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr.buffer) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout.buffer) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin.buffer) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/htc.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/htc.py deleted file mode 100644 index d9efdf420fa7373f7f1d116f8d97836d73b457bf..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/htc.py +++ /dev/null @@ -1,15 +0,0 @@ -from ..builder import DETECTORS -from .cascade_rcnn import CascadeRCNN - - -@DETECTORS.register_module() -class HybridTaskCascade(CascadeRCNN): - """Implementation of `HTC `_""" - - def __init__(self, **kwargs): - super(HybridTaskCascade, self).__init__(**kwargs) - - @property - def with_semantic(self): - """bool: whether the detector has a semantic head""" - return self.roi_head.with_semantic diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/losses/gaussian_focal_loss.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/losses/gaussian_focal_loss.py deleted file mode 100644 index e45506a38e8e3c187be8288d0b714cc1ee29cf27..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/losses/gaussian_focal_loss.py +++ /dev/null @@ -1,91 +0,0 @@ -import mmcv -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def gaussian_focal_loss(pred, gaussian_target, alpha=2.0, gamma=4.0): - """`Focal Loss `_ for targets in gaussian - distribution. - - Args: - pred (torch.Tensor): The prediction. - gaussian_target (torch.Tensor): The learning target of the prediction - in gaussian distribution. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 2.0. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 4.0. - """ - eps = 1e-12 - pos_weights = gaussian_target.eq(1) - neg_weights = (1 - gaussian_target).pow(gamma) - pos_loss = -(pred + eps).log() * (1 - pred).pow(alpha) * pos_weights - neg_loss = -(1 - pred + eps).log() * pred.pow(alpha) * neg_weights - return pos_loss + neg_loss - - -@LOSSES.register_module() -class GaussianFocalLoss(nn.Module): - """GaussianFocalLoss is a variant of focal loss. - - More details can be found in the `paper - `_ - Code is modified from `kp_utils.py - `_ # noqa: E501 - Please notice that the target in GaussianFocalLoss is a gaussian heatmap, - not 0/1 binary target. - - Args: - alpha (float): Power of prediction. - gamma (float): Power of target for negative samples. - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Loss weight of current loss. 
- """ - - def __init__(self, - alpha=2.0, - gamma=4.0, - reduction='mean', - loss_weight=1.0): - super(GaussianFocalLoss, self).__init__() - self.alpha = alpha - self.gamma = gamma - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction - in gaussian distribution. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_reg = self.loss_weight * gaussian_focal_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - reduction=reduction, - avg_factor=avg_factor) - return loss_reg diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/__init__.py deleted file mode 100644 index ca0a38ec42cd41fbd97e07589a13d1af46f47f2f..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -from .base_roi_head import BaseRoIHead -from .bbox_heads import (BBoxHead, ConvFCBBoxHead, DoubleConvFCBBoxHead, - SCNetBBoxHead, Shared2FCBBoxHead, - Shared4Conv1FCBBoxHead) -from .cascade_roi_head import CascadeRoIHead -from .double_roi_head import DoubleHeadRoIHead -from .dynamic_roi_head import DynamicRoIHead -from .grid_roi_head import GridRoIHead -from .htc_roi_head import HybridTaskCascadeRoIHead -from .mask_heads import (CoarseMaskHead, FCNMaskHead, FeatureRelayHead, - FusedSemanticHead, GlobalContextHead, GridHead, - HTCMaskHead, MaskIoUHead, MaskPointHead, - SCNetMaskHead, SCNetSemanticHead) -from .mask_scoring_roi_head import MaskScoringRoIHead -from .pisa_roi_head import PISARoIHead -from .point_rend_roi_head import PointRendRoIHead -from .roi_extractors import SingleRoIExtractor -from .scnet_roi_head import SCNetRoIHead -from .shared_heads import ResLayer -from .sparse_roi_head import SparseRoIHead -from .standard_roi_head import StandardRoIHead -from .trident_roi_head import TridentRoIHead - -__all__ = [ - 'BaseRoIHead', 'CascadeRoIHead', 'DoubleHeadRoIHead', 'MaskScoringRoIHead', - 'HybridTaskCascadeRoIHead', 'GridRoIHead', 'ResLayer', 'BBoxHead', - 'ConvFCBBoxHead', 'Shared2FCBBoxHead', 'StandardRoIHead', - 'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', 'FCNMaskHead', - 'HTCMaskHead', 'FusedSemanticHead', 'GridHead', 'MaskIoUHead', - 'SingleRoIExtractor', 'PISARoIHead', 'PointRendRoIHead', 'MaskPointHead', - 'CoarseMaskHead', 'DynamicRoIHead', 'SparseRoIHead', 'TridentRoIHead', - 'SCNetRoIHead', 'SCNetMaskHead', 'SCNetSemanticHead', 'SCNetBBoxHead', - 'FeatureRelayHead', 'GlobalContextHead' -] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/region_assigner.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/region_assigner.py deleted file mode 100644 index 
dd7d4326b31f0b637018159a31a68c0303afd06b..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/region_assigner.py +++ /dev/null @@ -1,221 +0,0 @@ -import torch - -from annotator.uniformer.mmdet.core import anchor_inside_flags -from ..builder import BBOX_ASSIGNERS -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -def calc_region(bbox, ratio, stride, featmap_size=None): - """Calculate region of the box defined by the ratio, the ratio is from the - center of the box to every edge.""" - # project bbox on the feature - f_bbox = bbox / stride - x1 = torch.round((1 - ratio) * f_bbox[0] + ratio * f_bbox[2]) - y1 = torch.round((1 - ratio) * f_bbox[1] + ratio * f_bbox[3]) - x2 = torch.round(ratio * f_bbox[0] + (1 - ratio) * f_bbox[2]) - y2 = torch.round(ratio * f_bbox[1] + (1 - ratio) * f_bbox[3]) - if featmap_size is not None: - x1 = x1.clamp(min=0, max=featmap_size[1]) - y1 = y1.clamp(min=0, max=featmap_size[0]) - x2 = x2.clamp(min=0, max=featmap_size[1]) - y2 = y2.clamp(min=0, max=featmap_size[0]) - return (x1, y1, x2, y2) - - -def anchor_ctr_inside_region_flags(anchors, stride, region): - """Get the flag indicate whether anchor centers are inside regions.""" - x1, y1, x2, y2 = region - f_anchors = anchors / stride - x = (f_anchors[:, 0] + f_anchors[:, 2]) * 0.5 - y = (f_anchors[:, 1] + f_anchors[:, 3]) * 0.5 - flags = (x >= x1) & (x <= x2) & (y >= y1) & (y <= y2) - return flags - - -@BBOX_ASSIGNERS.register_module() -class RegionAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - - -1: don't care - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - center_ratio: ratio of the region in the center of the bbox to - define positive sample. - ignore_ratio: ratio of the region to define ignore samples. - """ - - def __init__(self, center_ratio=0.2, ignore_ratio=0.5): - self.center_ratio = center_ratio - self.ignore_ratio = ignore_ratio - - def assign(self, - mlvl_anchors, - mlvl_valid_flags, - gt_bboxes, - img_meta, - featmap_sizes, - anchor_scale, - anchor_strides, - gt_bboxes_ignore=None, - gt_labels=None, - allowed_border=0): - """Assign gt to anchors. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, 0, or a positive number. -1 means don't care, - 0 means negative sample, positive number is the index (1-based) of - assigned gt. - The assignment is done in following steps, the order matters. - - 1. Assign every anchor to 0 (negative) - For each gt_bboxes: - 2. Compute ignore flags based on ignore_region then - assign -1 to anchors w.r.t. ignore flags - 3. Compute pos flags based on center_region then - assign gt_bboxes to anchors w.r.t. pos flags - 4. Compute ignore flags based on adjacent anchor lvl then - assign -1 to anchors w.r.t. ignore flags - 5. Assign anchor outside of image to -1 - - Args: - mlvl_anchors (list[Tensor]): Multi level anchors. - mlvl_valid_flags (list[Tensor]): Multi level valid flags. - gt_bboxes (Tensor): Ground truth bboxes of image - img_meta (dict): Meta info of image. - featmap_sizes (list[Tensor]): Feature mapsize each level - anchor_scale (int): Scale of the anchor. - anchor_strides (list[int]): Stride of the anchor. - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). 
- gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - allowed_border (int, optional): The border to allow the valid - anchor. Defaults to 0. - - Returns: - :obj:`AssignResult`: The assign result. - """ - if gt_bboxes_ignore is not None: - raise NotImplementedError - - num_gts = gt_bboxes.shape[0] - num_bboxes = sum(x.shape[0] for x in mlvl_anchors) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = gt_bboxes.new_zeros((num_bboxes, )) - assigned_gt_inds = gt_bboxes.new_zeros((num_bboxes, ), - dtype=torch.long) - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = gt_bboxes.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - num_lvls = len(mlvl_anchors) - r1 = (1 - self.center_ratio) / 2 - r2 = (1 - self.ignore_ratio) / 2 - - scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - min_anchor_size = scale.new_full( - (1, ), float(anchor_scale * anchor_strides[0])) - target_lvls = torch.floor( - torch.log2(scale) - torch.log2(min_anchor_size) + 0.5) - target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long() - - # 1. assign 0 (negative) by default - mlvl_assigned_gt_inds = [] - mlvl_ignore_flags = [] - for lvl in range(num_lvls): - h, w = featmap_sizes[lvl] - assert h * w == mlvl_anchors[lvl].shape[0] - assigned_gt_inds = gt_bboxes.new_full((h * w, ), - 0, - dtype=torch.long) - ignore_flags = torch.zeros_like(assigned_gt_inds) - mlvl_assigned_gt_inds.append(assigned_gt_inds) - mlvl_ignore_flags.append(ignore_flags) - - for gt_id in range(num_gts): - lvl = target_lvls[gt_id].item() - featmap_size = featmap_sizes[lvl] - stride = anchor_strides[lvl] - anchors = mlvl_anchors[lvl] - gt_bbox = gt_bboxes[gt_id, :4] - - # Compute regions - ignore_region = calc_region(gt_bbox, r2, stride, featmap_size) - ctr_region = calc_region(gt_bbox, r1, stride, featmap_size) - - # 2. Assign -1 to ignore flags - ignore_flags = anchor_ctr_inside_region_flags( - anchors, stride, ignore_region) - mlvl_assigned_gt_inds[lvl][ignore_flags] = -1 - - # 3. Assign gt_bboxes to pos flags - pos_flags = anchor_ctr_inside_region_flags(anchors, stride, - ctr_region) - mlvl_assigned_gt_inds[lvl][pos_flags] = gt_id + 1 - - # 4. Assign -1 to ignore adjacent lvl - if lvl > 0: - d_lvl = lvl - 1 - d_anchors = mlvl_anchors[d_lvl] - d_featmap_size = featmap_sizes[d_lvl] - d_stride = anchor_strides[d_lvl] - d_ignore_region = calc_region(gt_bbox, r2, d_stride, - d_featmap_size) - ignore_flags = anchor_ctr_inside_region_flags( - d_anchors, d_stride, d_ignore_region) - mlvl_ignore_flags[d_lvl][ignore_flags] = 1 - if lvl < num_lvls - 1: - u_lvl = lvl + 1 - u_anchors = mlvl_anchors[u_lvl] - u_featmap_size = featmap_sizes[u_lvl] - u_stride = anchor_strides[u_lvl] - u_ignore_region = calc_region(gt_bbox, r2, u_stride, - u_featmap_size) - ignore_flags = anchor_ctr_inside_region_flags( - u_anchors, u_stride, u_ignore_region) - mlvl_ignore_flags[u_lvl][ignore_flags] = 1 - - # 4. (cont.) Assign -1 to ignore adjacent lvl - for lvl in range(num_lvls): - ignore_flags = mlvl_ignore_flags[lvl] - mlvl_assigned_gt_inds[lvl][ignore_flags] = -1 - - # 5. 
Assign -1 to anchor outside of image - flat_assigned_gt_inds = torch.cat(mlvl_assigned_gt_inds) - flat_anchors = torch.cat(mlvl_anchors) - flat_valid_flags = torch.cat(mlvl_valid_flags) - assert (flat_assigned_gt_inds.shape[0] == flat_anchors.shape[0] == - flat_valid_flags.shape[0]) - inside_flags = anchor_inside_flags(flat_anchors, flat_valid_flags, - img_meta['img_shape'], - allowed_border) - outside_flags = ~inside_flags - flat_assigned_gt_inds[outside_flags] = -1 - - if gt_labels is not None: - assigned_labels = torch.zeros_like(flat_assigned_gt_inds) - pos_flags = assigned_gt_inds > 0 - assigned_labels[pos_flags] = gt_labels[ - flat_assigned_gt_inds[pos_flags] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, flat_assigned_gt_inds, None, labels=assigned_labels) diff --git a/spaces/adhisetiawan/anime-voice-generator/text/cleaners.py b/spaces/adhisetiawan/anime-voice-generator/text/cleaners.py deleted file mode 100644 index 68c9ad24d5a303b68a521fba2e8776c8cc867356..0000000000000000000000000000000000000000 --- a/spaces/adhisetiawan/anime-voice-generator/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if 
re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i`_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = True - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = False - return segment_info - - with PathManager.open(json_file) as f: - json_info = json.load(f) - - ret = [] - for ann in json_info["annotations"]: - image_id = ann["image_id"] - # TODO: currently we assume image and label has the same filename but - # different extension, and images have extension ".jpg" for COCO. Need - # to make image extension a user-provided argument if we extend this - # function to support other COCO-like datasets. - image_file = os.path.join(image_dir, os.path.splitext(ann["file_name"])[0] + ".jpg") - label_file = os.path.join(gt_dir, ann["file_name"]) - sem_label_file = os.path.join(semseg_dir, ann["file_name"]) - segments_info = [_convert_category_id(x, meta) for x in ann["segments_info"]] - ret.append( - { - "file_name": image_file, - "image_id": image_id, - "pan_seg_file_name": label_file, - "sem_seg_file_name": sem_label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile(ret[0]["file_name"]), ret[0]["file_name"] - assert PathManager.isfile(ret[0]["pan_seg_file_name"]), ret[0]["pan_seg_file_name"] - assert PathManager.isfile(ret[0]["sem_seg_file_name"]), ret[0]["sem_seg_file_name"] - return ret - - -def register_mapillary_vistas_panoptic( - name, metadata, image_root, panoptic_root, semantic_root, panoptic_json, instances_json=None -): - """ - Register a "standard" version of ADE20k panoptic segmentation dataset named `name`. - The dictionaries in this registered dataset follows detectron2's standard format. - Hence it's called "standard". - Args: - name (str): the name that identifies a dataset, - e.g. "ade20k_panoptic_train" - metadata (dict): extra metadata associated with this dataset. - image_root (str): directory which contains all the images - panoptic_root (str): directory which contains panoptic annotation images in COCO format - panoptic_json (str): path to the json panoptic annotation file in COCO format - sem_seg_root (none): not used, to be consistent with - `register_coco_panoptic_separated`. 
- instances_json (str): path to the json instance annotation file - """ - panoptic_name = name - DatasetCatalog.register( - panoptic_name, - lambda: load_mapillary_vistas_panoptic_json( - panoptic_json, image_root, panoptic_root, semantic_root, metadata - ), - ) - MetadataCatalog.get(panoptic_name).set( - panoptic_root=panoptic_root, - image_root=image_root, - panoptic_json=panoptic_json, - json_file=instances_json, - evaluator_type="mapillary_vistas_panoptic_seg", - ignore_label=65, # different from other datasets, Mapillary Vistas sets ignore_label to 65 - label_divisor=1000, - **metadata, - ) - - -_PREDEFINED_SPLITS_ADE20K_PANOPTIC = { - "mapillary_vistas_panoptic_train": ( - "mapillary_vistas/training/images", - "mapillary_vistas/training/panoptic", - "mapillary_vistas/training/panoptic/panoptic_2018.json", - "mapillary_vistas/training/labels", - ), - "mapillary_vistas_panoptic_val": ( - "mapillary_vistas/validation/images", - "mapillary_vistas/validation/panoptic", - "mapillary_vistas/validation/panoptic/panoptic_2018.json", - "mapillary_vistas/validation/labels", - ), -} - - -def get_metadata(): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in MAPILLARY_VISTAS_SEM_SEG_CATEGORIES] - thing_colors = [k["color"] for k in MAPILLARY_VISTAS_SEM_SEG_CATEGORIES] - stuff_classes = [k["name"] for k in MAPILLARY_VISTAS_SEM_SEG_CATEGORIES] - stuff_colors = [k["color"] for k in MAPILLARY_VISTAS_SEM_SEG_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # Convert category id for training: - # category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the linear - # softmax classifier. - thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for i, cat in enumerate(MAPILLARY_VISTAS_SEM_SEG_CATEGORIES): - if cat["isthing"]: - thing_dataset_id_to_contiguous_id[cat["id"]] = i - # else: - # stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - # in order to use sem_seg evaluator - stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - return meta - - -def register_all_mapillary_vistas_panoptic(root): - metadata = get_metadata() - for ( - prefix, - (image_root, panoptic_root, panoptic_json, semantic_root), - ) in _PREDEFINED_SPLITS_ADE20K_PANOPTIC.items(): - # The "standard" version of COCO panoptic segmentation dataset, - # e.g. 
used by Panoptic-DeepLab - register_mapillary_vistas_panoptic( - prefix, - metadata, - os.path.join(root, image_root), - os.path.join(root, panoptic_root), - os.path.join(root, semantic_root), - os.path.join(root, panoptic_json), - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_mapillary_vistas_panoptic(_root) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/vcs/subversion.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/vcs/subversion.py deleted file mode 100644 index 89c8754ce090a41f94ac9691098db6a9ec119930..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/vcs/subversion.py +++ /dev/null @@ -1,324 +0,0 @@ -import logging -import os -import re -from typing import List, Optional, Tuple - -from pip._internal.utils.misc import ( - HiddenText, - display_path, - is_console_interactive, - is_installable_dir, - split_auth_from_netloc, -) -from pip._internal.utils.subprocess import CommandArgs, make_command -from pip._internal.vcs.versioncontrol import ( - AuthInfo, - RemoteNotFoundError, - RevOptions, - VersionControl, - vcs, -) - -logger = logging.getLogger(__name__) - -_svn_xml_url_re = re.compile('url="([^"]+)"') -_svn_rev_re = re.compile(r'committed-rev="(\d+)"') -_svn_info_xml_rev_re = re.compile(r'\s*revision="(\d+)"') -_svn_info_xml_url_re = re.compile(r"(.*)") - - -class Subversion(VersionControl): - name = "svn" - dirname = ".svn" - repo_name = "checkout" - schemes = ("svn+ssh", "svn+http", "svn+https", "svn+svn", "svn+file") - - @classmethod - def should_add_vcs_url_prefix(cls, remote_url: str) -> bool: - return True - - @staticmethod - def get_base_rev_args(rev: str) -> List[str]: - return ["-r", rev] - - @classmethod - def get_revision(cls, location: str) -> str: - """ - Return the maximum revision for all files under a given location - """ - # Note: taken from setuptools.command.egg_info - revision = 0 - - for base, dirs, _ in os.walk(location): - if cls.dirname not in dirs: - dirs[:] = [] - continue # no sense walking uncontrolled subdirs - dirs.remove(cls.dirname) - entries_fn = os.path.join(base, cls.dirname, "entries") - if not os.path.exists(entries_fn): - # FIXME: should we warn? - continue - - dirurl, localrev = cls._get_svn_url_rev(base) - - if base == location: - assert dirurl is not None - base = dirurl + "/" # save the root url - elif not dirurl or not dirurl.startswith(base): - dirs[:] = [] - continue # not part of the same svn tree, skip it - revision = max(revision, localrev) - return str(revision) - - @classmethod - def get_netloc_and_auth( - cls, netloc: str, scheme: str - ) -> Tuple[str, Tuple[Optional[str], Optional[str]]]: - """ - This override allows the auth information to be passed to svn via the - --username and --password options instead of via the URL. - """ - if scheme == "ssh": - # The --username and --password options can't be used for - # svn+ssh URLs, so keep the auth information in the URL. 
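            # Illustrative sketch, not part of the original pip source: for the
            # non-ssh schemes handled below, the credentials are stripped from
            # the netloc so they can be passed as options instead, e.g.
            #   split_auth_from_netloc("user:pass@svn.example.com")
            #   -> ("svn.example.com", ("user", "pass"))
            # while for svn+ssh the netloc (with its embedded auth) is returned
            # unchanged by the base-class implementation.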
- return super().get_netloc_and_auth(netloc, scheme) - - return split_auth_from_netloc(netloc) - - @classmethod - def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]: - # hotfix the URL scheme after removing svn+ from svn+ssh:// readd it - url, rev, user_pass = super().get_url_rev_and_auth(url) - if url.startswith("ssh://"): - url = "svn+" + url - return url, rev, user_pass - - @staticmethod - def make_rev_args( - username: Optional[str], password: Optional[HiddenText] - ) -> CommandArgs: - extra_args: CommandArgs = [] - if username: - extra_args += ["--username", username] - if password: - extra_args += ["--password", password] - - return extra_args - - @classmethod - def get_remote_url(cls, location: str) -> str: - # In cases where the source is in a subdirectory, we have to look up in - # the location until we find a valid project root. - orig_location = location - while not is_installable_dir(location): - last_location = location - location = os.path.dirname(location) - if location == last_location: - # We've traversed up to the root of the filesystem without - # finding a Python project. - logger.warning( - "Could not find Python project for directory %s (tried all " - "parent directories)", - orig_location, - ) - raise RemoteNotFoundError - - url, _rev = cls._get_svn_url_rev(location) - if url is None: - raise RemoteNotFoundError - - return url - - @classmethod - def _get_svn_url_rev(cls, location: str) -> Tuple[Optional[str], int]: - from pip._internal.exceptions import InstallationError - - entries_path = os.path.join(location, cls.dirname, "entries") - if os.path.exists(entries_path): - with open(entries_path) as f: - data = f.read() - else: # subversion >= 1.7 does not have the 'entries' file - data = "" - - url = None - if data.startswith("8") or data.startswith("9") or data.startswith("10"): - entries = list(map(str.splitlines, data.split("\n\x0c\n"))) - del entries[0][0] # get rid of the '8' - url = entries[0][3] - revs = [int(d[9]) for d in entries if len(d) > 9 and d[9]] + [0] - elif data.startswith("= 1.7 - # Note that using get_remote_call_options is not necessary here - # because `svn info` is being run against a local directory. - # We don't need to worry about making sure interactive mode - # is being used to prompt for passwords, because passwords - # are only potentially needed for remote server requests. - xml = cls.run_command( - ["info", "--xml", location], - show_stdout=False, - stdout_only=True, - ) - match = _svn_info_xml_url_re.search(xml) - assert match is not None - url = match.group(1) - revs = [int(m.group(1)) for m in _svn_info_xml_rev_re.finditer(xml)] - except InstallationError: - url, revs = None, [] - - if revs: - rev = max(revs) - else: - rev = 0 - - return url, rev - - @classmethod - def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool: - """Always assume the versions don't match""" - return False - - def __init__(self, use_interactive: bool = None) -> None: - if use_interactive is None: - use_interactive = is_console_interactive() - self.use_interactive = use_interactive - - # This member is used to cache the fetched version of the current - # ``svn`` client. - # Special value definitions: - # None: Not evaluated yet. - # Empty tuple: Could not parse version. - self._vcs_version: Optional[Tuple[int, ...]] = None - - super().__init__() - - def call_vcs_version(self) -> Tuple[int, ...]: - """Query the version of the currently installed Subversion client. 
- - :return: A tuple containing the parts of the version information or - ``()`` if the version returned from ``svn`` could not be parsed. - :raises: BadCommand: If ``svn`` is not installed. - """ - # Example versions: - # svn, version 1.10.3 (r1842928) - # compiled Feb 25 2019, 14:20:39 on x86_64-apple-darwin17.0.0 - # svn, version 1.7.14 (r1542130) - # compiled Mar 28 2018, 08:49:13 on x86_64-pc-linux-gnu - # svn, version 1.12.0-SlikSvn (SlikSvn/1.12.0) - # compiled May 28 2019, 13:44:56 on x86_64-microsoft-windows6.2 - version_prefix = "svn, version " - version = self.run_command(["--version"], show_stdout=False, stdout_only=True) - if not version.startswith(version_prefix): - return () - - version = version[len(version_prefix) :].split()[0] - version_list = version.partition("-")[0].split(".") - try: - parsed_version = tuple(map(int, version_list)) - except ValueError: - return () - - return parsed_version - - def get_vcs_version(self) -> Tuple[int, ...]: - """Return the version of the currently installed Subversion client. - - If the version of the Subversion client has already been queried, - a cached value will be used. - - :return: A tuple containing the parts of the version information or - ``()`` if the version returned from ``svn`` could not be parsed. - :raises: BadCommand: If ``svn`` is not installed. - """ - if self._vcs_version is not None: - # Use cached version, if available. - # If parsing the version failed previously (empty tuple), - # do not attempt to parse it again. - return self._vcs_version - - vcs_version = self.call_vcs_version() - self._vcs_version = vcs_version - return vcs_version - - def get_remote_call_options(self) -> CommandArgs: - """Return options to be used on calls to Subversion that contact the server. - - These options are applicable for the following ``svn`` subcommands used - in this class. - - - checkout - - switch - - update - - :return: A list of command line arguments to pass to ``svn``. - """ - if not self.use_interactive: - # --non-interactive switch is available since Subversion 0.14.4. - # Subversion < 1.8 runs in interactive mode by default. - return ["--non-interactive"] - - svn_version = self.get_vcs_version() - # By default, Subversion >= 1.8 runs in non-interactive mode if - # stdin is not a TTY. Since that is how pip invokes SVN, in - # call_subprocess(), pip must pass --force-interactive to ensure - # the user can be prompted for a password, if required. - # SVN added the --force-interactive option in SVN 1.8. Since - # e.g. RHEL/CentOS 7, which is supported until 2024, ships with - # SVN 1.7, pip should continue to support SVN 1.7. Therefore, pip - # can't safely add the option if the SVN version is < 1.8 (or unknown). 
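        # Illustrative outcomes (a sketch, not part of the original pip source),
        # assuming self.use_interactive is True:
        #   svn_version == (1, 7, 14) -> []                        too old for the flag
        #   svn_version == (1, 8, 0)  -> ["--force-interactive"]
        #   svn_version == ()         -> []                        version could not be parsed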
- if svn_version >= (1, 8): - return ["--force-interactive"] - - return [] - - def fetch_new( - self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int - ) -> None: - rev_display = rev_options.to_display() - logger.info( - "Checking out %s%s to %s", - url, - rev_display, - display_path(dest), - ) - if verbosity <= 0: - flag = "--quiet" - else: - flag = "" - cmd_args = make_command( - "checkout", - flag, - self.get_remote_call_options(), - rev_options.to_args(), - url, - dest, - ) - self.run_command(cmd_args) - - def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - cmd_args = make_command( - "switch", - self.get_remote_call_options(), - rev_options.to_args(), - url, - dest, - ) - self.run_command(cmd_args) - - def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - cmd_args = make_command( - "update", - self.get_remote_call_options(), - rev_options.to_args(), - dest, - ) - self.run_command(cmd_args) - - -vcs.register(Subversion) diff --git a/spaces/allknowingroger/Image-Models-Test131/app.py b/spaces/allknowingroger/Image-Models-Test131/app.py deleted file mode 100644 index ce80fa691b29a6b0d338222bc661e2841067fed6..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test131/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "remg1997/dynabench-sdxl10", - "Yntec/BasilRemix", - "bellagio-ai/WalterNgo-face-vn-pictures-dreambooth-512-2k", - "KyriaAnnwyn/lora-trained-plu4-xl", - "FFusion/FFXL400", - "kear24100712/juansebasia", - "Robo0890/roboxl", - "tonyassi/tony-dreambooth-1-0", - "Yntec/lamettaRemix", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - 
run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_allocation.c b/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_allocation.c deleted file mode 100644 index 7e3298a539031b98129cf2a2af65431e39bfeda7..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_allocation.c +++ /dev/null @@ -1,242 +0,0 @@ -/* - * $Id$ - * Portable Audio I/O Library allocation group implementation - * memory allocation group for tracking allocation groups - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 1999-2002 Ross Bencina, Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-/** @file
- @ingroup common_src
-
- @brief Allocation Group implementation.
-*/
-
-
-#include "pa_allocation.h"
-#include "pa_util.h"
-
-
-/*
-    Maintain 3 singly linked lists...
-    linkBlocks: the buffers used to allocate the links
-    spareLinks: links available for use in the allocations list
-    allocations: the buffers currently allocated using PaUtil_ContextAllocateMemory()
-
-    Link block size is doubled every time new links are allocated.
-*/
-
-
-#define PA_INITIAL_LINK_COUNT_ 16
-
-struct PaUtilAllocationGroupLink
-{
-    struct PaUtilAllocationGroupLink *next;
-    void *buffer;
-};
-
-/*
-    Allocate a block of links. The first link will have its buffer member
-    pointing to the block, and its next member set to <nextBlock>. The remaining
-    links will have NULL buffer members, and each link will point to
-    the next link except the last, which will point to <nextSpare>.
-*/
-static struct PaUtilAllocationGroupLink *AllocateLinks( long count,
-        struct PaUtilAllocationGroupLink *nextBlock,
-        struct PaUtilAllocationGroupLink *nextSpare )
-{
-    struct PaUtilAllocationGroupLink *result;
-    int i;
-
-    result = (struct PaUtilAllocationGroupLink *)PaUtil_AllocateMemory(
-            sizeof(struct PaUtilAllocationGroupLink) * count );
-    if( result )
-    {
-        /* the block link */
-        result[0].buffer = result;
-        result[0].next = nextBlock;
-
-        /* the spare links */
-        for( i=1; i<count; ++i )
-        {
-            result[i].buffer = 0;
-            result[i].next = &result[i+1];
-        }
-        result[count-1].next = nextSpare;
-    }
-
-    return result;
-}
-
-
-PaUtilAllocationGroup* PaUtil_CreateAllocationGroup( void )
-{
-    PaUtilAllocationGroup* result = 0;
-    struct PaUtilAllocationGroupLink *links;
-
-    links = AllocateLinks( PA_INITIAL_LINK_COUNT_, 0, 0 );
-    if( links != 0 )
-    {
-        result = (PaUtilAllocationGroup*)PaUtil_AllocateMemory( sizeof(PaUtilAllocationGroup) );
-        if( result )
-        {
-            result->linkCount = PA_INITIAL_LINK_COUNT_;
-            result->linkBlocks = &links[0];
-            result->spareLinks = &links[1];
-            result->allocations = 0;
-        }
-        else
-        {
-            PaUtil_FreeMemory( links );
-        }
-    }
-
-    return result;
-}
-
-
-void PaUtil_DestroyAllocationGroup( PaUtilAllocationGroup* group )
-{
-    struct PaUtilAllocationGroupLink *current = group->linkBlocks;
-    struct PaUtilAllocationGroupLink *next;
-
-    while( current )
-    {
-        next = current->next;
-        PaUtil_FreeMemory( current->buffer );
-        current = next;
-    }
-
-    PaUtil_FreeMemory( group );
-}
-
-
-void* PaUtil_GroupAllocateMemory( PaUtilAllocationGroup* group, long size )
-{
-    struct PaUtilAllocationGroupLink *links, *link;
-    void *result = 0;
-
-    /* allocate more links if necessary */
-    if( !group->spareLinks )
-    {
-        /* double the link count on each block allocation */
-        links = AllocateLinks( group->linkCount, group->linkBlocks, group->spareLinks );
-        if( links )
-        {
-            group->linkCount += group->linkCount;
-            group->linkBlocks = &links[0];
-            group->spareLinks = &links[1];
-        }
-    }
-
-    if( group->spareLinks )
-    {
-        result = PaUtil_AllocateMemory( size );
-        if( result )
-        {
-            link = group->spareLinks;
-            group->spareLinks = link->next;
-
-            link->buffer = result;
-            link->next = group->allocations;
-
-            group->allocations = link;
-        }
-    }
-
-    return result;
-}
-
-
-void PaUtil_GroupFreeMemory( PaUtilAllocationGroup* group, void *buffer )
-{
-    struct PaUtilAllocationGroupLink *current = group->allocations;
-    struct PaUtilAllocationGroupLink *previous = 0;
-
-    if( buffer == 0 )
-        return;
-
-    /* find the right link and remove it */
-    while( current )
-    {
-        if( current->buffer == buffer )
-        {
-            if( previous )
-            {
-                previous->next = current->next;
-            }
-            else
-            {
-                group->allocations
= current->next; - } - - current->buffer = 0; - current->next = group->spareLinks; - group->spareLinks = current; - - break; - } - - previous = current; - current = current->next; - } - - PaUtil_FreeMemory( buffer ); /* free the memory whether we found it in the list or not */ -} - - -void PaUtil_FreeAllAllocations( PaUtilAllocationGroup* group ) -{ - struct PaUtilAllocationGroupLink *current = group->allocations; - struct PaUtilAllocationGroupLink *previous = 0; - - /* free all buffers in the allocations list */ - while( current ) - { - PaUtil_FreeMemory( current->buffer ); - current->buffer = 0; - - previous = current; - current = current->next; - } - - /* link the former allocations list onto the front of the spareLinks list */ - if( previous ) - { - previous->next = group->spareLinks; - group->spareLinks = group->allocations; - group->allocations = 0; - } -} diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/oss/recplay.c b/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/oss/recplay.c deleted file mode 100644 index fe6dfdb633fdacc420fcca87a00481c827a8550c..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/oss/recplay.c +++ /dev/null @@ -1,114 +0,0 @@ -/* - * recplay.c - * Phil Burk - * Minimal record and playback test. - * - */ -#include -#include -#include -#ifndef __STDC__ -/* #include */ -#endif /* __STDC__ */ -#include -#ifdef __STDC__ -#include -#else /* __STDC__ */ -#include -#endif /* __STDC__ */ -#include - -#define NUM_BYTES (64*1024) -#define BLOCK_SIZE (4*1024) - -#define AUDIO "/dev/dsp" - -char buffer[NUM_BYTES]; - -int audioDev = 0; - -main (int argc, char *argv[]) -{ - int numLeft; - char *ptr; - int num; - int samplesize; - - /********** RECORD ********************/ - /* Open audio device. */ - audioDev = open (AUDIO, O_RDONLY, 0); - if (audioDev == -1) - { - perror (AUDIO); - exit (-1); - } - - /* Set to 16 bit samples. */ - samplesize = 16; - ioctl(audioDev, SNDCTL_DSP_SAMPLESIZE, &samplesize); - if (samplesize != 16) - { - perror("Unable to set the sample size."); - exit(-1); - } - - /* Record in blocks */ - printf("Begin recording.\n"); - numLeft = NUM_BYTES; - ptr = buffer; - while( numLeft >= BLOCK_SIZE ) - { - if ( (num = read (audioDev, ptr, BLOCK_SIZE)) < 0 ) - { - perror (AUDIO); - exit (-1); - } - else - { - printf("Read %d bytes\n", num); - ptr += num; - numLeft -= num; - } - } - - close( audioDev ); - - /********** PLAYBACK ********************/ - /* Open audio device for writing. */ - audioDev = open (AUDIO, O_WRONLY, 0); - if (audioDev == -1) - { - perror (AUDIO); - exit (-1); - } - - /* Set to 16 bit samples. 
*/ - samplesize = 16; - ioctl(audioDev, SNDCTL_DSP_SAMPLESIZE, &samplesize); - if (samplesize != 16) - { - perror("Unable to set the sample size."); - exit(-1); - } - - /* Play in blocks */ - printf("Begin playing.\n"); - numLeft = NUM_BYTES; - ptr = buffer; - while( numLeft >= BLOCK_SIZE ) - { - if ( (num = write (audioDev, ptr, BLOCK_SIZE)) < 0 ) - { - perror (AUDIO); - exit (-1); - } - else - { - printf("Wrote %d bytes\n", num); - ptr += num; - numLeft -= num; - } - } - - close( audioDev ); -} diff --git a/spaces/amirDev/crowd-counting-p2p/flask_app.py b/spaces/amirDev/crowd-counting-p2p/flask_app.py deleted file mode 100644 index 9198622157788353048851be6c810d1b1ee203c2..0000000000000000000000000000000000000000 --- a/spaces/amirDev/crowd-counting-p2p/flask_app.py +++ /dev/null @@ -1,44 +0,0 @@ -from time import sleep -from flask import Flask, render_template, request, send_file -from werkzeug.utils import secure_filename -from werkzeug.datastructures import FileStorage -import cv2 -import os -import glob -import inference_flask as util -app = Flask(__name__) - -model, transform, device = util.load_model() - -@app.route('/') -def r_upload_file(): - return render_template('upload.html') - -@app.route('/image', methods = ['GET', 'POST']) -def image(): - global model, transform, device - for file in glob.glob('./*'): - if file.endswith('.jpg') or file.endswith('.png') or file.endswith('jpeg'): - os.remove(file) - if request.method == 'POST': - f = request.files['file'] - f.save(secure_filename(f.filename)) - # inference - util.image_inference(model, transform, device, secure_filename(f.filename)) - return send_file(secure_filename(f.filename)) - -@app.route('/video', methods = ['GET', 'POST']) -def video(): - global model, transform, device - for file in glob.glob('./*'): - if file.endswith('.mp4') or file.endswith('.avi'): - os.remove(file) - if request.method == 'POST': - f = request.files['file'] - f.save(secure_filename(f.filename)) - # inference - util.video_inference(model, transform, device, secure_filename(f.filename)) - return send_file(secure_filename(f.filename)+'.avi') - -if __name__ == '__main__': - app.run(debug = False) \ No newline at end of file diff --git a/spaces/amoghv/Fast-food-classifier/app.py b/spaces/amoghv/Fast-food-classifier/app.py deleted file mode 100644 index a526d21dac9ad197028b7728c0bad326056f2349..0000000000000000000000000000000000000000 --- a/spaces/amoghv/Fast-food-classifier/app.py +++ /dev/null @@ -1,19 +0,0 @@ -__all__ = ['learn', 'categories', 'classify_image', 'input_image', 'labels', 'intf'] - -from fastai.vision.all import * -import gradio -import nbdev - -learn = load_learner('fast-food-model.pkl') - -categories = ('Baked Potato', 'Burger', 'Crispy Chicken', 'Donut', 'Fries','Hot Dog','Pizza','Sandwich','Taco','Taquito') -def classify_image (image): - pred,idx,probs = learn.predict(image) - return dict(zip(categories, map(float, probs))) - -input_image = gradio.inputs.Image(shape=(192, 192)) -labels = gradio.outputs.Label() - -intf = gradio.Interface(fn = classify_image, inputs = input_image, outputs = labels) -intf.launch(inline = False) - diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/models/tacotron2.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/models/tacotron2.py deleted file mode 100644 index 71ab1eac37aa70900a795cf8aa3df7a9ce77c49c..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/models/tacotron2.py +++ /dev/null @@ -1,433 +0,0 @@ -# coding: utf-8 - -from typing import 
Dict, List, Union - -import torch -from torch import nn -from torch.cuda.amp.autocast_mode import autocast -from trainer.trainer_utils import get_optimizer, get_scheduler - -from TTS.tts.layers.tacotron.capacitron_layers import CapacitronVAE -from TTS.tts.layers.tacotron.gst_layers import GST -from TTS.tts.layers.tacotron.tacotron2 import Decoder, Encoder, Postnet -from TTS.tts.models.base_tacotron import BaseTacotron -from TTS.tts.utils.measures import alignment_diagonal_score -from TTS.tts.utils.speakers import SpeakerManager -from TTS.tts.utils.text.tokenizer import TTSTokenizer -from TTS.tts.utils.visual import plot_alignment, plot_spectrogram -from TTS.utils.capacitron_optimizer import CapacitronOptimizer - - -class Tacotron2(BaseTacotron): - """Tacotron2 model implementation inherited from :class:`TTS.tts.models.base_tacotron.BaseTacotron`. - - Paper:: - https://arxiv.org/abs/1712.05884 - - Paper abstract:: - This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. - The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character - embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize - timedomain waveforms from those spectrograms. Our model achieves a mean opinion score (MOS) of 4.53 comparable - to a MOS of 4.58 for professionally recorded speech. To validate our design choices, we present ablation - studies of key components of our system and evaluate the impact of using mel spectrograms as the input to - WaveNet instead of linguistic, duration, and F0 features. We further demonstrate that using a compact acoustic - intermediate representation enables significant simplification of the WaveNet architecture. - - Check :class:`TTS.tts.configs.tacotron2_config.Tacotron2Config` for model arguments. - - Args: - config (TacotronConfig): - Configuration for the Tacotron2 model. - speaker_manager (SpeakerManager): - Speaker manager for multi-speaker training. Uuse only for multi-speaker training. Defaults to None. 
- """ - - def __init__( - self, - config: "Tacotron2Config", - ap: "AudioProcessor" = None, - tokenizer: "TTSTokenizer" = None, - speaker_manager: SpeakerManager = None, - ): - super().__init__(config, ap, tokenizer, speaker_manager) - - self.decoder_output_dim = config.out_channels - - # pass all config fields to `self` - # for fewer code change - for key in config: - setattr(self, key, config[key]) - - # init multi-speaker layers - if self.use_speaker_embedding or self.use_d_vector_file: - self.init_multispeaker(config) - self.decoder_in_features += self.embedded_speaker_dim # add speaker embedding dim - - if self.use_gst: - self.decoder_in_features += self.gst.gst_embedding_dim - - if self.use_capacitron_vae: - self.decoder_in_features += self.capacitron_vae.capacitron_VAE_embedding_dim - - # embedding layer - self.embedding = nn.Embedding(self.num_chars, 512, padding_idx=0) - - # base model layers - self.encoder = Encoder(self.encoder_in_features) - - self.decoder = Decoder( - self.decoder_in_features, - self.decoder_output_dim, - self.r, - self.attention_type, - self.attention_win, - self.attention_norm, - self.prenet_type, - self.prenet_dropout, - self.use_forward_attn, - self.transition_agent, - self.forward_attn_mask, - self.location_attn, - self.attention_heads, - self.separate_stopnet, - self.max_decoder_steps, - ) - self.postnet = Postnet(self.out_channels) - - # setup prenet dropout - self.decoder.prenet.dropout_at_inference = self.prenet_dropout_at_inference - - # global style token layers - if self.gst and self.use_gst: - self.gst_layer = GST( - num_mel=self.decoder_output_dim, - num_heads=self.gst.gst_num_heads, - num_style_tokens=self.gst.gst_num_style_tokens, - gst_embedding_dim=self.gst.gst_embedding_dim, - ) - - # Capacitron VAE Layers - if self.capacitron_vae and self.use_capacitron_vae: - self.capacitron_vae_layer = CapacitronVAE( - num_mel=self.decoder_output_dim, - encoder_output_dim=self.encoder_in_features, - capacitron_VAE_embedding_dim=self.capacitron_vae.capacitron_VAE_embedding_dim, - speaker_embedding_dim=self.embedded_speaker_dim - if self.capacitron_vae.capacitron_use_speaker_embedding - else None, - text_summary_embedding_dim=self.capacitron_vae.capacitron_text_summary_embedding_dim - if self.capacitron_vae.capacitron_use_text_summary_embeddings - else None, - ) - - # backward pass decoder - if self.bidirectional_decoder: - self._init_backward_decoder() - # setup DDC - if self.double_decoder_consistency: - self.coarse_decoder = Decoder( - self.decoder_in_features, - self.decoder_output_dim, - self.ddc_r, - self.attention_type, - self.attention_win, - self.attention_norm, - self.prenet_type, - self.prenet_dropout, - self.use_forward_attn, - self.transition_agent, - self.forward_attn_mask, - self.location_attn, - self.attention_heads, - self.separate_stopnet, - self.max_decoder_steps, - ) - - @staticmethod - def shape_outputs(mel_outputs, mel_outputs_postnet, alignments): - """Final reshape of the model output tensors.""" - mel_outputs = mel_outputs.transpose(1, 2) - mel_outputs_postnet = mel_outputs_postnet.transpose(1, 2) - return mel_outputs, mel_outputs_postnet, alignments - - def forward( # pylint: disable=dangerous-default-value - self, text, text_lengths, mel_specs=None, mel_lengths=None, aux_input={"speaker_ids": None, "d_vectors": None} - ): - """Forward pass for training with Teacher Forcing. 
- - Shapes: - text: :math:`[B, T_in]` - text_lengths: :math:`[B]` - mel_specs: :math:`[B, T_out, C]` - mel_lengths: :math:`[B]` - aux_input: 'speaker_ids': :math:`[B, 1]` and 'd_vectors': :math:`[B, C]` - """ - aux_input = self._format_aux_input(aux_input) - outputs = {"alignments_backward": None, "decoder_outputs_backward": None} - # compute mask for padding - # B x T_in_max (boolean) - input_mask, output_mask = self.compute_masks(text_lengths, mel_lengths) - # B x D_embed x T_in_max - embedded_inputs = self.embedding(text).transpose(1, 2) - # B x T_in_max x D_en - encoder_outputs = self.encoder(embedded_inputs, text_lengths) - if self.gst and self.use_gst: - # B x gst_dim - encoder_outputs = self.compute_gst(encoder_outputs, mel_specs) - - if self.use_speaker_embedding or self.use_d_vector_file: - if not self.use_d_vector_file: - # B x 1 x speaker_embed_dim - embedded_speakers = self.speaker_embedding(aux_input["speaker_ids"])[:, None] - else: - # B x 1 x speaker_embed_dim - embedded_speakers = torch.unsqueeze(aux_input["d_vectors"], 1) - encoder_outputs = self._concat_speaker_embedding(encoder_outputs, embedded_speakers) - - # capacitron - if self.capacitron_vae and self.use_capacitron_vae: - # B x capacitron_VAE_embedding_dim - encoder_outputs, *capacitron_vae_outputs = self.compute_capacitron_VAE_embedding( - encoder_outputs, - reference_mel_info=[mel_specs, mel_lengths], - text_info=[embedded_inputs.transpose(1, 2), text_lengths] - if self.capacitron_vae.capacitron_use_text_summary_embeddings - else None, - speaker_embedding=embedded_speakers if self.capacitron_vae.capacitron_use_speaker_embedding else None, - ) - else: - capacitron_vae_outputs = None - - encoder_outputs = encoder_outputs * input_mask.unsqueeze(2).expand_as(encoder_outputs) - - # B x mel_dim x T_out -- B x T_out//r x T_in -- B x T_out//r - decoder_outputs, alignments, stop_tokens = self.decoder(encoder_outputs, mel_specs, input_mask) - # sequence masking - if mel_lengths is not None: - decoder_outputs = decoder_outputs * output_mask.unsqueeze(1).expand_as(decoder_outputs) - # B x mel_dim x T_out - postnet_outputs = self.postnet(decoder_outputs) - postnet_outputs = decoder_outputs + postnet_outputs - # sequence masking - if output_mask is not None: - postnet_outputs = postnet_outputs * output_mask.unsqueeze(1).expand_as(postnet_outputs) - # B x T_out x mel_dim -- B x T_out x mel_dim -- B x T_out//r x T_in - decoder_outputs, postnet_outputs, alignments = self.shape_outputs(decoder_outputs, postnet_outputs, alignments) - if self.bidirectional_decoder: - decoder_outputs_backward, alignments_backward = self._backward_pass(mel_specs, encoder_outputs, input_mask) - outputs["alignments_backward"] = alignments_backward - outputs["decoder_outputs_backward"] = decoder_outputs_backward - if self.double_decoder_consistency: - decoder_outputs_backward, alignments_backward = self._coarse_decoder_pass( - mel_specs, encoder_outputs, alignments, input_mask - ) - outputs["alignments_backward"] = alignments_backward - outputs["decoder_outputs_backward"] = decoder_outputs_backward - outputs.update( - { - "model_outputs": postnet_outputs, - "decoder_outputs": decoder_outputs, - "alignments": alignments, - "stop_tokens": stop_tokens, - "capacitron_vae_outputs": capacitron_vae_outputs, - } - ) - return outputs - - @torch.no_grad() - def inference(self, text, aux_input=None): - """Forward pass for inference with no Teacher-Forcing. 
- - Shapes: - text: :math:`[B, T_in]` - text_lengths: :math:`[B]` - """ - aux_input = self._format_aux_input(aux_input) - embedded_inputs = self.embedding(text).transpose(1, 2) - encoder_outputs = self.encoder.inference(embedded_inputs) - - if self.gst and self.use_gst: - # B x gst_dim - encoder_outputs = self.compute_gst(encoder_outputs, aux_input["style_mel"], aux_input["d_vectors"]) - - if self.capacitron_vae and self.use_capacitron_vae: - if aux_input["style_text"] is not None: - style_text_embedding = self.embedding(aux_input["style_text"]) - style_text_length = torch.tensor([style_text_embedding.size(1)], dtype=torch.int64).to( - encoder_outputs.device - ) # pylint: disable=not-callable - reference_mel_length = ( - torch.tensor([aux_input["style_mel"].size(1)], dtype=torch.int64).to(encoder_outputs.device) - if aux_input["style_mel"] is not None - else None - ) # pylint: disable=not-callable - # B x capacitron_VAE_embedding_dim - encoder_outputs, *_ = self.compute_capacitron_VAE_embedding( - encoder_outputs, - reference_mel_info=[aux_input["style_mel"], reference_mel_length] - if aux_input["style_mel"] is not None - else None, - text_info=[style_text_embedding, style_text_length] if aux_input["style_text"] is not None else None, - speaker_embedding=aux_input["d_vectors"] - if self.capacitron_vae.capacitron_use_speaker_embedding - else None, - ) - - if self.num_speakers > 1: - if not self.use_d_vector_file: - embedded_speakers = self.speaker_embedding(aux_input["speaker_ids"])[None] - # reshape embedded_speakers - if embedded_speakers.ndim == 1: - embedded_speakers = embedded_speakers[None, None, :] - elif embedded_speakers.ndim == 2: - embedded_speakers = embedded_speakers[None, :] - else: - embedded_speakers = aux_input["d_vectors"] - - encoder_outputs = self._concat_speaker_embedding(encoder_outputs, embedded_speakers) - - decoder_outputs, alignments, stop_tokens = self.decoder.inference(encoder_outputs) - postnet_outputs = self.postnet(decoder_outputs) - postnet_outputs = decoder_outputs + postnet_outputs - decoder_outputs, postnet_outputs, alignments = self.shape_outputs(decoder_outputs, postnet_outputs, alignments) - outputs = { - "model_outputs": postnet_outputs, - "decoder_outputs": decoder_outputs, - "alignments": alignments, - "stop_tokens": stop_tokens, - } - return outputs - - def before_backward_pass(self, loss_dict, optimizer) -> None: - # Extracting custom training specific operations for capacitron - # from the trainer - if self.use_capacitron_vae: - loss_dict["capacitron_vae_beta_loss"].backward() - optimizer.first_step() - - def train_step(self, batch: Dict, criterion: torch.nn.Module): - """A single training step. Forward pass and loss computation. - - Args: - batch ([Dict]): A dictionary of input tensors. - criterion ([type]): Callable criterion to compute model loss. 
- """ - text_input = batch["text_input"] - text_lengths = batch["text_lengths"] - mel_input = batch["mel_input"] - mel_lengths = batch["mel_lengths"] - stop_targets = batch["stop_targets"] - stop_target_lengths = batch["stop_target_lengths"] - speaker_ids = batch["speaker_ids"] - d_vectors = batch["d_vectors"] - - aux_input = {"speaker_ids": speaker_ids, "d_vectors": d_vectors} - outputs = self.forward(text_input, text_lengths, mel_input, mel_lengths, aux_input) - - # set the [alignment] lengths wrt reduction factor for guided attention - if mel_lengths.max() % self.decoder.r != 0: - alignment_lengths = ( - mel_lengths + (self.decoder.r - (mel_lengths.max() % self.decoder.r)) - ) // self.decoder.r - else: - alignment_lengths = mel_lengths // self.decoder.r - - # compute loss - with autocast(enabled=False): # use float32 for the criterion - loss_dict = criterion( - outputs["model_outputs"].float(), - outputs["decoder_outputs"].float(), - mel_input.float(), - None, - outputs["stop_tokens"].float(), - stop_targets.float(), - stop_target_lengths, - outputs["capacitron_vae_outputs"] if self.capacitron_vae else None, - mel_lengths, - None if outputs["decoder_outputs_backward"] is None else outputs["decoder_outputs_backward"].float(), - outputs["alignments"].float(), - alignment_lengths, - None if outputs["alignments_backward"] is None else outputs["alignments_backward"].float(), - text_lengths, - ) - - # compute alignment error (the lower the better ) - align_error = 1 - alignment_diagonal_score(outputs["alignments"]) - loss_dict["align_error"] = align_error - return outputs, loss_dict - - def get_optimizer(self) -> List: - if self.use_capacitron_vae: - return CapacitronOptimizer(self.config, self.named_parameters()) - return get_optimizer(self.config.optimizer, self.config.optimizer_params, self.config.lr, self) - - def get_scheduler(self, optimizer: object): - opt = optimizer.primary_optimizer if self.use_capacitron_vae else optimizer - return get_scheduler(self.config.lr_scheduler, self.config.lr_scheduler_params, opt) - - def before_gradient_clipping(self): - if self.use_capacitron_vae: - # Capacitron model specific gradient clipping - model_params_to_clip = [] - for name, param in self.named_parameters(): - if param.requires_grad: - if name != "capacitron_vae_layer.beta": - model_params_to_clip.append(param) - torch.nn.utils.clip_grad_norm_(model_params_to_clip, self.capacitron_vae.capacitron_grad_clip) - - def _create_logs(self, batch, outputs, ap): - """Create dashboard log information.""" - postnet_outputs = outputs["model_outputs"] - alignments = outputs["alignments"] - alignments_backward = outputs["alignments_backward"] - mel_input = batch["mel_input"] - - pred_spec = postnet_outputs[0].data.cpu().numpy() - gt_spec = mel_input[0].data.cpu().numpy() - align_img = alignments[0].data.cpu().numpy() - - figures = { - "prediction": plot_spectrogram(pred_spec, ap, output_fig=False), - "ground_truth": plot_spectrogram(gt_spec, ap, output_fig=False), - "alignment": plot_alignment(align_img, output_fig=False), - } - - if self.bidirectional_decoder or self.double_decoder_consistency: - figures["alignment_backward"] = plot_alignment(alignments_backward[0].data.cpu().numpy(), output_fig=False) - - # Sample audio - audio = ap.inv_melspectrogram(pred_spec.T) - return figures, {"audio": audio} - - def train_log( - self, batch: dict, outputs: dict, logger: "Logger", assets: dict, steps: int - ) -> None: # pylint: disable=no-self-use - """Log training progress.""" - figures, audios = 
self._create_logs(batch, outputs, self.ap) - logger.train_figures(steps, figures) - logger.train_audios(steps, audios, self.ap.sample_rate) - - def eval_step(self, batch: dict, criterion: nn.Module): - return self.train_step(batch, criterion) - - def eval_log(self, batch: dict, outputs: dict, logger: "Logger", assets: dict, steps: int) -> None: - figures, audios = self._create_logs(batch, outputs, self.ap) - logger.eval_figures(steps, figures) - logger.eval_audios(steps, audios, self.ap.sample_rate) - - @staticmethod - def init_from_config(config: "Tacotron2Config", samples: Union[List[List], List[Dict]] = None): - """Initiate model from config - - Args: - config (Tacotron2Config): Model config. - samples (Union[List[List], List[Dict]]): Training samples to parse speaker ids for training. - Defaults to None. - """ - from TTS.utils.audio import AudioProcessor - - ap = AudioProcessor.init_from_config(config) - tokenizer, new_config = TTSTokenizer.init_from_config(config) - speaker_manager = SpeakerManager.init_from_config(new_config, samples) - return Tacotron2(new_config, ap, tokenizer, speaker_manager) diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/layers/hifigan.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/layers/hifigan.py deleted file mode 100644 index f51200724887b04746a125b7d7c368e0315ce7da..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/layers/hifigan.py +++ /dev/null @@ -1,53 +0,0 @@ -from torch import nn - - -# pylint: disable=dangerous-default-value -class ResStack(nn.Module): - def __init__(self, kernel, channel, padding, dilations=[1, 3, 5]): - super().__init__() - resstack = [] - for dilation in dilations: - resstack += [ - nn.LeakyReLU(0.2), - nn.ReflectionPad1d(dilation), - nn.utils.weight_norm(nn.Conv1d(channel, channel, kernel_size=kernel, dilation=dilation)), - nn.LeakyReLU(0.2), - nn.ReflectionPad1d(padding), - nn.utils.weight_norm(nn.Conv1d(channel, channel, kernel_size=1)), - ] - self.resstack = nn.Sequential(*resstack) - - self.shortcut = nn.utils.weight_norm(nn.Conv1d(channel, channel, kernel_size=1)) - - def forward(self, x): - x1 = self.shortcut(x) - x2 = self.resstack(x) - return x1 + x2 - - def remove_weight_norm(self): - nn.utils.remove_weight_norm(self.shortcut) - nn.utils.remove_weight_norm(self.resstack[2]) - nn.utils.remove_weight_norm(self.resstack[5]) - nn.utils.remove_weight_norm(self.resstack[8]) - nn.utils.remove_weight_norm(self.resstack[11]) - nn.utils.remove_weight_norm(self.resstack[14]) - nn.utils.remove_weight_norm(self.resstack[17]) - - -class MRF(nn.Module): - def __init__(self, kernels, channel, dilations=[1, 3, 5]): # # pylint: disable=dangerous-default-value - super().__init__() - self.resblock1 = ResStack(kernels[0], channel, 0, dilations) - self.resblock2 = ResStack(kernels[1], channel, 6, dilations) - self.resblock3 = ResStack(kernels[2], channel, 12, dilations) - - def forward(self, x): - x1 = self.resblock1(x) - x2 = self.resblock2(x) - x3 = self.resblock3(x) - return x1 + x2 + x3 - - def remove_weight_norm(self): - self.resblock1.remove_weight_norm() - self.resblock2.remove_weight_norm() - self.resblock3.remove_weight_norm() diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/api_tests/test_python_api.py b/spaces/artificialguybr/video-dubbing/TTS/tests/api_tests/test_python_api.py deleted file mode 100644 index 2025fcd9c6b6558742779083efa96008d383cd80..0000000000000000000000000000000000000000 --- 
a/spaces/artificialguybr/video-dubbing/TTS/tests/api_tests/test_python_api.py +++ /dev/null @@ -1,113 +0,0 @@ -import os -import unittest - -from tests import get_tests_data_path, get_tests_output_path -from TTS.api import CS_API, TTS - -OUTPUT_PATH = os.path.join(get_tests_output_path(), "test_python_api.wav") -cloning_test_wav_path = os.path.join(get_tests_data_path(), "ljspeech/wavs/LJ001-0028.wav") - - -is_coqui_available = os.environ.get("COQUI_STUDIO_TOKEN") - - -if is_coqui_available: - - class CS_APITest(unittest.TestCase): - def test_speakers(self): - tts = CS_API() - self.assertGreater(len(tts.speakers), 1) - - def test_emotions(self): - tts = CS_API() - self.assertGreater(len(tts.emotions), 1) - - def test_list_calls(self): - tts = CS_API() - self.assertGreater(len(tts.list_voices()), 1) - self.assertGreater(len(tts.list_speakers()), 1) - self.assertGreater(len(tts.list_all_speakers()), 1) - self.assertGreater(len(tts.list_speakers_as_tts_models()), 1) - - def test_name_to_speaker(self): - tts = CS_API() - speaker_name = tts.list_speakers_as_tts_models()[0].split("/")[2] - speaker = tts.name_to_speaker(speaker_name) - self.assertEqual(speaker.name, speaker_name) - - def test_tts(self): - tts = CS_API() - wav, sr = tts.tts(text="This is a test.", speaker_name=tts.list_speakers()[0].name) - self.assertEqual(sr, 44100) - self.assertGreater(len(wav), 1) - - class TTSTest(unittest.TestCase): - def test_single_speaker_model(self): - tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False, gpu=False) - - error_raised = False - try: - tts.tts_to_file(text="Ich bin eine Testnachricht.", speaker="Thorsten", language="de") - except ValueError: - error_raised = True - - tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH) - - self.assertTrue(error_raised) - self.assertFalse(tts.is_multi_speaker) - self.assertFalse(tts.is_multi_lingual) - self.assertIsNone(tts.speakers) - self.assertIsNone(tts.languages) - - def test_studio_model(self): - tts = TTS(model_name="coqui_studio/en/Zacharie Aimilios/coqui_studio") - tts.tts_to_file(text="This is a test.") - - # check speed > 2.0 raises error - raised_error = False - try: - _ = tts.tts(text="This is a test.", speed=4.0, emotion="Sad") # should raise error with speed > 2.0 - except ValueError: - raised_error = True - self.assertTrue(raised_error) - - # check emotion is invalid - raised_error = False - try: - _ = tts.tts(text="This is a test.", speed=2.0, emotion="No Emo") # should raise error with speed > 2.0 - except ValueError: - raised_error = True - self.assertTrue(raised_error) - - # check valid call - wav = tts.tts(text="This is a test.", speed=2.0, emotion="Sad") - self.assertGreater(len(wav), 0) - - def test_fairseq_model(self): # pylint: disable=no-self-use - tts = TTS(model_name="tts_models/eng/fairseq/vits") - tts.tts_to_file(text="This is a test.") - - def test_multi_speaker_multi_lingual_model(self): - tts = TTS() - tts.load_tts_model_by_name(tts.models[0]) # YourTTS - tts.tts_to_file( - text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path=OUTPUT_PATH - ) - - self.assertTrue(tts.is_multi_speaker) - self.assertTrue(tts.is_multi_lingual) - self.assertGreater(len(tts.speakers), 1) - self.assertGreater(len(tts.languages), 1) - - def test_voice_cloning(self): # pylint: disable=no-self-use - tts = TTS() - tts.load_tts_model_by_name("tts_models/multilingual/multi-dataset/your_tts") - tts.tts_to_file("Hello world!", speaker_wav=cloning_test_wav_path, language="en", 
file_path=OUTPUT_PATH) - - def test_voice_conversion(self): # pylint: disable=no-self-use - tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False, gpu=False) - tts.voice_conversion_to_file( - source_wav=cloning_test_wav_path, - target_wav=cloning_test_wav_path, - file_path=OUTPUT_PATH, - ) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/ExtensionTypes.c b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/ExtensionTypes.c deleted file mode 100644 index dc187ab49e8c83d759d4a98b344b2273031ac7ad..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/ExtensionTypes.c +++ /dev/null @@ -1,301 +0,0 @@ -/////////////// PyType_Ready.proto /////////////// - -static int __Pyx_PyType_Ready(PyTypeObject *t); - -/////////////// PyType_Ready /////////////// - -// Wrapper around PyType_Ready() with some runtime checks and fixes -// to deal with multiple inheritance. -static int __Pyx_PyType_Ready(PyTypeObject *t) { - // Loop over all bases (except the first) and check that those - // really are heap types. Otherwise, it would not be safe to - // subclass them. - // - // We also check tp_dictoffset: it is unsafe to inherit - // tp_dictoffset from a base class because the object structures - // would not be compatible. So, if our extension type doesn't set - // tp_dictoffset (i.e. there is no __dict__ attribute in the object - // structure), we need to check that none of the base classes sets - // it either. - int r; - PyObject *bases = t->tp_bases; - if (bases) - { - Py_ssize_t i, n = PyTuple_GET_SIZE(bases); - for (i = 1; i < n; i++) /* Skip first base */ - { - PyObject *b0 = PyTuple_GET_ITEM(bases, i); - PyTypeObject *b; -#if PY_MAJOR_VERSION < 3 - /* Disallow old-style classes */ - if (PyClass_Check(b0)) - { - PyErr_Format(PyExc_TypeError, "base class '%.200s' is an old-style class", - PyString_AS_STRING(((PyClassObject*)b0)->cl_name)); - return -1; - } -#endif - b = (PyTypeObject*)b0; - if (!PyType_HasFeature(b, Py_TPFLAGS_HEAPTYPE)) - { - PyErr_Format(PyExc_TypeError, "base class '%.200s' is not a heap type", - b->tp_name); - return -1; - } - if (t->tp_dictoffset == 0 && b->tp_dictoffset) - { - PyErr_Format(PyExc_TypeError, - "extension type '%.200s' has no __dict__ slot, but base type '%.200s' has: " - "either add 'cdef dict __dict__' to the extension type " - "or add '__slots__ = [...]' to the base type", - t->tp_name, b->tp_name); - return -1; - } - } - } - -#if PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - { - // Make sure GC does not pick up our non-heap type as heap type with this hack! - // For details, see https://github.com/cython/cython/issues/3603 - PyObject *ret, *py_status; - int gc_was_enabled; - PyObject *gc = PyImport_Import(PYUNICODE("gc")); - if (unlikely(!gc)) return -1; - py_status = PyObject_CallMethodObjArgs(gc, PYUNICODE("isenabled"), NULL); - if (unlikely(!py_status)) { - Py_DECREF(gc); - return -1; - } - gc_was_enabled = __Pyx_PyObject_IsTrue(py_status); - Py_DECREF(py_status); - if (gc_was_enabled > 0) { - ret = PyObject_CallMethodObjArgs(gc, PYUNICODE("disable"), NULL); - if (unlikely(!ret)) { - Py_DECREF(gc); - return -1; - } - Py_DECREF(ret); - } else if (unlikely(gc_was_enabled == -1)) { - Py_DECREF(gc); - return -1; - } - - // As of https://bugs.python.org/issue22079 - // PyType_Ready enforces that all bases of a non-heap type are - // non-heap. 
We know that this is the case for the solid base but - // other bases are heap allocated and are kept alive through the - // tp_bases reference. - // Other than this check, the Py_TPFLAGS_HEAPTYPE flag is unused - // in PyType_Ready(). - t->tp_flags |= Py_TPFLAGS_HEAPTYPE; -#endif - - r = PyType_Ready(t); - -#if PY_VERSION_HEX >= 0x03050000 && !defined(PYSTON_MAJOR_VERSION) - t->tp_flags &= ~Py_TPFLAGS_HEAPTYPE; - - if (gc_was_enabled) { - PyObject *t, *v, *tb; - PyErr_Fetch(&t, &v, &tb); - ret = PyObject_CallMethodObjArgs(gc, PYUNICODE("enable"), NULL); - if (likely(ret || r == -1)) { - Py_XDECREF(ret); - // do not overwrite exceptions raised by PyType_Ready() above - PyErr_Restore(t, v, tb); - } else { - // PyType_Ready() succeeded, but gc.enable() failed. - Py_XDECREF(t); - Py_XDECREF(v); - Py_XDECREF(tb); - r = -1; - } - } - Py_DECREF(gc); - } -#endif - - return r; -} - -/////////////// CallNextTpDealloc.proto /////////////// - -static void __Pyx_call_next_tp_dealloc(PyObject* obj, destructor current_tp_dealloc); - -/////////////// CallNextTpDealloc /////////////// - -static void __Pyx_call_next_tp_dealloc(PyObject* obj, destructor current_tp_dealloc) { - PyTypeObject* type = Py_TYPE(obj); - /* try to find the first parent type that has a different tp_dealloc() function */ - while (type && type->tp_dealloc != current_tp_dealloc) - type = type->tp_base; - while (type && type->tp_dealloc == current_tp_dealloc) - type = type->tp_base; - if (type) - type->tp_dealloc(obj); -} - -/////////////// CallNextTpTraverse.proto /////////////// - -static int __Pyx_call_next_tp_traverse(PyObject* obj, visitproc v, void *a, traverseproc current_tp_traverse); - -/////////////// CallNextTpTraverse /////////////// - -static int __Pyx_call_next_tp_traverse(PyObject* obj, visitproc v, void *a, traverseproc current_tp_traverse) { - PyTypeObject* type = Py_TYPE(obj); - /* try to find the first parent type that has a different tp_traverse() function */ - while (type && type->tp_traverse != current_tp_traverse) - type = type->tp_base; - while (type && type->tp_traverse == current_tp_traverse) - type = type->tp_base; - if (type && type->tp_traverse) - return type->tp_traverse(obj, v, a); - // FIXME: really ignore? 
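    // A zero return reports success to the caller; with no base-class
    // tp_traverse left to delegate to, there is nothing further to visit.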
- return 0; -} - -/////////////// CallNextTpClear.proto /////////////// - -static void __Pyx_call_next_tp_clear(PyObject* obj, inquiry current_tp_dealloc); - -/////////////// CallNextTpClear /////////////// - -static void __Pyx_call_next_tp_clear(PyObject* obj, inquiry current_tp_clear) { - PyTypeObject* type = Py_TYPE(obj); - /* try to find the first parent type that has a different tp_clear() function */ - while (type && type->tp_clear != current_tp_clear) - type = type->tp_base; - while (type && type->tp_clear == current_tp_clear) - type = type->tp_base; - if (type && type->tp_clear) - type->tp_clear(obj); -} - -/////////////// SetupReduce.proto /////////////// - -static int __Pyx_setup_reduce(PyObject* type_obj); - -/////////////// SetupReduce /////////////// -//@requires: ObjectHandling.c::PyObjectGetAttrStrNoError -//@requires: ObjectHandling.c::PyObjectGetAttrStr -//@substitute: naming - -static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) { - int ret; - PyObject *name_attr; - - name_attr = __Pyx_PyObject_GetAttrStr(meth, PYIDENT("__name__")); - if (likely(name_attr)) { - ret = PyObject_RichCompareBool(name_attr, name, Py_EQ); - } else { - ret = -1; - } - - if (unlikely(ret < 0)) { - PyErr_Clear(); - ret = 0; - } - - Py_XDECREF(name_attr); - return ret; -} - -static int __Pyx_setup_reduce(PyObject* type_obj) { - int ret = 0; - PyObject *object_reduce = NULL; - PyObject *object_getstate = NULL; - PyObject *object_reduce_ex = NULL; - PyObject *reduce = NULL; - PyObject *reduce_ex = NULL; - PyObject *reduce_cython = NULL; - PyObject *setstate = NULL; - PyObject *setstate_cython = NULL; - PyObject *getstate = NULL; - -#if CYTHON_USE_PYTYPE_LOOKUP - getstate = _PyType_Lookup((PyTypeObject*)type_obj, PYIDENT("__getstate__")); -#else - getstate = __Pyx_PyObject_GetAttrStrNoError(type_obj, PYIDENT("__getstate__")); - if (!getstate && PyErr_Occurred()) { - goto __PYX_BAD; - } -#endif - if (getstate) { - // Python 3.11 introduces object.__getstate__. 
Because it's version-specific failure to find it should not be an error -#if CYTHON_USE_PYTYPE_LOOKUP - object_getstate = _PyType_Lookup(&PyBaseObject_Type, PYIDENT("__getstate__")); -#else - object_getstate = __Pyx_PyObject_GetAttrStrNoError((PyObject*)&PyBaseObject_Type, PYIDENT("__getstate__")); - if (!object_getstate && PyErr_Occurred()) { - goto __PYX_BAD; - } -#endif - if (object_getstate != getstate) { - goto __PYX_GOOD; - } - } - -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, PYIDENT("__reduce_ex__")); if (!object_reduce_ex) goto __PYX_BAD; -#else - object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, PYIDENT("__reduce_ex__")); if (!object_reduce_ex) goto __PYX_BAD; -#endif - - reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, PYIDENT("__reduce_ex__")); if (unlikely(!reduce_ex)) goto __PYX_BAD; - if (reduce_ex == object_reduce_ex) { - -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce = _PyType_Lookup(&PyBaseObject_Type, PYIDENT("__reduce__")); if (!object_reduce) goto __PYX_BAD; -#else - object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, PYIDENT("__reduce__")); if (!object_reduce) goto __PYX_BAD; -#endif - reduce = __Pyx_PyObject_GetAttrStr(type_obj, PYIDENT("__reduce__")); if (unlikely(!reduce)) goto __PYX_BAD; - - if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, PYIDENT("__reduce_cython__"))) { - reduce_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, PYIDENT("__reduce_cython__")); - if (likely(reduce_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, PYIDENT("__reduce__"), reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, PYIDENT("__reduce_cython__")); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (reduce == object_reduce || PyErr_Occurred()) { - // Ignore if we're done, i.e. if 'reduce' already has the right name and the original is gone. - // Otherwise: error. - goto __PYX_BAD; - } - - setstate = __Pyx_PyObject_GetAttrStr(type_obj, PYIDENT("__setstate__")); - if (!setstate) PyErr_Clear(); - if (!setstate || __Pyx_setup_reduce_is_named(setstate, PYIDENT("__setstate_cython__"))) { - setstate_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, PYIDENT("__setstate_cython__")); - if (likely(setstate_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, PYIDENT("__setstate__"), setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, PYIDENT("__setstate_cython__")); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (!setstate || PyErr_Occurred()) { - // Ignore if we're done, i.e. if 'setstate' already has the right name and the original is gone. - // Otherwise: error. 
- goto __PYX_BAD; - } - } - PyType_Modified((PyTypeObject*)type_obj); - } - } - goto __PYX_GOOD; - -__PYX_BAD: - if (!PyErr_Occurred()) - PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name); - ret = -1; -__PYX_GOOD: -#if !CYTHON_USE_PYTYPE_LOOKUP - Py_XDECREF(object_reduce); - Py_XDECREF(object_reduce_ex); - Py_XDECREF(object_getstate); - Py_XDECREF(getstate); -#endif - Py_XDECREF(reduce); - Py_XDECREF(reduce_ex); - Py_XDECREF(reduce_cython); - Py_XDECREF(setstate); - Py_XDECREF(setstate_cython); - return ret; -} diff --git a/spaces/asciicorp/hotel-chat/app.py b/spaces/asciicorp/hotel-chat/app.py deleted file mode 100644 index 4a1365c83c878226c5c12497d70822949ed2c7a3..0000000000000000000000000000000000000000 --- a/spaces/asciicorp/hotel-chat/app.py +++ /dev/null @@ -1,199 +0,0 @@ -import os -import streamlit as st -from streamlit_option_menu import option_menu -from streamlit_chat import message -from audio_recorder_streamlit import audio_recorder -import uuid -import requests -from gtts import gTTS - -from memory import memory -from main_chain import agent_chain, agent_chain_base, agent_chain_simple -from markup import hotelchat_app, hotelchat_app_hf - -from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan -from datasets import load_dataset -import torch -from datasets import load_dataset -import soundfile as sf - -import sqlite3 -import pandas as pd -import base64 - - -def autoplay_audio(file_path: str): - with open(file_path, "rb") as f: - data = f.read() - b64 = base64.b64encode(data).decode() - md = f""" - - """ - st.markdown( - md, - unsafe_allow_html=True, - ) - - -processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts") -model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts") -vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan") -embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation") - - -def robot_audio(text): - myobj = gTTS(text=text, lang="en", slow=False) - temp_robot_file = str(uuid.uuid4()) + ".wav" - myobj.save(temp_robot_file) - autoplay_audio(temp_robot_file) - - -def humanlike_audio(text): - speaker_embeddings = torch.tensor(embeddings_dataset[7490]["xvector"]).unsqueeze(0) - inputs = processor(text=text, return_tensors="pt") - speech = model.generate_speech( - inputs["input_ids"], speaker_embeddings, vocoder=vocoder - ) - temp_humanlike_file = str(uuid.uuid4()) + ".wav" - sf.write(temp_humanlike_file, speech.tolist(), 16000, "PCM_24") - - autoplay_audio(temp_humanlike_file) - - -def get_audio_transcription(filename): - with open(filename, "rb") as f: - data = f.read() - response = requests.post( - "https://api-inference.huggingface.co/models/openai/whisper-medium", - headers={"Authorization": "Bearer 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"}, - data=data, - ) - return response.json() - - -os.environ["OPENAI_API_KEY"] = "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" - -if "memory" not in st.session_state: - st.session_state["memory"] = "" -if "generated" not in st.session_state: - st.session_state["generated"] = [] -if "past" not in st.session_state: - st.session_state["past"] = [] - - -def tab1(): - st.header("Hotel Chatbot Demo") - col1, col2 = st.columns([1, 2]) - with col1: - st.image("image.jpg", use_column_width=True) - with col2: - st.markdown(hotelchat_app(), unsafe_allow_html=True) - st.markdown(hotelchat_app_hf(), unsafe_allow_html=True) - - -def tab2(): - st.header("Chat") - - 
chain_options = { - "QA Chatbot": agent_chain_simple, - "Chatbot with room reservation - experimental": agent_chain, - "Chatbot without room reservation": agent_chain_base, - } - chain_choice = st.sidebar.selectbox("Choose a chatbot:", list(chain_options.keys())) - selected_chain = chain_options[chain_choice] - - output_type = st.sidebar.selectbox( - "Output Type", ["Text", "Robotic Audio", "Human Like Audio"], index=0 - ) - input_type = st.sidebar.selectbox("Input Type", ["Text", "Audio"], index=0) - if input_type == "Text": - user_input = st.text_input("You: ") - else: - audio_bytes = audio_recorder() - if audio_bytes: - temp_rec_file = str(uuid.uuid4()) + ".flac" - with open(temp_rec_file, "wb") as fp: - fp.write(audio_bytes) - transcription = get_audio_transcription(temp_rec_file).get("text", "") - if transcription: - user_input = transcription - else: - st.info("Sorry, I didn't get that.") - - if st.button("Submit"): - with st.spinner("Thinking..."): - output = selected_chain.run(input=user_input) - if output_type == "Robotic Audio": - robot_audio(output) - elif output_type == "Human Like Audio": - humanlike_audio(output) - - st.session_state.past.append(user_input) - st.session_state.generated.append(output) - st.session_state["memory"] += memory.buffer - - if st.session_state["generated"]: - for i in range(len(st.session_state["generated"]) - 1, -1, -1): - message(st.session_state["generated"][i], key=str(i)) - message(st.session_state["past"][i], is_user=True, key=str(i) + "_user") - - -def tab3(): - def get_booked_rooms(): - conn = sqlite3.connect("hotel.db") - cursor = conn.cursor() - - booked_rooms = cursor.execute( - """SELECT customer_name, room_number, arrival_date, departure_date, room_type - FROM customer_info""" - ).fetchall() - - conn.close() - - return booked_rooms - - booked_rooms = get_booked_rooms() - - if not booked_rooms: - st.write("No rooms have been booked yet.") - else: - st.subheader(f"Total number of booked rooms: {len(booked_rooms)}") - - columns = [ - "Customer Name", - "Room Number", - "Room Type", - "Arrival Date", - "Departure Date", - ] - data = [] - for booking in booked_rooms: - data.append([booking[0], booking[1], booking[4], booking[2], booking[3]]) - df = pd.DataFrame(data, columns=columns) - - st.dataframe(df.style.highlight_max(axis=0)) - - -def main(): - st.set_page_config( - page_title="Hotel Chatbot demo", page_icon=":memo:", layout="wide" - ) - tabs = ["Intro", "Chat", "Dashboard"] - - with st.sidebar: - current_tab = option_menu("Select a Tab", tabs, menu_icon="cast") - - tab_functions = { - "Intro": tab1, - "Chat": tab2, - "Dashboard": tab3, - } - - if current_tab in tab_functions: - tab_functions[current_tab]() - - -if __name__ == "__main__": - main() diff --git a/spaces/ashercn97/AsherTesting/docs/System-requirements.md b/spaces/ashercn97/AsherTesting/docs/System-requirements.md deleted file mode 100644 index 3a88416d34ad7c8babd90a81db902e95288a8197..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/docs/System-requirements.md +++ /dev/null @@ -1,42 +0,0 @@ -These are the VRAM and RAM requirements (in MiB) to run some examples of models **in 16-bit (default) precision**: - -| model | VRAM (GPU) | RAM | -|:-----------------------|-------------:|--------:| -| arxiv_ai_gpt2 | 1512.37 | 5824.2 | -| blenderbot-1B-distill | 2441.75 | 4425.91 | -| opt-1.3b | 2509.61 | 4427.79 | -| gpt-neo-1.3b | 2605.27 | 5851.58 | -| opt-2.7b | 5058.05 | 4863.95 | -| gpt4chan_model_float16 | 11653.7 | 4437.71 | -| gpt-j-6B | 11653.7 | 
5633.79 | -| galactica-6.7b | 12697.9 | 4429.89 | -| opt-6.7b | 12700 | 4368.66 | -| bloomz-7b1-p3 | 13483.1 | 4470.34 | - -#### GPU mode with 8-bit precision - -Allows you to load models that would not normally fit into your GPU. Enabled by default for 13b and 20b models in this web UI. - -| model | VRAM (GPU) | RAM | -|:---------------|-------------:|--------:| -| opt-13b | 12528.1 | 1152.39 | -| gpt-neox-20b | 20384 | 2291.7 | - -#### CPU mode (32-bit precision) - -A lot slower, but does not require a GPU. - -On my i5-12400F, 6B models take around 10-20 seconds to respond in chat mode, and around 5 minutes to generate a 200 tokens completion. - -| model | RAM | -|:-----------------------|---------:| -| arxiv_ai_gpt2 | 4430.82 | -| gpt-neo-1.3b | 6089.31 | -| opt-1.3b | 8411.12 | -| blenderbot-1B-distill | 8508.16 | -| opt-2.7b | 14969.3 | -| bloomz-7b1-p3 | 21371.2 | -| gpt-j-6B | 24200.3 | -| gpt4chan_model | 24246.3 | -| galactica-6.7b | 26561.4 | -| opt-6.7b | 29596.6 | diff --git a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/diffusionmodules/__init__.py b/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/diffusionmodules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/awacke1/Dice-Roll-Fractals-STEM-Math/app_backup.py b/spaces/awacke1/Dice-Roll-Fractals-STEM-Math/app_backup.py deleted file mode 100644 index 89d24593239f26c935aa005692685819ce743fa4..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Dice-Roll-Fractals-STEM-Math/app_backup.py +++ /dev/null @@ -1,52 +0,0 @@ -import streamlit as st -import numpy as np -import pandas as pd -import plotly.graph_objects as go -from datetime import datetime -from base64 import b64encode - -EMOJI_LIST = {4: "🎂", 6: "🍀", 8: "🍄", 10: "🍁", 12: "🍂", 20: "🍃", 50: "🍒", 100: "🌟"} -DICE_TYPES = [4, 6, 8, 10, 12, 20, 50, 100] -DEFAULT_ROLLS = 3 - -def roll_dice(num_rolls, dice_type): - rolls = np.random.randint(1, dice_type + 1, size=num_rolls) - return rolls - -def plot_tokens(health_tokens, coin_tokens): - fig = go.Figure() - fig.add_trace(go.Scatter(x=list(range(1, len(health_tokens) + 1)), y=health_tokens, name="💖 Health")) - fig.add_trace(go.Scatter(x=list(range(1, len(coin_tokens) + 1)), y=coin_tokens, name="💰 Coins")) - fig.update_layout(title="Token Accumulation", xaxis_title="Rolls", yaxis_title="Tokens") - st.plotly_chart(fig) - -st.title("🎲 Dice Rolling Game") -username = st.text_input("👤 Enter your username:") -num_rolls = st.slider("🔢 Choose the number of rolls:", 1, 100, DEFAULT_ROLLS) - -history = {"health_tokens": [0], "coin_tokens": [0]} -for dice_type in DICE_TYPES: - rolls = roll_dice(num_rolls, dice_type) - highest_rolls = sum(roll == dice_type for roll in rolls) - coin_tokens_added = 0 - dice_results = [f"{EMOJI_LIST[dice_type]} {roll}" for roll in rolls] - st.write(f"🎲 Results for {dice_type}-sided dice: {' | '.join(dice_results)}") - for roll in rolls: - if roll == dice_type: - st.write(f"🎉 Congratulations! You rolled the {EMOJI_LIST[dice_type]} highest value! 💰 Adding 3 coins.") - coin_tokens_added += 3 - if roll == max(rolls): - st.write(f"🎉 Congratulations! You rolled the {EMOJI_LIST[dice_type]} maximum value! 
💖 Adding 10 health tokens.") - if dice_type == 100: - history["health_tokens"].append(history["health_tokens"][-1] + 10) - history[f"{dice_type}-sided dice high rolls"] = highest_rolls - history["roll_history"] = {**history.get("roll_history", {}), dice_type: rolls} - history["coin_tokens"].append(history["coin_tokens"][-1] + coin_tokens_added) - -st.write("💰💖 Token Accumulation:") -plot_tokens(history["health_tokens"], history["coin_tokens"]) -df = pd.concat([pd.DataFrame(history["roll_history"]), pd.DataFrame(history["health_tokens"], columns=["Health Tokens"]), pd.DataFrame(history["coin_tokens"], columns=["Coin Tokens"])], axis=1) -timestamp = datetime.now().strftime("%m-%d-%Y-%H-%M-%S") -filename = f"{username}_{timestamp}.csv" -df.to_csv(filename, index=False) -st.markdown(f'Download CSV File', unsafe_allow_html=True) diff --git a/spaces/awacke1/DockerGoFlanT5/main.go b/spaces/awacke1/DockerGoFlanT5/main.go deleted file mode 100644 index cf4629d11369180620cb9c74a3503bf2296e8b48..0000000000000000000000000000000000000000 --- a/spaces/awacke1/DockerGoFlanT5/main.go +++ /dev/null @@ -1,17 +0,0 @@ -package main - -import ( - "fmt" - "net/http" - "net/url" -) - -func main() { - http.HandleFunc("/", HelloServer) - http.ListenAndServe(":8080", nil) -} - -func HelloServer(w http.ResponseWriter, r *http.Request) { - m, _ := url.ParseQuery(r.URL.RawQuery) - fmt.Fprintf(w, "Hi to Go Golang https://en.wikipedia.org/wiki/Go_(programming_language) , %s!", m["q"]) -} \ No newline at end of file diff --git a/spaces/awacke1/Embedding-Iframe-HTML5-to-Gradio/index.html b/spaces/awacke1/Embedding-Iframe-HTML5-to-Gradio/index.html deleted file mode 100644 index 9fb7772a6edf86474c47c916dce1aa66136dc72b..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Embedding-Iframe-HTML5-to-Gradio/index.html +++ /dev/null @@ -1,61 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/spaces/awacke1/Slot-Machine-HTML5/index.html b/spaces/awacke1/Slot-Machine-HTML5/index.html deleted file mode 100644 index 5145c165b351bf8de593651f21138f691fe17dd3..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Slot-Machine-HTML5/index.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - - - Emoji Slot Machine - - - -

Emoji Slot Machine

-
-
🍎
-
🍌
-
🍒
-
🍇
-
🍉
-
-
-
🍍
-
🍓
-
🥑
-
🌽
-
🍔
-
-
-
🍟
-
🍕
-
🍩
-
🍪
-
🌮
-
-
-
🍣
-
🍦
-
🥗
-
🥪
-
🍱
-
- - -
Balance: $10.00
-
- - - - - diff --git a/spaces/awacke1/Text-to-Image-stabilityai-stable-diffusion-2-1/app.py b/spaces/awacke1/Text-to-Image-stabilityai-stable-diffusion-2-1/app.py deleted file mode 100644 index 0160420876923d89f2ab5fccb9f4d13725e29972..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Text-to-Image-stabilityai-stable-diffusion-2-1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-2-1").launch() \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/@tweenjs/tween.js/benchmarks/additionWithStart.js b/spaces/banana-projects/web3d/node_modules/@tweenjs/tween.js/benchmarks/additionWithStart.js deleted file mode 100644 index 576ca92f8465976414a645e8fe89b218b0f6a87d..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/@tweenjs/tween.js/benchmarks/additionWithStart.js +++ /dev/null @@ -1,9 +0,0 @@ -function additionWithStart() { - var numAdditions = 1e4; - - for (var i = 0; i < numAdditions; ++i) { - var currentTween = new TWEEN.Tween({a: 0.0}); - currentTween.to({a: 1.0}, 1.0); - currentTween.start(); - } -} \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/SavePass.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/SavePass.js deleted file mode 100644 index 2eb319c71a97ad2dc6bb4f88bbcdd46118405cda..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/postprocessing/SavePass.js +++ /dev/null @@ -1,59 +0,0 @@ -/** - * @author alteredq / http://alteredqualia.com/ - */ - -THREE.SavePass = function ( renderTarget ) { - - THREE.Pass.call( this ); - - if ( THREE.CopyShader === undefined ) - console.error( "THREE.SavePass relies on THREE.CopyShader" ); - - var shader = THREE.CopyShader; - - this.textureID = "tDiffuse"; - - this.uniforms = THREE.UniformsUtils.clone( shader.uniforms ); - - this.material = new THREE.ShaderMaterial( { - - uniforms: this.uniforms, - vertexShader: shader.vertexShader, - fragmentShader: shader.fragmentShader - - } ); - - this.renderTarget = renderTarget; - - if ( this.renderTarget === undefined ) { - - this.renderTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBFormat, stencilBuffer: false } ); - this.renderTarget.texture.name = "SavePass.rt"; - - } - - this.needsSwap = false; - - this.fsQuad = new THREE.Pass.FullScreenQuad( this.material ); - -}; - -THREE.SavePass.prototype = Object.assign( Object.create( THREE.Pass.prototype ), { - - constructor: THREE.SavePass, - - render: function ( renderer, writeBuffer, readBuffer ) { - - if ( this.uniforms[ this.textureID ] ) { - - this.uniforms[ this.textureID ].value = readBuffer.texture; - - } - - renderer.setRenderTarget( this.renderTarget ); - if ( this.clear ) renderer.clear(); - this.fsQuad.render( renderer ); - - } - -} ); diff --git a/spaces/bbz662bbz/chatgpt_cost_calc/README.md b/spaces/bbz662bbz/chatgpt_cost_calc/README.md deleted file mode 100644 index f676d2874d445e75b40236cde0a5968e57e415a4..0000000000000000000000000000000000000000 --- a/spaces/bbz662bbz/chatgpt_cost_calc/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chatgpt Cost Calc -emoji: 🏃 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 4.1.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/__init__.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/__init__.py deleted file mode 100644 index 871b6366a986e7a816a5a0dd0ca900b3ca4450c1..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# https://github.com/xinntao/BasicSR -# flake8: noqa -from .archs import * -from .data import * -from .losses import * -from .metrics import * -from .models import * -from .ops import * -from .test import * -from .train import * -from .utils import * -from .version import __gitsha__, __version__ diff --git a/spaces/better57/CHATGPT/modules/pdf_func.py b/spaces/better57/CHATGPT/modules/pdf_func.py deleted file mode 100644 index 0aba6b7b891fc527c79b887256b0cbaa81ae5b3d..0000000000000000000000000000000000000000 --- a/spaces/better57/CHATGPT/modules/pdf_func.py +++ /dev/null @@ -1,180 +0,0 @@ -from types import SimpleNamespace -import pdfplumber -import logging -from llama_index import Document - -def prepare_table_config(crop_page): - """Prepare table查找边界, 要求page为原始page - - From https://github.com/jsvine/pdfplumber/issues/242 - """ - page = crop_page.root_page # root/parent - cs = page.curves + page.edges - def curves_to_edges(): - """See https://github.com/jsvine/pdfplumber/issues/127""" - edges = [] - for c in cs: - edges += pdfplumber.utils.rect_to_edges(c) - return edges - edges = curves_to_edges() - return { - "vertical_strategy": "explicit", - "horizontal_strategy": "explicit", - "explicit_vertical_lines": edges, - "explicit_horizontal_lines": edges, - "intersection_y_tolerance": 10, - } - -def get_text_outside_table(crop_page): - ts = prepare_table_config(crop_page) - if len(ts["explicit_vertical_lines"]) == 0 or len(ts["explicit_horizontal_lines"]) == 0: - return crop_page - - ### Get the bounding boxes of the tables on the page. 
- bboxes = [table.bbox for table in crop_page.root_page.find_tables(table_settings=ts)] - def not_within_bboxes(obj): - """Check if the object is in any of the table's bbox.""" - def obj_in_bbox(_bbox): - """See https://github.com/jsvine/pdfplumber/blob/stable/pdfplumber/table.py#L404""" - v_mid = (obj["top"] + obj["bottom"]) / 2 - h_mid = (obj["x0"] + obj["x1"]) / 2 - x0, top, x1, bottom = _bbox - return (h_mid >= x0) and (h_mid < x1) and (v_mid >= top) and (v_mid < bottom) - return not any(obj_in_bbox(__bbox) for __bbox in bboxes) - - return crop_page.filter(not_within_bboxes) -# 请使用 LaTeX 表达公式,行内公式以 $ 包裹,行间公式以 $$ 包裹 - -extract_words = lambda page: page.extract_words(keep_blank_chars=True, y_tolerance=0, x_tolerance=1, extra_attrs=["fontname", "size", "object_type"]) -# dict_keys(['text', 'x0', 'x1', 'top', 'doctop', 'bottom', 'upright', 'direction', 'fontname', 'size']) - -def get_title_with_cropped_page(first_page): - title = [] # 处理标题 - x0,top,x1,bottom = first_page.bbox # 获取页面边框 - - for word in extract_words(first_page): - word = SimpleNamespace(**word) - - if word.size >= 14: - title.append(word.text) - title_bottom = word.bottom - elif word.text == "Abstract": # 获取页面abstract - top = word.top - - user_info = [i["text"] for i in extract_words(first_page.within_bbox((x0,title_bottom,x1,top)))] - # 裁剪掉上半部分, within_bbox: full_included; crop: partial_included - return title, user_info, first_page.within_bbox((x0,top,x1,bottom)) - -def get_column_cropped_pages(pages, two_column=True): - new_pages = [] - for page in pages: - if two_column: - left = page.within_bbox((0, 0, page.width/2, page.height),relative=True) - right = page.within_bbox((page.width/2, 0, page.width, page.height), relative=True) - new_pages.append(left) - new_pages.append(right) - else: - new_pages.append(page) - - return new_pages - -def parse_pdf(filename, two_column = True): - level = logging.getLogger().level - if level == logging.getLevelName("DEBUG"): - logging.getLogger().setLevel("INFO") - - with pdfplumber.open(filename) as pdf: - title, user_info, first_page = get_title_with_cropped_page(pdf.pages[0]) - new_pages = get_column_cropped_pages([first_page] + pdf.pages[1:], two_column) - - chapters = [] - # tuple (chapter_name, [pageid] (start,stop), chapter_text) - create_chapter = lambda page_start,name_top,name_bottom: SimpleNamespace( - name=[], - name_top=name_top, - name_bottom=name_bottom, - record_chapter_name = True, - - page_start=page_start, - page_stop=None, - - text=[], - ) - cur_chapter = None - - # 按页遍历PDF文档 - for idx, page in enumerate(new_pages): - page = get_text_outside_table(page) - - # 按行遍历页面文本 - for word in extract_words(page): - word = SimpleNamespace(**word) - - # 检查行文本是否以12号字体打印,如果是,则将其作为新章节开始 - if word.size >= 11: # 出现chapter name - if cur_chapter is None: - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - elif not cur_chapter.record_chapter_name or (cur_chapter.name_bottom != cur_chapter.name_bottom and cur_chapter.name_top != cur_chapter.name_top): - # 不再继续写chapter name - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - # 重置当前chapter信息 - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - - # print(word.size, word.top, word.bottom, word.text) - cur_chapter.name.append(word.text) - else: - cur_chapter.record_chapter_name = False # chapter name 结束 - cur_chapter.text.append(word.text) - else: - # 处理最后一个章节 - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - - for i in chapters: - 
logging.info(f"section: {i.name} pages:{i.page_start, i.page_stop} word-count:{len(i.text)}") - logging.debug(" ".join(i.text)) - - title = " ".join(title) - user_info = " ".join(user_info) - text = f"Article Title: {title}, Information:{user_info}\n" - for idx, chapter in enumerate(chapters): - chapter.name = " ".join(chapter.name) - text += f"The {idx}th Chapter {chapter.name}: " + " ".join(chapter.text) + "\n" - - logging.getLogger().setLevel(level) - return Document(text=text, extra_info={"title": title}) - -BASE_POINTS = """ -1. Who are the authors? -2. What is the process of the proposed method? -3. What is the performance of the proposed method? Please note down its performance metrics. -4. What are the baseline models and their performances? Please note down these baseline methods. -5. What dataset did this paper use? -""" - -READING_PROMPT = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{} -""" - -READING_PROMT_V2 = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{}, - -And You need to generate a brief but informative title for this part. -Your return format: -- title: '...' -- summary: '...' -""" - -SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper." - - -if __name__ == '__main__': - # Test code - z = parse_pdf("./build/test.pdf") - print(z["user_info"]) - print(z["title"]) \ No newline at end of file diff --git a/spaces/bguberfain/Detic/detic/modeling/roi_heads/zero_shot_classifier.py b/spaces/bguberfain/Detic/detic/modeling/roi_heads/zero_shot_classifier.py deleted file mode 100644 index edf217c6dbe74fa68e4d7653488bdd5e2e0c2f0e..0000000000000000000000000000000000000000 --- a/spaces/bguberfain/Detic/detic/modeling/roi_heads/zero_shot_classifier.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import numpy as np -import torch -from torch import nn -from torch.nn import functional as F -from detectron2.config import configurable -from detectron2.layers import Linear, ShapeSpec - -class ZeroShotClassifier(nn.Module): - @configurable - def __init__( - self, - input_shape: ShapeSpec, - *, - num_classes: int, - zs_weight_path: str, - zs_weight_dim: int = 512, - use_bias: float = 0.0, - norm_weight: bool = True, - norm_temperature: float = 50.0, - ): - super().__init__() - if isinstance(input_shape, int): # some backward compatibility - input_shape = ShapeSpec(channels=input_shape) - input_size = input_shape.channels * (input_shape.width or 1) * (input_shape.height or 1) - self.norm_weight = norm_weight - self.norm_temperature = norm_temperature - - self.use_bias = use_bias < 0 - if self.use_bias: - self.cls_bias = nn.Parameter(torch.ones(1) * use_bias) - - self.linear = nn.Linear(input_size, zs_weight_dim) - - if zs_weight_path == 'rand': - zs_weight = torch.randn((zs_weight_dim, num_classes)) - nn.init.normal_(zs_weight, std=0.01) - else: - zs_weight = torch.tensor( - np.load(zs_weight_path), - dtype=torch.float32).permute(1, 0).contiguous() # D x C - zs_weight = torch.cat( - [zs_weight, zs_weight.new_zeros((zs_weight_dim, 1))], - dim=1) # D x (C + 1) - - if self.norm_weight: - zs_weight = F.normalize(zs_weight, p=2, dim=0) - - if zs_weight_path == 'rand': - self.zs_weight = nn.Parameter(zs_weight) - else: - self.register_buffer('zs_weight', zs_weight) - - assert self.zs_weight.shape[1] == num_classes + 1, self.zs_weight.shape - - - @classmethod - def from_config(cls, cfg, input_shape): - return { - 'input_shape': input_shape, - 'num_classes': cfg.MODEL.ROI_HEADS.NUM_CLASSES, - 'zs_weight_path': cfg.MODEL.ROI_BOX_HEAD.ZEROSHOT_WEIGHT_PATH, - 'zs_weight_dim': cfg.MODEL.ROI_BOX_HEAD.ZEROSHOT_WEIGHT_DIM, - 'use_bias': cfg.MODEL.ROI_BOX_HEAD.USE_BIAS, - 'norm_weight': cfg.MODEL.ROI_BOX_HEAD.NORM_WEIGHT, - 'norm_temperature': cfg.MODEL.ROI_BOX_HEAD.NORM_TEMP, - } - - def forward(self, x, classifier=None): - ''' - Inputs: - x: B x D' - classifier_info: (C', C' x D) - ''' - x = self.linear(x) - if classifier is not None: - zs_weight = classifier.permute(1, 0).contiguous() # D x C' - zs_weight = F.normalize(zs_weight, p=2, dim=0) \ - if self.norm_weight else zs_weight - else: - zs_weight = self.zs_weight - if self.norm_weight: - x = self.norm_temperature * F.normalize(x, p=2, dim=1) - x = torch.mm(x, zs_weight) - if self.use_bias: - x = x + self.cls_bias - return x \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Download Dragon Ball Z Budokai Tenkaichi 3 Psp Iso and Unleash Your Super Saiyan Power.md b/spaces/bioriAsaeru/text-to-voice/Download Dragon Ball Z Budokai Tenkaichi 3 Psp Iso and Unleash Your Super Saiyan Power.md deleted file mode 100644 index 244704805d44666245d986d9879871258d7bd9ea..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Download Dragon Ball Z Budokai Tenkaichi 3 Psp Iso and Unleash Your Super Saiyan Power.md +++ /dev/null @@ -1,6 +0,0 @@ -

Dragon Ball Z Budokai Tenkaichi 3 Psp Iso


Download Zip https://urloso.com/2uyPHX



- - aaccfb2cb3
-
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/Haasil Full Movie Hd 1080p Subtitles Free A Masterpiece of Indian Cinema by Tigmanshu Dhulia.md b/spaces/bioriAsaeru/text-to-voice/Haasil Full Movie Hd 1080p Subtitles Free A Masterpiece of Indian Cinema by Tigmanshu Dhulia.md deleted file mode 100644 index 750d27592634518de1a6a2e50368e5576786ad69..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Haasil Full Movie Hd 1080p Subtitles Free A Masterpiece of Indian Cinema by Tigmanshu Dhulia.md +++ /dev/null @@ -1,6 +0,0 @@ -
-

Watch the movie Haasil on the free film streaming website www.onlinemovieshindi.com (new web URL: ). Online streaming or downloading the video file easily. Watch or download Haasil online movie Hindi dubbed here.

-

Haasil Full Movie Hd 1080p Subtitles Free


Download File ★★★ https://urloso.com/2uyOcm



-

Like other websites such as hdmovieslatest, filmypunjab, moviemora, and fridaybug, you can watch the free online movie Hindi dubbed here. The latest HD movies can be streamed without a proxy or unblocker app.

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/docs/CONDITIONING.md b/spaces/brainblow/AudioCreator_Music-Audio_Generation/docs/CONDITIONING.md deleted file mode 100644 index 6e356cb8e9912d3e18fc84598c1acf77c6e7abc5..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/docs/CONDITIONING.md +++ /dev/null @@ -1,146 +0,0 @@ -# AudioCraft conditioning modules - -AudioCraft provides a -[modular implementation of conditioning modules](../audiocraft/modules/conditioners.py) -that can be used with the language model to condition the generation. -The codebase was developed in order to easily extend the set of modules -currently supported to easily develop new ways of controlling the generation. - - -## Conditioning methods - -For now, we support 3 main types of conditioning within AudioCraft: -* Text-based conditioning methods -* Waveform-based conditioning methods -* Joint embedding conditioning methods for text and audio projected in a shared latent space. - -The Language Model relies on 2 core components that handle processing information: -* The `ConditionProvider` class, that maps metadata to processed conditions leveraging -all the defined conditioners for the given task. -* The `ConditionFuser` class, that takes preprocessed conditions and properly fuse the -conditioning embedding to the language model inputs following a given fusing strategy. - -Different conditioners (for text, waveform, joint embeddings...) are provided as torch -modules in AudioCraft and are used internally in the language model to process the -conditioning signals and feed them to the language model. - - -## Core concepts - -### Conditioners - -The `BaseConditioner` torch module is the base implementation for all conditioners in audiocraft. - -Each conditioner is expected to implement 2 methods: -* The `tokenize` method that is used as a preprocessing method that contains all processing -that can lead to synchronization points (e.g. BPE tokenization with transfer to the GPU). -The output of the tokenize method will then be used to feed the forward method. -* The `forward` method that takes the output of the tokenize method and contains the core computation -to obtain the conditioning embedding along with a mask indicating valid indices (e.g. padding tokens). - -### ConditionProvider - -The ConditionProvider prepares and provides conditions given a dictionary of conditioners. - -Conditioners are specified as a dictionary of attributes and the corresponding conditioner -providing the processing logic for the given attribute. - -Similarly to the conditioners, the condition provider works in two steps to avoid sychronization points: -* A `tokenize` method that takes a list of conditioning attributes for the batch, -and run all tokenize steps for the set of conditioners. -* A `forward` method that takes the output of the tokenize step and run all the forward steps -for the set of conditioners. - -The list of conditioning attributes is passed as a list of `ConditioningAttributes` -that is presented just below. - -### ConditionFuser - -Once all conditioning signals have been extracted and processed by the `ConditionProvider` -as dense embeddings, they remain to be passed to the language model along with the original -language model inputs. - -The `ConditionFuser` handles specifically the logic to combine the different conditions -to the actual model input, supporting different strategies to combine them. 
- -One can therefore define different strategies to combine or fuse the condition to the input, in particular: -* Prepending the conditioning signal to the input with the `prepend` strategy, -* Summing the conditioning signal to the input with the `sum` strategy, -* Combining the conditioning relying on a cross-attention mechanism with the `cross` strategy, -* Using input interpolation with the `input_interpolate` strategy. - -### SegmentWithAttributes and ConditioningAttributes: From metadata to conditions - -The `ConditioningAttributes` dataclass is the base class for metadata -containing all attributes used for conditioning the language model. - -It currently supports the following types of attributes: -* Text conditioning attributes: Dictionary of textual attributes used for text-conditioning. -* Wav conditioning attributes: Dictionary of waveform attributes used for waveform-based -conditioning such as the chroma conditioning. -* JointEmbed conditioning attributes: Dictionary of text and waveform attributes -that are expected to be represented in a shared latent space. - -These different types of attributes are the attributes that are processed -by the different conditioners. - -`ConditioningAttributes` are extracted from metadata loaded along the audio in the datasets, -provided that the metadata used by the dataset implements the `SegmentWithAttributes` abstraction. - -All metadata-enabled datasets to use for conditioning in AudioCraft inherits -the [`audiocraft.data.info_dataset.InfoAudioDataset`](../audiocraft/data/info_audio_dataset.py) class -and the corresponding metadata inherits and implements the `SegmentWithAttributes` abstraction. -Refer to the [`audiocraft.data.music_dataset.MusicAudioDataset`](../audiocraft/data/music_dataset.py) -class as an example. - - -## Available conditioners - -### Text conditioners - -All text conditioners are expected to inherit from the `TextConditioner` class. - -AudioCraft currently provides two text conditioners: -* The `LUTConditioner` that relies on look-up-table of embeddings learned at train time, -and relying on either no tokenizer or a spacy tokenizer. This conditioner is particularly -useful for simple experiments and categorical labels. -* The `T5Conditioner` that relies on a -[pre-trained T5 model](https://huggingface.co/docs/transformers/model_doc/t5) -frozen or fine-tuned at train time to extract the text embeddings. - -### Waveform conditioners - -All waveform conditioners are expected to inherit from the `WaveformConditioner` class and -consists of conditioning method that takes a waveform as input. The waveform conditioner -must implement the logic to extract the embedding from the waveform and define the downsampling -factor from the waveform to the resulting embedding. - -The `ChromaStemConditioner` conditioner is a waveform conditioner for the chroma features -conditioning used by MusicGen. It takes a given waveform, extract relevant stems for melody -(namely all non drums and bass stems) using a -[pre-trained Demucs model](https://github.com/facebookresearch/demucs) -and then extract the chromagram bins from the remaining mix of stems. - -### Joint embeddings conditioners - -We finally provide support for conditioning based on joint text and audio embeddings through -the `JointEmbeddingConditioner` class and the `CLAPEmbeddingConditioner` that implements such -a conditioning method relying on a [pretrained CLAP model](https://github.com/LAION-AI/CLAP). 
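To make the two-step interface above concrete, here is a rough, self-contained sketch of a toy conditioner. The class and method names below only mirror the tokenize/forward split and the look-up-table idea described in this document; the real implementations in `audiocraft.modules.conditioners` have different constructors and return types, so treat this purely as an illustration of the control flow.

```python
import torch
from torch import nn


class SketchBaseConditioner(nn.Module):
    """Stand-in for the BaseConditioner interface described above:
    tokenize() holds the preprocessing that may introduce synchronization
    points (tokenization, host-to-device copies), forward() holds the pure
    tensor computation that produces the embedding and its validity mask."""

    def __init__(self, dim: int, output_dim: int):
        super().__init__()
        self.output_proj = nn.Linear(dim, output_dim)

    def tokenize(self, *args, **kwargs):
        raise NotImplementedError

    def forward(self, inputs):
        raise NotImplementedError


class ToyLUTConditioner(SketchBaseConditioner):
    """Toy text conditioner: a learned look-up table over a fixed vocabulary,
    similar in spirit to the LUTConditioner mentioned above."""

    def __init__(self, vocab, dim: int, output_dim: int):
        super().__init__(dim, output_dim)
        self.vocab = {word: i + 1 for i, word in enumerate(vocab)}  # 0 = padding
        self.emb = nn.Embedding(len(vocab) + 1, dim, padding_idx=0)

    def tokenize(self, texts):
        # Whitespace tokenization + padding; unknown words map to padding (0).
        ids = [[self.vocab.get(w, 0) for w in t.split()] for t in texts]
        max_len = max(len(seq) for seq in ids)
        padded = torch.zeros(len(ids), max_len, dtype=torch.long)
        for row, seq in enumerate(ids):
            if seq:
                padded[row, : len(seq)] = torch.tensor(seq)
        return padded, padded != 0  # token ids and validity mask

    def forward(self, inputs):
        ids, mask = inputs
        return self.output_proj(self.emb(ids)), mask  # [B, T, output_dim], [B, T]


cond = ToyLUTConditioner(["rock", "jazz", "calm"], dim=16, output_dim=32)
embedding, mask = cond(cond.tokenize(["calm jazz", "rock"]))
print(embedding.shape, mask.shape)  # torch.Size([2, 2, 32]) torch.Size([2, 2])
```

Keeping the slow, potentially synchronizing work in `tokenize` and the pure tensor math in `forward` is what lets the condition provider batch the two phases separately across all conditioners.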
- -## Classifier Free Guidance - -We provide a Classifier Free Guidance implementation in AudioCraft. With the classifier free -guidance dropout, all attributes are dropped with the same probability. - -## Attribute Dropout - -We further provide an attribute dropout strategy. Unlike the classifier free guidance dropout, -the attribute dropout drops given attributes with a defined probability, allowing the model -not to expect all conditioning signals to be provided at once. - -## Faster computation of conditions - -Conditioners that require some heavy computation on the waveform can be cached, in particular -the `ChromaStemConditioner` or `CLAPEmbeddingConditioner`. You just need to provide the -`cache_path` parameter to them. We recommend running dummy jobs for filling up the cache quickly. -An example is provied in the [musicgen.musicgen_melody_32khz grid](../audiocraft/grids/musicgen/musicgen_melody_32khz.py). \ No newline at end of file diff --git a/spaces/breadlicker45/Text-to-music-longer/README.md b/spaces/breadlicker45/Text-to-music-longer/README.md deleted file mode 100644 index 9ba1585d8ef26e29e37ba6c67a6030a069d13324..0000000000000000000000000000000000000000 --- a/spaces/breadlicker45/Text-to-music-longer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ttm Longer -emoji: 🐢 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.8.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/data/test_coco.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/data/test_coco.py deleted file mode 100644 index caabead5527639056daeef71027a69c47ee2ebf7..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/data/test_coco.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import json -import numpy as np -import os -import tempfile -import unittest -import pycocotools.mask as mask_util - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.coco import convert_to_coco_dict, load_coco_json -from detectron2.structures import BoxMode - - -def make_mask(): - """ - Makes a donut shaped binary mask. - """ - H = 100 - W = 100 - mask = np.zeros([H, W], dtype=np.uint8) - for x in range(W): - for y in range(H): - d = np.linalg.norm(np.array([W, H]) / 2 - np.array([x, y])) - if d > 10 and d < 20: - mask[y, x] = 1 - return mask - - -def uncompressed_rle(mask): - l = mask.flatten(order="F").tolist() - counts = [] - p = False - cnt = 0 - for i in l: - if i == p: - cnt += 1 - else: - counts.append(cnt) - p = i - cnt = 1 - counts.append(cnt) - return {"counts": counts, "size": [mask.shape[0], mask.shape[1]]} - - -def make_dataset_dicts(mask, compressed: bool = True): - """ - Returns a list of dicts that represents a single COCO data point for - object detection. The single instance given by `mask` is represented by - RLE, either compressed or uncompressed. 
- """ - record = {} - record["file_name"] = "test" - record["image_id"] = 0 - record["height"] = mask.shape[0] - record["width"] = mask.shape[1] - - y, x = np.nonzero(mask) - if compressed: - segmentation = mask_util.encode(np.asarray(mask, order="F")) - else: - segmentation = uncompressed_rle(mask) - min_x = np.min(x) - max_x = np.max(x) - min_y = np.min(y) - max_y = np.max(y) - obj = { - "bbox": [min_x, min_y, max_x, max_y], - "bbox_mode": BoxMode.XYXY_ABS, - "category_id": 0, - "iscrowd": 0, - "segmentation": segmentation, - } - record["annotations"] = [obj] - return [record] - - -class TestRLEToJson(unittest.TestCase): - def test(self): - # Make a dummy dataset. - mask = make_mask() - DatasetCatalog.register("test_dataset", lambda: make_dataset_dicts(mask)) - MetadataCatalog.get("test_dataset").set(thing_classes=["test_label"]) - - # Dump to json. - json_dict = convert_to_coco_dict("test_dataset") - with tempfile.TemporaryDirectory() as tmpdir: - json_file_name = os.path.join(tmpdir, "test.json") - with open(json_file_name, "w") as f: - json.dump(json_dict, f) - # Load from json. - dicts = load_coco_json(json_file_name, "") - - # Check the loaded mask matches the original. - anno = dicts[0]["annotations"][0] - loaded_mask = mask_util.decode(anno["segmentation"]) - self.assertTrue(np.array_equal(loaded_mask, mask)) - DatasetCatalog.pop("test_dataset") - MetadataCatalog.pop("test_dataset") - - def test_uncompressed_RLE(self): - mask = make_mask() - rle = mask_util.encode(np.asarray(mask, order="F")) - uncompressed = uncompressed_rle(mask) - compressed = mask_util.frPyObjects(uncompressed, *rle["size"]) - self.assertEqual(rle, compressed) - - -class TestConvertCOCO(unittest.TestCase): - @staticmethod - def generate_data(): - record = { - "file_name": "test", - "image_id": 0, - "height": 100, - "width": 100, - "annotations": [ - { - "bbox": [10, 10, 10, 10, 5], - "bbox_mode": BoxMode.XYWHA_ABS, - "category_id": 0, - "iscrowd": 0, - }, - { - "bbox": [15, 15, 3, 3], - "bbox_mode": BoxMode.XYXY_ABS, - "category_id": 0, - "iscrowd": 0, - }, - ], - } - - return [record] - - def test_convert_to_coco(self): - DatasetCatalog.register("test_dataset", lambda: TestConvertCOCO.generate_data()) - MetadataCatalog.get("test_dataset").set(thing_classes=["test_label"]) - convert_to_coco_dict("test_dataset") - DatasetCatalog.pop("test_dataset") - MetadataCatalog.pop("test_dataset") diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x.py deleted file mode 100644 index 22016be150df4abbe912700d7ca29f8b7b72554a..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x.py +++ /dev/null @@ -1,8 +0,0 @@ -from ..common.train import train -from ..common.optim import SGD as optimizer -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.data.coco import dataloader -from ..common.models.mask_rcnn_c4 import model - -model.backbone.freeze_at = 2 -train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl" diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/finetune_speaker_v2.py b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/finetune_speaker_v2.py deleted file mode 100644 index 85fa044c2fa8e05da688cf937963fc9f592f9f6c..0000000000000000000000000000000000000000 --- 
a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/finetune_speaker_v2.py +++ /dev/null @@ -1,321 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm - -import librosa -import logging - -logging.getLogger('numba').setLevel(logging.WARNING) - -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch - - -torch.backends.cudnn.benchmark = True -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '8000' - - hps = utils.get_hparams() - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - symbols = hps['symbols'] - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - # Use gloo backend on Windows for Pytorch - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data, symbols) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32,300,400,500,600,700,800,900,1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - # train_loader = DataLoader(train_dataset, batch_size=hps.train.batch_size, num_workers=2, shuffle=False, pin_memory=True, - # collate_fn=collate_fn) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data, symbols) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=hps.train.batch_size, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - - # load existing model - _, _, _, _ = utils.load_checkpoint("./pretrained_models/G_0.pth", net_g, None, drop_speaker_emb=hps.drop_speaker_embed) - _, _, _, _ = utils.load_checkpoint("./pretrained_models/D_0.pth", net_d, None) - epoch_str = 1 - global_step = 0 - # freeze all other layers except speaker embedding - for p in net_g.parameters(): - p.requires_grad = True - for p in net_d.parameters(): - p.requires_grad = True - # for p in 
net_d.parameters(): - # p.requires_grad = False - # net_g.emb_g.weight.requires_grad = True - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - # optim_d = None - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank==0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - # train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(tqdm(train_loader)): - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask,\ - (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths, speakers) - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - 
loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank==0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, None, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_g, None, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_latest.pth".format(global_step))) - # utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - old_g=os.path.join(hps.model_dir, "G_{}.pth".format(global_step-4000)) - # old_d=os.path.join(hps.model_dir, "D_{}.pth".format(global_step-400)) - if os.path.exists(old_g): - os.remove(old_g) - # if os.path.exists(old_d): - # os.remove(old_d) - global_step += 1 - if epoch > hps.max_epochs: - print("Maximum epoch reached, closing training...") - exit() - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(eval_loader): - x, x_lengths = x.cuda(0), x_lengths.cuda(0) - spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0) - y, y_lengths = y.cuda(0), y_lengths.cuda(0) - speakers = speakers.cuda(0) - - # remove else - x = x[:1] - x_lengths = x_lengths[:1] - spec = spec[:1] - spec_lengths = spec_lengths[:1] - y = y[:1] - y_lengths = y_lengths[:1] - speakers = speakers[:1] - break - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, max_len=1000) - y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, 
- hps.data.mel_fmax - ) - image_dict = { - "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - } - audio_dict = { - "gen/audio": y_hat[0,:,:y_hat_lengths[0]] - } - if global_step == 0: - image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/cheetah003/HMMC_t2v_search/util.py b/spaces/cheetah003/HMMC_t2v_search/util.py deleted file mode 100644 index e03e51d4cfa0c9b96ee1377274b1b2bae3e49cc6..0000000000000000000000000000000000000000 --- a/spaces/cheetah003/HMMC_t2v_search/util.py +++ /dev/null @@ -1,75 +0,0 @@ -import torch -import torch.nn as nn -import threading -from torch._utils import ExceptionWrapper -import logging -import torch.nn.functional as F - -def get_a_var(obj): - if isinstance(obj, torch.Tensor): - return obj - - if isinstance(obj, list) or isinstance(obj, tuple): - for result in map(get_a_var, obj): - if isinstance(result, torch.Tensor): - return result - if isinstance(obj, dict): - for result in map(get_a_var, obj.items()): - if isinstance(result, torch.Tensor): - return result - return None - - -def parallel_apply(fct, model, inputs, device_ids): - modules = nn.parallel.replicate(model, device_ids) - assert len(modules) == len(inputs) - lock = threading.Lock() - results = {} - grad_enabled = torch.is_grad_enabled() - - def _worker(i, module, input): - torch.set_grad_enabled(grad_enabled) - device = get_a_var(input).get_device() - try: - with torch.cuda.device(device): - # this also avoids accidental slicing of `input` if it is a Tensor - if not isinstance(input, (list, tuple)): - input = (input,) - output = fct(module, *input) - with lock: - results[i] = output - except Exception: - with lock: - results[i] = ExceptionWrapper(where="in replica {} on device {}".format(i, device)) - - if len(modules) > 1: - threads = [threading.Thread(target=_worker, args=(i, module, input)) - for i, (module, input) in enumerate(zip(modules, inputs))] - - for thread in threads: - thread.start() - for thread in threads: - thread.join() - else: - _worker(0, modules[0], inputs[0]) - - outputs = [] - for i in range(len(inputs)): - output = results[i] - if isinstance(output, ExceptionWrapper): - output.reraise() - outputs.append(output) - return outputs - -def get_logger(filename=None): - logger = logging.getLogger('logger') - logger.setLevel(logging.DEBUG) - logging.basicConfig(format='%(asctime)s - %(levelname)s - %(message)s', - datefmt='%m/%d/%Y %H:%M:%S', - level=logging.INFO) - if filename is not None: - handler = logging.FileHandler(filename) - handler.setLevel(logging.DEBUG) - handler.setFormatter(logging.Formatter('%(asctime)s:%(levelname)s: %(message)s')) - logging.getLogger().addHandler(handler) - return logger \ No newline at end of file diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/train_distil_marian_enro_tpu.sh b/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/train_distil_marian_enro_tpu.sh deleted file mode 100644 index 2fce7684ab449d82431307639b6c24c975491bc2..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/train_distil_marian_enro_tpu.sh +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright 2020 The HuggingFace Team. 
All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -export WANDB_PROJECT=distil-marian -export BS=64 -export m=sshleifer/student_marian_en_ro_6_3 -export MAX_LEN=128 -export TPU_NUM_CORES=8 - -python xla_spawn.py --num_cores $TPU_NUM_CORES \ - finetune_trainer.py \ - --tokenizer_name $m --model_name_or_path $m \ - --data_dir $ENRO_DIR \ - --output_dir marian_en_ro_6_3 --overwrite_output_dir \ - --learning_rate=3e-4 \ - --warmup_steps 500 \ - --per_device_train_batch_size=$BS --per_device_eval_batch_size=$BS \ - --freeze_encoder --freeze_embeds \ - --num_train_epochs=6 \ - --save_steps 500 --eval_steps 500 \ - --logging_first_step --logging_steps 200 \ - --max_source_length $MAX_LEN --max_target_length $MAX_LEN \ - --val_max_target_length $MAX_TGT_LEN --test_max_target_length $MAX_TGT_LEN \ - --do_train --do_eval \ - --evaluation_strategy steps \ - --prediction_loss_only \ - --task translation --label_smoothing_factor 0.1 \ - "$@" diff --git a/spaces/chilge/taoli/inference_main.py b/spaces/chilge/taoli/inference_main.py deleted file mode 100644 index 825e791db86d37e955f42e8cb34323dbb248ed32..0000000000000000000000000000000000000000 --- a/spaces/chilge/taoli/inference_main.py +++ /dev/null @@ -1,65 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - -model_path = "logs/48k/G_174000-Copy1.pth" -config_path = "configs/config.json" -svc_model = Svc(model_path, config_path) -infer_tool.mkdir(["raw", "results"]) - -# 支持多个wav文件,放在raw文件夹下 -clean_names = ["君の知らない物語-src"] -trans = [-5] # 音高调整,支持正负(半音) -spk_list = ['yunhao'] # 每次同时合成多语者音色 -slice_db = -40 # 默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50 -wav_format = 'flac' # 音频输出格式 - -infer_tool.fill_a_to_b(trans, clean_names) -for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." 
not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - audio, sr = librosa.load(wav_path, mono=True, sr=None) - wav_hash = infer_tool.get_md5(audio) - if wav_hash in chunks_dict.keys(): - print("load chunks from temp") - chunks = chunks_dict[wav_hash]["chunks"] - else: - chunks = slicer.cut(wav_path, db_thresh=slice_db) - print(chunks) - chunks_dict[wav_hash] = {"chunks": chunks, "time": int(time.time())} - infer_tool.write_temp("inference/chunks_temp.json", chunks_dict) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = svc_model.infer(spk, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - - res_path = f'./results/{clean_name}_{tran}key_{spk}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/colorLib/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/colorLib/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/cihyFjudo/fairness-paper-search/Crack de McAfee Security Scan Plus A Simple Guide to Install and Activate It.md b/spaces/cihyFjudo/fairness-paper-search/Crack de McAfee Security Scan Plus A Simple Guide to Install and Activate It.md deleted file mode 100644 index 3e2dc8f47f1dd073c2c504f09c0142a0df5369f5..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Crack de McAfee Security Scan Plus A Simple Guide to Install and Activate It.md +++ /dev/null @@ -1,10 +0,0 @@ - -

Local administrator accounts provide an attack vector for attackers who gain access to a system. Credentials can be cracked offline, and more accounts mean a greater likelihood of a successful crack. Therefore, you should aim for a maximum of one local administrator account, secured appropriately.

-

crack de mcafee security scan plus


Download ❤❤❤ https://tinurli.com/2uwiLo



-

The main benefit of a strong password is security. Hackers work quickly when they are trying to access accounts. They want to steal as much information as they can in as short a time as possible. This makes an account with a strong password less inviting because cracking the code is much more involved.

-

A strong password also limits the damage that hackers can do to your personal accounts. A common strategy involves cracking the passwords of less secure sites with limited personal information. The hackers hope that they can use the password from your gym membership app to access information in your online banking account. Strong password protection prevents this situation.

-

A password is considered strong when it is difficult for a hacker to crack it quickly. Sophisticated algorithms can run through many password combinations in a short time. A password that is long, complex and unique will discourage attempts to break into your accounts.
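To see why length matters so much, here is a rough, illustrative Python sketch. The guess rate below (ten billion guesses per second) and the helper names are made up purely for the example, not taken from any real cracking tool, so treat the output as an order-of-magnitude comparison rather than a hard figure.

```python
# Illustrative only: compare brute-force keyspaces for different password lengths.
# GUESSES_PER_SECOND is an assumed figure for the example, not a real benchmark.
import string

GUESSES_PER_SECOND = 10_000_000_000  # hypothetical attacker speed

def keyspace(length: int, alphabet: str) -> int:
    """Number of possible passwords of this length drawn from the alphabet."""
    return len(alphabet) ** length

def worst_case_years(length: int, alphabet: str) -> float:
    """Worst-case brute-force time in years at the assumed guess rate."""
    seconds = keyspace(length, alphabet) / GUESSES_PER_SECOND
    return seconds / (60 * 60 * 24 * 365)

alphabet = string.ascii_letters + string.digits + string.punctuation  # 94 characters

for length in (8, 12, 16):
    print(f"{length} characters: ~{worst_case_years(length, alphabet):.2e} years worst case")
```

Even with this generous assumption about the attacker's hardware, every extra character multiplies the work by the size of the character set, which is exactly why long, complex, and unique passwords hold up so much better.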

-

Day 22. Do a tracepath to your favorite site or service. How many machines get their hands on your data between here and there?
Day 23. Connect a machine with a common OS to the internet. Measure mean time to compromise.
Day 24. Run crack against all your encrypted passwords
Day 25. Run a port scan on your own IP address (a minimal sketch of this appears just after this list)
Day 26. Do a security audit of your own computer
Day 27. Walk a tablet/netbook/PDA around your wireless access point and map its range
Day 28. Go wardriving with a friend. How many wireless access points can you find? How many are unsecured?
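For the Day 25 port-scan exercise, a minimal sketch along the following lines is enough to get started, assuming Python is available; the `HOST` value and the `scan` helper are just illustrative names. It only attempts TCP connections to ports on your own machine (127.0.0.1) and reports which ones accept. Only ever point it at hosts you own.

```python
# Minimal TCP "connect" scan of your own machine (localhost only).
# Scanning systems you do not own or administer may be illegal; keep this on 127.0.0.1.
import socket

HOST = "127.0.0.1"

def scan(host, ports, timeout=0.3):
    """Return the ports that accepted a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    found = scan(HOST, range(1, 1025))
    print("Open ports:", found if found else "none found")
```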

-

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Disco De Nipkow Pdf Downloadl A DIY Guide to Image Transmission with a Nipkow Disk.md b/spaces/cihyFjudo/fairness-paper-search/Disco De Nipkow Pdf Downloadl A DIY Guide to Image Transmission with a Nipkow Disk.md deleted file mode 100644 index 5423e9672d83a240fb8dd3e9f48b4aa776ab0eb8..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Disco De Nipkow Pdf Downloadl A DIY Guide to Image Transmission with a Nipkow Disk.md +++ /dev/null @@ -1,6 +0,0 @@ -

Disco De Nipkow Pdf Downloadl


Download Zip https://tinurli.com/2uwiJa



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/README.md b/spaces/cihyFjudo/fairness-paper-search/README.md deleted file mode 100644 index 38752d101a422fe148aba93864aafdec498cf700..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Fairness Paper Search -emoji: 🐠 -colorFrom: blue -colorTo: pink -python: 3.9.7 -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: nomnomnonono/fairness-paper-search ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git a/spaces/cihyFjudo/fairness-paper-search/Tarzan Y Jane Xxx Online Video ((FREE)) Free.md b/spaces/cihyFjudo/fairness-paper-search/Tarzan Y Jane Xxx Online Video ((FREE)) Free.md deleted file mode 100644 index 51457b6bcece3b48c6b16884a9409500022a5fe2..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Tarzan Y Jane Xxx Online Video ((FREE)) Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

Tarzan Y Jane Xxx Online Video Free


DOWNLOAD ✦✦✦ https://tinurli.com/2uwjRE



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/certifi/__main__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/certifi/__main__.py deleted file mode 100644 index 8945b5da857f4a7dec2b84f1225f012f6098418c..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/certifi/__main__.py +++ /dev/null @@ -1,12 +0,0 @@ -import argparse - -from certifi import contents, where - -parser = argparse.ArgumentParser() -parser.add_argument("-c", "--contents", action="store_true") -args = parser.parse_args() - -if args.contents: - print(contents()) -else: - print(where()) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/_build_config.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/_build_config.py deleted file mode 100644 index 7d641a0f11115657e45480d2002c6780c8f5d80a..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/contourpy/util/_build_config.py +++ /dev/null @@ -1,58 +0,0 @@ -# _build_config.py.in is converted into _build_config.py during the meson build process. - -from __future__ import annotations - - -def build_config() -> dict[str, str]: - """ - Return a dictionary containing build configuration settings. - - All dictionary keys and values are strings, for example ``False`` is - returned as ``"False"``. - """ - return dict( - # Python settings - python_version="3.10", - python_install_dir=r"/usr/local/lib/python3.10/site-packages/", - python_path=r"/tmp/build-env-bm13lnj5/bin/python", - - # Package versions - contourpy_version="1.1.0", - meson_version="1.1.1", - mesonpy_version="0.13.1", - pybind11_version="2.10.4", - - # Misc meson settings - meson_backend="ninja", - build_dir=r"/project/.mesonpy-ni3w2uwu/build/lib/contourpy/util", - source_dir=r"/project/lib/contourpy/util", - cross_build="False", - - # Build options - build_options=r"-Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md -Dvsenv=True --native-file=/project/.mesonpy-ni3w2uwu/build/meson-python-native-file.ini", - buildtype="release", - cpp_std="c++17", - debug="False", - optimization="3", - vsenv="True", - b_ndebug="if-release", - b_vscrt="from_buildtype", - - # C++ compiler - compiler_name="gcc", - compiler_version="10.2.1", - linker_id="ld.bfd", - compile_command="c++", - - # Host machine - host_cpu="x86_64", - host_cpu_family="x86_64", - host_cpu_endian="little", - host_cpu_system="linux", - - # Build machine, same as host machine if not a cross_build - build_cpu="x86_64", - build_cpu_family="x86_64", - build_cpu_endian="little", - build_cpu_system="linux", - ) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/filelock/_windows.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/filelock/_windows.py deleted file mode 100644 index 41683f48d9862c3c49b88db872beac5699c11bc2..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/filelock/_windows.py +++ /dev/null @@ -1,64 +0,0 @@ -from __future__ import annotations - -import os -import sys -from contextlib import suppress -from errno import EACCES -from pathlib import Path -from typing import cast - -from ._api import BaseFileLock -from ._util import raise_on_not_writable_file - -if sys.platform == "win32": # pragma: win32 cover - import msvcrt - - class WindowsFileLock(BaseFileLock): - """Uses 
the :func:`msvcrt.locking` function to hard lock the lock file on Windows systems.""" - - def _acquire(self) -> None: - raise_on_not_writable_file(self.lock_file) - flags = ( - os.O_RDWR # open for read and write - | os.O_CREAT # create file if not exists - | os.O_TRUNC # truncate file if not empty - ) - try: - fd = os.open(self.lock_file, flags, self._context.mode) - except OSError as exception: - if exception.errno != EACCES: # has no access to this lock - raise - else: - try: - msvcrt.locking(fd, msvcrt.LK_NBLCK, 1) - except OSError as exception: - os.close(fd) # close file first - if exception.errno != EACCES: # file is already locked - raise - else: - self._context.lock_file_fd = fd - - def _release(self) -> None: - fd = cast(int, self._context.lock_file_fd) - self._context.lock_file_fd = None - msvcrt.locking(fd, msvcrt.LK_UNLCK, 1) - os.close(fd) - - with suppress(OSError): # Probably another instance of the application hat acquired the file lock. - Path(self.lock_file).unlink() - -else: # pragma: win32 no cover - - class WindowsFileLock(BaseFileLock): - """Uses the :func:`msvcrt.locking` function to hard lock the lock file on Windows systems.""" - - def _acquire(self) -> None: - raise NotImplementedError - - def _release(self) -> None: - raise NotImplementedError - - -__all__ = [ - "WindowsFileLock", -] diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dec_fixed.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dec_fixed.c deleted file mode 100644 index c9e5cda69cea1dbc853f5cb053157222245a43de..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dec_fixed.c +++ /dev/null @@ -1,188 +0,0 @@ -/* - * Copyright (c) 2012 - * MIPS Technologies, Inc., California. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. Neither the name of the MIPS Technologies, Inc., nor the names of its - * contributors may be used to endorse or promote products derived from - * this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE MIPS TECHNOLOGIES, INC. ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE MIPS TECHNOLOGIES, INC. BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - * - * Author: Stanislav Ocovaj (socovaj@mips.com) - * - * AC3 fixed-point decoder for MIPS platforms - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#define USE_FIXED 1 -#include "ac3dec.h" -#include "codec_internal.h" -#define IMDCT_TYPE AV_TX_INT32_MDCT - -#include "ac3dec.h" - -static const int end_freq_inv_tab[8] = -{ - 50529027, 44278013, 39403370, 32292987, 27356480, 23729101, 20951060, 18755316 -}; - -static void scale_coefs ( - int32_t *dst, - const int32_t *src, - int dynrng, - int len) -{ - int i, shift; - unsigned mul, round; - int temp, temp1, temp2, temp3, temp4, temp5, temp6, temp7; - - mul = (dynrng & 0x1f) + 0x20; - shift = 4 - (sign_extend(dynrng, 9) >> 5); - if (shift > 0 ) { - round = 1 << (shift-1); - for (i=0; i> shift; - temp3 = src[i+3] * mul; - temp2 = temp2 + round; - - dst[i+1] = temp1 >> shift; - temp4 = src[i + 4] * mul; - temp3 = temp3 + round; - dst[i+2] = temp2 >> shift; - - temp5 = src[i+5] * mul; - temp4 = temp4 + round; - dst[i+3] = temp3 >> shift; - temp6 = src[i+6] * mul; - - dst[i+4] = temp4 >> shift; - temp5 = temp5 + round; - temp7 = src[i+7] * mul; - temp6 = temp6 + round; - - dst[i+5] = temp5 >> shift; - temp7 = temp7 + round; - dst[i+6] = temp6 >> shift; - dst[i+7] = temp7 >> shift; - - } - } else { - shift = -shift; - mul <<= shift; - for (i=0; i>12; - samples[1][i] = (v1+2048)>>12; - } - } else if (out_ch == 1) { - for (i = 0; i < len; i++) { - v0 = 0; - for (j = 0; j < in_ch; j++) - v0 += samples[j][i] * matrix[0][j]; - samples[0][i] = (v0+2048)>>12; - } - } -} - -#include "eac3dec.c" -#include "ac3dec.c" - -static const AVOption options[] = { - { "cons_noisegen", "enable consistent noise generation", OFFSET(consistent_noise_generation), AV_OPT_TYPE_BOOL, {.i64 = 0 }, 0, 1, PAR }, - { "drc_scale", "percentage of dynamic range compression to apply", OFFSET(drc_scale), AV_OPT_TYPE_FLOAT, {.dbl = 1.0}, 0.0, 6.0, PAR }, - { "heavy_compr", "enable heavy dynamic range compression", OFFSET(heavy_compression), AV_OPT_TYPE_BOOL, {.i64 = 0 }, 0, 1, PAR }, - { "downmix", "Request a specific channel layout from the decoder", OFFSET(downmix_layout), AV_OPT_TYPE_CHLAYOUT, {.str = NULL}, .flags = PAR }, - { NULL}, -}; - -static const AVClass ac3_decoder_class = { - .class_name = "Fixed-Point AC-3 Decoder", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -const FFCodec ff_ac3_fixed_decoder = { - .p.name = "ac3_fixed", - CODEC_LONG_NAME("ATSC A/52A (AC-3)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_AC3, - .p.priv_class = &ac3_decoder_class, - .priv_data_size = sizeof (AC3DecodeContext), - .init = ac3_decode_init, - .close = ac3_decode_end, - FF_CODEC_DECODE_CB(ac3_decode_frame), - .p.capabilities = AV_CODEC_CAP_CHANNEL_CONF | - AV_CODEC_CAP_DR1, - .p.sample_fmts = (const enum AVSampleFormat[]) { AV_SAMPLE_FMT_S16P, - AV_SAMPLE_FMT_NONE }, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, -}; diff --git 
a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/alacenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/alacenc.c deleted file mode 100644 index 9598e5861e530fa363749e0adcc002a6fbc6a03e..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/alacenc.c +++ /dev/null @@ -1,668 +0,0 @@ -/* - * ALAC audio encoder - * Copyright (c) 2008 Jaikrishnan Menon - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/opt.h" - -#include "avcodec.h" -#include "codec_internal.h" -#include "encode.h" -#include "put_bits.h" -#include "lpc.h" -#include "mathops.h" -#include "alac_data.h" - -#define DEFAULT_FRAME_SIZE 4096 -#define ALAC_EXTRADATA_SIZE 36 -#define ALAC_FRAME_HEADER_SIZE 55 -#define ALAC_FRAME_FOOTER_SIZE 3 - -#define ALAC_ESCAPE_CODE 0x1FF -#define ALAC_MAX_LPC_ORDER 30 -#define DEFAULT_MAX_PRED_ORDER 6 -#define DEFAULT_MIN_PRED_ORDER 4 -#define ALAC_MAX_LPC_PRECISION 9 -#define ALAC_MIN_LPC_SHIFT 0 -#define ALAC_MAX_LPC_SHIFT 9 - -#define ALAC_CHMODE_LEFT_RIGHT 0 -#define ALAC_CHMODE_LEFT_SIDE 1 -#define ALAC_CHMODE_RIGHT_SIDE 2 -#define ALAC_CHMODE_MID_SIDE 3 - -typedef struct RiceContext { - int history_mult; - int initial_history; - int k_modifier; - int rice_modifier; -} RiceContext; - -typedef struct AlacLPCContext { - int lpc_order; - int lpc_coeff[ALAC_MAX_LPC_ORDER+1]; - int lpc_quant; -} AlacLPCContext; - -typedef struct AlacEncodeContext { - const AVClass *class; - AVCodecContext *avctx; - int frame_size; /**< current frame size */ - int verbatim; /**< current frame verbatim mode flag */ - int compression_level; - int min_prediction_order; - int max_prediction_order; - int max_coded_frame_size; - int write_sample_size; - int extra_bits; - int32_t sample_buf[2][DEFAULT_FRAME_SIZE]; - int32_t predictor_buf[2][DEFAULT_FRAME_SIZE]; - int interlacing_shift; - int interlacing_leftweight; - PutBitContext pbctx; - RiceContext rc; - AlacLPCContext lpc[2]; - LPCContext lpc_ctx; -} AlacEncodeContext; - - -static void init_sample_buffers(AlacEncodeContext *s, int channels, - const uint8_t *samples[2]) -{ - int ch, i; - int shift = av_get_bytes_per_sample(s->avctx->sample_fmt) * 8 - - s->avctx->bits_per_raw_sample; - -#define COPY_SAMPLES(type) do { \ - for (ch = 0; ch < channels; ch++) { \ - int32_t *bptr = s->sample_buf[ch]; \ - const type *sptr = (const type *)samples[ch]; \ - for (i = 0; i < s->frame_size; i++) \ - bptr[i] = sptr[i] >> shift; \ - } \ - } while (0) - - if (s->avctx->sample_fmt == AV_SAMPLE_FMT_S32P) - COPY_SAMPLES(int32_t); - else - COPY_SAMPLES(int16_t); -} - -static void encode_scalar(AlacEncodeContext *s, int x, - int k, int write_sample_size) -{ - int divisor, q, r; - - k = FFMIN(k, s->rc.k_modifier); - divisor = (1< 8) { - // write escape code and sample value directly - 
put_bits(&s->pbctx, 9, ALAC_ESCAPE_CODE); - put_bits(&s->pbctx, write_sample_size, x); - } else { - if (q) - put_bits(&s->pbctx, q, (1<pbctx, 1, 0); - - if (k != 1) { - if (r > 0) - put_bits(&s->pbctx, k, r+1); - else - put_bits(&s->pbctx, k-1, 0); - } - } -} - -static void write_element_header(AlacEncodeContext *s, - enum AlacRawDataBlockType element, - int instance) -{ - int encode_fs = 0; - - if (s->frame_size < DEFAULT_FRAME_SIZE) - encode_fs = 1; - - put_bits(&s->pbctx, 3, element); // element type - put_bits(&s->pbctx, 4, instance); // element instance - put_bits(&s->pbctx, 12, 0); // unused header bits - put_bits(&s->pbctx, 1, encode_fs); // Sample count is in the header - put_bits(&s->pbctx, 2, s->extra_bits >> 3); // Extra bytes (for 24-bit) - put_bits(&s->pbctx, 1, s->verbatim); // Audio block is verbatim - if (encode_fs) - put_bits32(&s->pbctx, s->frame_size); // No. of samples in the frame -} - -static void calc_predictor_params(AlacEncodeContext *s, int ch) -{ - int32_t coefs[MAX_LPC_ORDER][MAX_LPC_ORDER]; - int shift[MAX_LPC_ORDER]; - int opt_order; - - if (s->compression_level == 1) { - s->lpc[ch].lpc_order = 6; - s->lpc[ch].lpc_quant = 6; - s->lpc[ch].lpc_coeff[0] = 160; - s->lpc[ch].lpc_coeff[1] = -190; - s->lpc[ch].lpc_coeff[2] = 170; - s->lpc[ch].lpc_coeff[3] = -130; - s->lpc[ch].lpc_coeff[4] = 80; - s->lpc[ch].lpc_coeff[5] = -25; - } else { - opt_order = ff_lpc_calc_coefs(&s->lpc_ctx, s->sample_buf[ch], - s->frame_size, - s->min_prediction_order, - s->max_prediction_order, - ALAC_MAX_LPC_PRECISION, coefs, shift, - FF_LPC_TYPE_LEVINSON, 0, - ORDER_METHOD_EST, ALAC_MIN_LPC_SHIFT, - ALAC_MAX_LPC_SHIFT, 1); - - s->lpc[ch].lpc_order = opt_order; - s->lpc[ch].lpc_quant = shift[opt_order-1]; - memcpy(s->lpc[ch].lpc_coeff, coefs[opt_order-1], opt_order*sizeof(int)); - } -} - -static int estimate_stereo_mode(int32_t *left_ch, int32_t *right_ch, int n) -{ - int i, best; - int32_t lt, rt; - uint64_t sum[4]; - uint64_t score[4]; - - /* calculate sum of 2nd order residual for each channel */ - sum[0] = sum[1] = sum[2] = sum[3] = 0; - for (i = 2; i < n; i++) { - lt = left_ch[i] - 2 * left_ch[i - 1] + left_ch[i - 2]; - rt = right_ch[i] - 2 * right_ch[i - 1] + right_ch[i - 2]; - sum[2] += FFABS((lt + rt) >> 1); - sum[3] += FFABS(lt - rt); - sum[0] += FFABS(lt); - sum[1] += FFABS(rt); - } - - /* calculate score for each mode */ - score[0] = sum[0] + sum[1]; - score[1] = sum[0] + sum[3]; - score[2] = sum[1] + sum[3]; - score[3] = sum[2] + sum[3]; - - /* return mode with lowest score */ - best = 0; - for (i = 1; i < 4; i++) { - if (score[i] < score[best]) - best = i; - } - return best; -} - -static void alac_stereo_decorrelation(AlacEncodeContext *s) -{ - int32_t *left = s->sample_buf[0], *right = s->sample_buf[1]; - int i, mode, n = s->frame_size; - int32_t tmp; - - mode = estimate_stereo_mode(left, right, n); - - switch (mode) { - case ALAC_CHMODE_LEFT_RIGHT: - s->interlacing_leftweight = 0; - s->interlacing_shift = 0; - break; - case ALAC_CHMODE_LEFT_SIDE: - for (i = 0; i < n; i++) - right[i] = left[i] - right[i]; - s->interlacing_leftweight = 1; - s->interlacing_shift = 0; - break; - case ALAC_CHMODE_RIGHT_SIDE: - for (i = 0; i < n; i++) { - tmp = right[i]; - right[i] = left[i] - right[i]; - left[i] = tmp + (right[i] >> 31); - } - s->interlacing_leftweight = 1; - s->interlacing_shift = 31; - break; - default: - for (i = 0; i < n; i++) { - tmp = left[i]; - left[i] = (tmp + right[i]) >> 1; - right[i] = tmp - right[i]; - } - s->interlacing_leftweight = 1; - s->interlacing_shift = 1; - 
break; - } -} - -static void alac_linear_predictor(AlacEncodeContext *s, int ch) -{ - int i; - AlacLPCContext lpc = s->lpc[ch]; - int32_t *residual = s->predictor_buf[ch]; - - if (lpc.lpc_order == 31) { - residual[0] = s->sample_buf[ch][0]; - - for (i = 1; i < s->frame_size; i++) { - residual[i] = s->sample_buf[ch][i ] - - s->sample_buf[ch][i - 1]; - } - - return; - } - - // generalised linear predictor - - if (lpc.lpc_order > 0) { - int32_t *samples = s->sample_buf[ch]; - - // generate warm-up samples - residual[0] = samples[0]; - for (i = 1; i <= lpc.lpc_order; i++) - residual[i] = sign_extend(samples[i] - samples[i-1], s->write_sample_size); - - // perform lpc on remaining samples - for (i = lpc.lpc_order + 1; i < s->frame_size; i++) { - int sum = 1 << (lpc.lpc_quant - 1), res_val, j; - - for (j = 0; j < lpc.lpc_order; j++) { - sum += (samples[lpc.lpc_order-j] - samples[0]) * - lpc.lpc_coeff[j]; - } - - sum >>= lpc.lpc_quant; - sum += samples[0]; - residual[i] = sign_extend(samples[lpc.lpc_order+1] - sum, - s->write_sample_size); - res_val = residual[i]; - - if (res_val) { - int index = lpc.lpc_order - 1; - int neg = (res_val < 0); - - while (index >= 0 && (neg ? (res_val < 0) : (res_val > 0))) { - int val = samples[0] - samples[lpc.lpc_order - index]; - int sign = (val ? FFSIGN(val) : 0); - - if (neg) - sign *= -1; - - lpc.lpc_coeff[index] -= sign; - val *= sign; - res_val -= (val >> lpc.lpc_quant) * (lpc.lpc_order - index); - index--; - } - } - samples++; - } - } -} - -static void alac_entropy_coder(AlacEncodeContext *s, int ch) -{ - unsigned int history = s->rc.initial_history; - int sign_modifier = 0, i, k; - int32_t *samples = s->predictor_buf[ch]; - - for (i = 0; i < s->frame_size;) { - int x; - - k = av_log2((history >> 9) + 3); - - x = -2 * (*samples) -1; - x ^= x >> 31; - - samples++; - i++; - - encode_scalar(s, x - sign_modifier, k, s->write_sample_size); - - history += x * s->rc.history_mult - - ((history * s->rc.history_mult) >> 9); - - sign_modifier = 0; - if (x > 0xFFFF) - history = 0xFFFF; - - if (history < 128 && i < s->frame_size) { - unsigned int block_size = 0; - - k = 7 - av_log2(history) + ((history + 16) >> 6); - - while (*samples == 0 && i < s->frame_size) { - samples++; - i++; - block_size++; - } - encode_scalar(s, block_size, k, 16); - sign_modifier = (block_size <= 0xFFFF); - history = 0; - } - - } -} - -static void write_element(AlacEncodeContext *s, - enum AlacRawDataBlockType element, int instance, - const uint8_t *samples0, const uint8_t *samples1) -{ - const uint8_t *samples[2] = { samples0, samples1 }; - int i, j, channels; - int prediction_type = 0; - PutBitContext *pb = &s->pbctx; - - channels = element == TYPE_CPE ? 
2 : 1; - - if (s->verbatim) { - write_element_header(s, element, instance); - /* samples are channel-interleaved in verbatim mode */ - if (s->avctx->sample_fmt == AV_SAMPLE_FMT_S32P) { - int shift = 32 - s->avctx->bits_per_raw_sample; - const int32_t *samples_s32[2] = { (const int32_t *)samples0, - (const int32_t *)samples1 }; - for (i = 0; i < s->frame_size; i++) - for (j = 0; j < channels; j++) - put_sbits(pb, s->avctx->bits_per_raw_sample, - samples_s32[j][i] >> shift); - } else { - const int16_t *samples_s16[2] = { (const int16_t *)samples0, - (const int16_t *)samples1 }; - for (i = 0; i < s->frame_size; i++) - for (j = 0; j < channels; j++) - put_sbits(pb, s->avctx->bits_per_raw_sample, - samples_s16[j][i]); - } - } else { - s->write_sample_size = s->avctx->bits_per_raw_sample - s->extra_bits + - channels - 1; - - init_sample_buffers(s, channels, samples); - write_element_header(s, element, instance); - - // extract extra bits if needed - if (s->extra_bits) { - uint32_t mask = (1 << s->extra_bits) - 1; - for (j = 0; j < channels; j++) { - int32_t *extra = s->predictor_buf[j]; - int32_t *smp = s->sample_buf[j]; - for (i = 0; i < s->frame_size; i++) { - extra[i] = smp[i] & mask; - smp[i] >>= s->extra_bits; - } - } - } - - if (channels == 2) - alac_stereo_decorrelation(s); - else - s->interlacing_shift = s->interlacing_leftweight = 0; - put_bits(pb, 8, s->interlacing_shift); - put_bits(pb, 8, s->interlacing_leftweight); - - for (i = 0; i < channels; i++) { - calc_predictor_params(s, i); - - put_bits(pb, 4, prediction_type); - put_bits(pb, 4, s->lpc[i].lpc_quant); - - put_bits(pb, 3, s->rc.rice_modifier); - put_bits(pb, 5, s->lpc[i].lpc_order); - // predictor coeff. table - for (j = 0; j < s->lpc[i].lpc_order; j++) - put_sbits(pb, 16, s->lpc[i].lpc_coeff[j]); - } - - // write extra bits if needed - if (s->extra_bits) { - for (i = 0; i < s->frame_size; i++) { - for (j = 0; j < channels; j++) { - put_bits(pb, s->extra_bits, s->predictor_buf[j][i]); - } - } - } - - // apply lpc and entropy coding to audio samples - for (i = 0; i < channels; i++) { - alac_linear_predictor(s, i); - - // TODO: determine when this will actually help. for now it's not used. 
- if (prediction_type == 15) { - // 2nd pass 1st order filter - int32_t *residual = s->predictor_buf[i]; - for (j = s->frame_size - 1; j > 0; j--) - residual[j] -= residual[j - 1]; - } - alac_entropy_coder(s, i); - } - } -} - -static int write_frame(AlacEncodeContext *s, AVPacket *avpkt, - uint8_t * const *samples) -{ - PutBitContext *pb = &s->pbctx; - int channels = s->avctx->ch_layout.nb_channels; - const enum AlacRawDataBlockType *ch_elements = ff_alac_channel_elements[channels - 1]; - const uint8_t *ch_map = ff_alac_channel_layout_offsets[channels - 1]; - int ch, element, sce, cpe; - - init_put_bits(pb, avpkt->data, avpkt->size); - - ch = element = sce = cpe = 0; - while (ch < channels) { - if (ch_elements[element] == TYPE_CPE) { - write_element(s, TYPE_CPE, cpe, samples[ch_map[ch]], - samples[ch_map[ch + 1]]); - cpe++; - ch += 2; - } else { - write_element(s, TYPE_SCE, sce, samples[ch_map[ch]], NULL); - sce++; - ch++; - } - element++; - } - - put_bits(pb, 3, TYPE_END); - flush_put_bits(pb); - - return put_bytes_output(pb); -} - -static av_always_inline int get_max_frame_size(int frame_size, int ch, int bps) -{ - int header_bits = 23 + 32 * (frame_size < DEFAULT_FRAME_SIZE); - return FFALIGN(header_bits + bps * ch * frame_size + 3, 8) / 8; -} - -static av_cold int alac_encode_close(AVCodecContext *avctx) -{ - AlacEncodeContext *s = avctx->priv_data; - ff_lpc_end(&s->lpc_ctx); - return 0; -} - -static av_cold int alac_encode_init(AVCodecContext *avctx) -{ - AlacEncodeContext *s = avctx->priv_data; - int ret; - uint8_t *alac_extradata; - - avctx->frame_size = s->frame_size = DEFAULT_FRAME_SIZE; - - if (avctx->sample_fmt == AV_SAMPLE_FMT_S32P) { - if (avctx->bits_per_raw_sample != 24) - av_log(avctx, AV_LOG_WARNING, "encoding as 24 bits-per-sample\n"); - avctx->bits_per_raw_sample = 24; - } else { - avctx->bits_per_raw_sample = 16; - s->extra_bits = 0; - } - - // Set default compression level - if (avctx->compression_level == FF_COMPRESSION_DEFAULT) - s->compression_level = 2; - else - s->compression_level = av_clip(avctx->compression_level, 0, 2); - - // Initialize default Rice parameters - s->rc.history_mult = 40; - s->rc.initial_history = 10; - s->rc.k_modifier = 14; - s->rc.rice_modifier = 4; - - s->max_coded_frame_size = get_max_frame_size(avctx->frame_size, - avctx->ch_layout.nb_channels, - avctx->bits_per_raw_sample); - - avctx->extradata = av_mallocz(ALAC_EXTRADATA_SIZE + AV_INPUT_BUFFER_PADDING_SIZE); - if (!avctx->extradata) - return AVERROR(ENOMEM); - avctx->extradata_size = ALAC_EXTRADATA_SIZE; - - alac_extradata = avctx->extradata; - AV_WB32(alac_extradata, ALAC_EXTRADATA_SIZE); - AV_WB32(alac_extradata+4, MKBETAG('a','l','a','c')); - AV_WB32(alac_extradata+12, avctx->frame_size); - AV_WB8 (alac_extradata+17, avctx->bits_per_raw_sample); - AV_WB8 (alac_extradata+21, avctx->ch_layout.nb_channels); - AV_WB32(alac_extradata+24, s->max_coded_frame_size); - AV_WB32(alac_extradata+28, - avctx->sample_rate * avctx->ch_layout.nb_channels * avctx->bits_per_raw_sample); // average bitrate - AV_WB32(alac_extradata+32, avctx->sample_rate); - - // Set relevant extradata fields - if (s->compression_level > 0) { - AV_WB8(alac_extradata+18, s->rc.history_mult); - AV_WB8(alac_extradata+19, s->rc.initial_history); - AV_WB8(alac_extradata+20, s->rc.k_modifier); - } - - if (s->max_prediction_order < s->min_prediction_order) { - av_log(avctx, AV_LOG_ERROR, - "invalid prediction orders: min=%d max=%d\n", - s->min_prediction_order, s->max_prediction_order); - return AVERROR(EINVAL); - } - - 
s->avctx = avctx; - - if ((ret = ff_lpc_init(&s->lpc_ctx, avctx->frame_size, - s->max_prediction_order, - FF_LPC_TYPE_LEVINSON)) < 0) { - return ret; - } - - return 0; -} - -static int alac_encode_frame(AVCodecContext *avctx, AVPacket *avpkt, - const AVFrame *frame, int *got_packet_ptr) -{ - AlacEncodeContext *s = avctx->priv_data; - int out_bytes, max_frame_size, ret; - - s->frame_size = frame->nb_samples; - - if (frame->nb_samples < DEFAULT_FRAME_SIZE) - max_frame_size = get_max_frame_size(s->frame_size, avctx->ch_layout.nb_channels, - avctx->bits_per_raw_sample); - else - max_frame_size = s->max_coded_frame_size; - - if ((ret = ff_alloc_packet(avctx, avpkt, 4 * max_frame_size)) < 0) - return ret; - - /* use verbatim mode for compression_level 0 */ - if (s->compression_level) { - s->verbatim = 0; - s->extra_bits = avctx->bits_per_raw_sample - 16; - } else { - s->verbatim = 1; - s->extra_bits = 0; - } - - out_bytes = write_frame(s, avpkt, frame->extended_data); - - if (out_bytes > max_frame_size) { - /* frame too large. use verbatim mode */ - s->verbatim = 1; - s->extra_bits = 0; - out_bytes = write_frame(s, avpkt, frame->extended_data); - } - - avpkt->size = out_bytes; - *got_packet_ptr = 1; - return 0; -} - -#if FF_API_OLD_CHANNEL_LAYOUT -static const uint64_t alac_channel_layouts[ALAC_MAX_CHANNELS + 1] = { - AV_CH_LAYOUT_MONO, - AV_CH_LAYOUT_STEREO, - AV_CH_LAYOUT_SURROUND, - AV_CH_LAYOUT_4POINT0, - AV_CH_LAYOUT_5POINT0_BACK, - AV_CH_LAYOUT_5POINT1_BACK, - AV_CH_LAYOUT_6POINT1_BACK, - AV_CH_LAYOUT_7POINT1_WIDE_BACK, - 0 -}; -#endif - - -#define OFFSET(x) offsetof(AlacEncodeContext, x) -#define AE AV_OPT_FLAG_AUDIO_PARAM | AV_OPT_FLAG_ENCODING_PARAM -static const AVOption options[] = { - { "min_prediction_order", NULL, OFFSET(min_prediction_order), AV_OPT_TYPE_INT, { .i64 = DEFAULT_MIN_PRED_ORDER }, MIN_LPC_ORDER, ALAC_MAX_LPC_ORDER, AE }, - { "max_prediction_order", NULL, OFFSET(max_prediction_order), AV_OPT_TYPE_INT, { .i64 = DEFAULT_MAX_PRED_ORDER }, MIN_LPC_ORDER, ALAC_MAX_LPC_ORDER, AE }, - - { NULL }, -}; - -static const AVClass alacenc_class = { - .class_name = "alacenc", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -const FFCodec ff_alac_encoder = { - .p.name = "alac", - CODEC_LONG_NAME("ALAC (Apple Lossless Audio Codec)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_ALAC, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_SMALL_LAST_FRAME | - AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .priv_data_size = sizeof(AlacEncodeContext), - .p.priv_class = &alacenc_class, - .init = alac_encode_init, - FF_CODEC_ENCODE_CB(alac_encode_frame), - .close = alac_encode_close, - CODEC_OLD_CHANNEL_LAYOUTS_ARRAY(alac_channel_layouts) - .p.ch_layouts = ff_alac_ch_layouts, - .p.sample_fmts = (const enum AVSampleFormat[]){ AV_SAMPLE_FMT_S32P, - AV_SAMPLE_FMT_S16P, - AV_SAMPLE_FMT_NONE }, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcamath.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcamath.h deleted file mode 100644 index 38fa9a6235bd774d6003b1dc2bc182c08fa68238..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcamath.h +++ /dev/null @@ -1,56 +0,0 @@ -/* - * Copyright (C) 2016 foo86 - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_DCAMATH_H -#define AVCODEC_DCAMATH_H - -#include "libavutil/common.h" -#include "libavutil/intmath.h" - -static inline int32_t norm__(int64_t a, int bits) -{ - if (bits > 0) - return (int32_t)((a + (INT64_C(1) << (bits - 1))) >> bits); - else - return (int32_t)a; -} - -static inline int32_t mul__(int32_t a, int32_t b, int bits) -{ - return norm__((int64_t)a * b, bits); -} - -static inline int32_t norm13(int64_t a) { return norm__(a, 13); } -static inline int32_t norm16(int64_t a) { return norm__(a, 16); } -static inline int32_t norm20(int64_t a) { return norm__(a, 20); } -static inline int32_t norm21(int64_t a) { return norm__(a, 21); } -static inline int32_t norm23(int64_t a) { return norm__(a, 23); } - -static inline int32_t mul15(int32_t a, int32_t b) { return mul__(a, b, 15); } -static inline int32_t mul16(int32_t a, int32_t b) { return mul__(a, b, 16); } -static inline int32_t mul17(int32_t a, int32_t b) { return mul__(a, b, 17); } -static inline int32_t mul22(int32_t a, int32_t b) { return mul__(a, b, 22); } -static inline int32_t mul23(int32_t a, int32_t b) { return mul__(a, b, 23); } -static inline int32_t mul31(int32_t a, int32_t b) { return mul__(a, b, 31); } -static inline int32_t mul32(int32_t a, int32_t b) { return mul__(a, b, 32); } - -static inline int32_t clip23(int32_t a) { return av_clip_intp2(a, 23); } - -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/kmvc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/kmvc.c deleted file mode 100644 index 153cea03b9ce8518bbef33d02ed8e20923f808c5..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/kmvc.c +++ /dev/null @@ -1,414 +0,0 @@ -/* - * KMVC decoder - * Copyright (c) 2006 Konstantin Shishkov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Karl Morton's Video Codec decoder - */ - -#include - -#include "avcodec.h" -#include "bytestream.h" -#include "codec_internal.h" -#include "decode.h" -#include "libavutil/common.h" - -#define KMVC_KEYFRAME 0x80 -#define KMVC_PALETTE 0x40 -#define KMVC_METHOD 0x0F -#define MAX_PALSIZE 256 - -/* - * Decoder context - */ -typedef struct KmvcContext { - AVCodecContext *avctx; - - GetByteContext g; - uint8_t *cur, *prev; - int setpal; - int palsize; - uint32_t pal[MAX_PALSIZE]; - uint8_t frm0[320 * 200], frm1[320 * 200]; -} KmvcContext; - -typedef struct BitBuf { - int bits; - int bitbuf; -} BitBuf; - -#define BLK(data, x, y) data[av_clip((x) + (y) * 320, 0, 320 * 200 -1)] - -#define kmvc_init_getbits(bb, g) bb.bits = 7; bb.bitbuf = bytestream2_get_byte(g); - -#define kmvc_getbit(bb, g, res) {\ - res = 0; \ - if (bb.bitbuf & (1 << bb.bits)) res = 1; \ - bb.bits--; \ - if(bb.bits == -1) { \ - bb.bitbuf = bytestream2_get_byte(g); \ - bb.bits = 7; \ - } \ -} - -static int kmvc_decode_intra_8x8(KmvcContext * ctx, int w, int h) -{ - BitBuf bb; - int res, val; - int i, j; - int bx, by; - int l0x, l1x, l0y, l1y; - int mx, my; - - kmvc_init_getbits(bb, &ctx->g); - - for (by = 0; by < h; by += 8) - for (bx = 0; bx < w; bx += 8) { - if (!bytestream2_get_bytes_left(&ctx->g)) { - av_log(ctx->avctx, AV_LOG_ERROR, "Data overrun\n"); - return AVERROR_INVALIDDATA; - } - kmvc_getbit(bb, &ctx->g, res); - if (!res) { // fill whole 8x8 block - val = bytestream2_get_byte(&ctx->g); - for (i = 0; i < 64; i++) - BLK(ctx->cur, bx + (i & 0x7), by + (i >> 3)) = val; - } else { // handle four 4x4 subblocks - for (i = 0; i < 4; i++) { - l0x = bx + (i & 1) * 4; - l0y = by + (i & 2) * 2; - kmvc_getbit(bb, &ctx->g, res); - if (!res) { - kmvc_getbit(bb, &ctx->g, res); - if (!res) { // fill whole 4x4 block - val = bytestream2_get_byte(&ctx->g); - for (j = 0; j < 16; j++) - BLK(ctx->cur, l0x + (j & 3), l0y + (j >> 2)) = val; - } else { // copy block from already decoded place - val = bytestream2_get_byte(&ctx->g); - mx = val & 0xF; - my = val >> 4; - if ((l0x-mx) + 320*(l0y-my) < 0 || (l0x-mx) + 320*(l0y-my) > 320*197 - 4) { - av_log(ctx->avctx, AV_LOG_ERROR, "Invalid MV\n"); - return AVERROR_INVALIDDATA; - } - for (j = 0; j < 16; j++) - BLK(ctx->cur, l0x + (j & 3), l0y + (j >> 2)) = - BLK(ctx->cur, l0x + (j & 3) - mx, l0y + (j >> 2) - my); - } - } else { // descend to 2x2 sub-sub-blocks - for (j = 0; j < 4; j++) { - l1x = l0x + (j & 1) * 2; - l1y = l0y + (j & 2); - kmvc_getbit(bb, &ctx->g, res); - if (!res) { - kmvc_getbit(bb, &ctx->g, res); - if (!res) { // fill whole 2x2 block - val = bytestream2_get_byte(&ctx->g); - BLK(ctx->cur, l1x, l1y) = val; - BLK(ctx->cur, l1x + 1, l1y) = val; - BLK(ctx->cur, l1x, l1y + 1) = val; - BLK(ctx->cur, l1x + 1, l1y + 1) = val; - } else { // copy block from already decoded place - val = bytestream2_get_byte(&ctx->g); - mx = val & 0xF; - my = val >> 4; - if ((l1x-mx) + 320*(l1y-my) < 0 || (l1x-mx) + 320*(l1y-my) > 320*199 - 2) { - av_log(ctx->avctx, AV_LOG_ERROR, "Invalid MV\n"); - return AVERROR_INVALIDDATA; - } - BLK(ctx->cur, l1x, l1y) = BLK(ctx->cur, l1x - mx, l1y - my); - BLK(ctx->cur, l1x + 1, l1y) = - BLK(ctx->cur, l1x + 1 - mx, l1y - my); - BLK(ctx->cur, l1x, l1y + 1) = - BLK(ctx->cur, l1x - mx, l1y + 1 - my); - BLK(ctx->cur, l1x + 1, l1y + 1) = 
- BLK(ctx->cur, l1x + 1 - mx, l1y + 1 - my); - } - } else { // read values for block - BLK(ctx->cur, l1x, l1y) = bytestream2_get_byte(&ctx->g); - BLK(ctx->cur, l1x + 1, l1y) = bytestream2_get_byte(&ctx->g); - BLK(ctx->cur, l1x, l1y + 1) = bytestream2_get_byte(&ctx->g); - BLK(ctx->cur, l1x + 1, l1y + 1) = bytestream2_get_byte(&ctx->g); - } - } - } - } - } - } - - return 0; -} - -static int kmvc_decode_inter_8x8(KmvcContext * ctx, int w, int h) -{ - BitBuf bb; - int res, val; - int i, j; - int bx, by; - int l0x, l1x, l0y, l1y; - int mx, my; - - kmvc_init_getbits(bb, &ctx->g); - - for (by = 0; by < h; by += 8) - for (bx = 0; bx < w; bx += 8) { - kmvc_getbit(bb, &ctx->g, res); - if (!res) { - kmvc_getbit(bb, &ctx->g, res); - if (!res) { // fill whole 8x8 block - if (!bytestream2_get_bytes_left(&ctx->g)) { - av_log(ctx->avctx, AV_LOG_ERROR, "Data overrun\n"); - return AVERROR_INVALIDDATA; - } - val = bytestream2_get_byte(&ctx->g); - for (i = 0; i < 64; i++) - BLK(ctx->cur, bx + (i & 0x7), by + (i >> 3)) = val; - } else { // copy block from previous frame - for (i = 0; i < 64; i++) - BLK(ctx->cur, bx + (i & 0x7), by + (i >> 3)) = - BLK(ctx->prev, bx + (i & 0x7), by + (i >> 3)); - } - } else { // handle four 4x4 subblocks - if (!bytestream2_get_bytes_left(&ctx->g)) { - av_log(ctx->avctx, AV_LOG_ERROR, "Data overrun\n"); - return AVERROR_INVALIDDATA; - } - for (i = 0; i < 4; i++) { - l0x = bx + (i & 1) * 4; - l0y = by + (i & 2) * 2; - kmvc_getbit(bb, &ctx->g, res); - if (!res) { - kmvc_getbit(bb, &ctx->g, res); - if (!res) { // fill whole 4x4 block - val = bytestream2_get_byte(&ctx->g); - for (j = 0; j < 16; j++) - BLK(ctx->cur, l0x + (j & 3), l0y + (j >> 2)) = val; - } else { // copy block - val = bytestream2_get_byte(&ctx->g); - mx = (val & 0xF) - 8; - my = (val >> 4) - 8; - if ((l0x+mx) + 320*(l0y+my) < 0 || (l0x+mx) + 320*(l0y+my) > 320*197 - 4) { - av_log(ctx->avctx, AV_LOG_ERROR, "Invalid MV\n"); - return AVERROR_INVALIDDATA; - } - for (j = 0; j < 16; j++) - BLK(ctx->cur, l0x + (j & 3), l0y + (j >> 2)) = - BLK(ctx->prev, l0x + (j & 3) + mx, l0y + (j >> 2) + my); - } - } else { // descend to 2x2 sub-sub-blocks - for (j = 0; j < 4; j++) { - l1x = l0x + (j & 1) * 2; - l1y = l0y + (j & 2); - kmvc_getbit(bb, &ctx->g, res); - if (!res) { - kmvc_getbit(bb, &ctx->g, res); - if (!res) { // fill whole 2x2 block - val = bytestream2_get_byte(&ctx->g); - BLK(ctx->cur, l1x, l1y) = val; - BLK(ctx->cur, l1x + 1, l1y) = val; - BLK(ctx->cur, l1x, l1y + 1) = val; - BLK(ctx->cur, l1x + 1, l1y + 1) = val; - } else { // copy block - val = bytestream2_get_byte(&ctx->g); - mx = (val & 0xF) - 8; - my = (val >> 4) - 8; - if ((l1x+mx) + 320*(l1y+my) < 0 || (l1x+mx) + 320*(l1y+my) > 320*199 - 2) { - av_log(ctx->avctx, AV_LOG_ERROR, "Invalid MV\n"); - return AVERROR_INVALIDDATA; - } - BLK(ctx->cur, l1x, l1y) = BLK(ctx->prev, l1x + mx, l1y + my); - BLK(ctx->cur, l1x + 1, l1y) = - BLK(ctx->prev, l1x + 1 + mx, l1y + my); - BLK(ctx->cur, l1x, l1y + 1) = - BLK(ctx->prev, l1x + mx, l1y + 1 + my); - BLK(ctx->cur, l1x + 1, l1y + 1) = - BLK(ctx->prev, l1x + 1 + mx, l1y + 1 + my); - } - } else { // read values for block - BLK(ctx->cur, l1x, l1y) = bytestream2_get_byte(&ctx->g); - BLK(ctx->cur, l1x + 1, l1y) = bytestream2_get_byte(&ctx->g); - BLK(ctx->cur, l1x, l1y + 1) = bytestream2_get_byte(&ctx->g); - BLK(ctx->cur, l1x + 1, l1y + 1) = bytestream2_get_byte(&ctx->g); - } - } - } - } - } - } - - return 0; -} - -static int decode_frame(AVCodecContext * avctx, AVFrame *frame, - int *got_frame, AVPacket *avpkt) -{ - KmvcContext 
*const ctx = avctx->priv_data; - uint8_t *out, *src; - int i, ret; - int header; - int blocksize; - - bytestream2_init(&ctx->g, avpkt->data, avpkt->size); - - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - - frame->palette_has_changed = ff_copy_palette(ctx->pal, avpkt, avctx); - - header = bytestream2_get_byte(&ctx->g); - - /* blocksize 127 is really palette change event */ - if (bytestream2_peek_byte(&ctx->g) == 127) { - bytestream2_skip(&ctx->g, 3); - for (i = 0; i < 127; i++) { - ctx->pal[i + (header & 0x81)] = 0xFFU << 24 | bytestream2_get_be24(&ctx->g); - bytestream2_skip(&ctx->g, 1); - } - bytestream2_seek(&ctx->g, -127 * 4 - 3, SEEK_CUR); - } - - if (header & KMVC_KEYFRAME) { - frame->key_frame = 1; - frame->pict_type = AV_PICTURE_TYPE_I; - } else { - frame->key_frame = 0; - frame->pict_type = AV_PICTURE_TYPE_P; - } - - if (header & KMVC_PALETTE) { - frame->palette_has_changed = 1; - // palette starts from index 1 and has 127 entries - for (i = 1; i <= ctx->palsize; i++) { - ctx->pal[i] = 0xFFU << 24 | bytestream2_get_be24(&ctx->g); - } - } - - if (ctx->setpal) { - ctx->setpal = 0; - frame->palette_has_changed = 1; - } - - /* make the palette available on the way out */ - memcpy(frame->data[1], ctx->pal, 1024); - - blocksize = bytestream2_get_byte(&ctx->g); - - if (blocksize != 8 && blocksize != 127) { - av_log(avctx, AV_LOG_ERROR, "Block size = %i\n", blocksize); - return AVERROR_INVALIDDATA; - } - memset(ctx->cur, 0, 320 * 200); - switch (header & KMVC_METHOD) { - case 0: - case 1: // used in palette changed event - memcpy(ctx->cur, ctx->prev, 320 * 200); - break; - case 3: - kmvc_decode_intra_8x8(ctx, avctx->width, avctx->height); - break; - case 4: - kmvc_decode_inter_8x8(ctx, avctx->width, avctx->height); - break; - default: - av_log(avctx, AV_LOG_ERROR, "Unknown compression method %i\n", header & KMVC_METHOD); - return AVERROR_INVALIDDATA; - } - - out = frame->data[0]; - src = ctx->cur; - for (i = 0; i < avctx->height; i++) { - memcpy(out, src, avctx->width); - src += 320; - out += frame->linesize[0]; - } - - /* flip buffers */ - FFSWAP(uint8_t *, ctx->cur, ctx->prev); - - *got_frame = 1; - - /* always report that the buffer was completely consumed */ - return avpkt->size; -} - - - -/* - * Init kmvc decoder - */ -static av_cold int decode_init(AVCodecContext * avctx) -{ - KmvcContext *const c = avctx->priv_data; - int i; - - c->avctx = avctx; - - if (avctx->width > 320 || avctx->height > 200) { - av_log(avctx, AV_LOG_ERROR, "KMVC supports frames <= 320x200\n"); - return AVERROR(EINVAL); - } - - c->cur = c->frm0; - c->prev = c->frm1; - - for (i = 0; i < 256; i++) { - c->pal[i] = 0xFFU << 24 | i * 0x10101; - } - - if (avctx->extradata_size < 12) { - av_log(avctx, AV_LOG_WARNING, - "Extradata missing, decoding may not work properly...\n"); - c->palsize = 127; - } else { - c->palsize = AV_RL16(avctx->extradata + 10); - if (c->palsize >= (unsigned)MAX_PALSIZE) { - c->palsize = 127; - av_log(avctx, AV_LOG_ERROR, "KMVC palette too large\n"); - return AVERROR_INVALIDDATA; - } - } - - if (avctx->extradata_size == 1036) { // palette in extradata - uint8_t *src = avctx->extradata + 12; - for (i = 0; i < 256; i++) { - c->pal[i] = AV_RL32(src); - src += 4; - } - c->setpal = 1; - } - - avctx->pix_fmt = AV_PIX_FMT_PAL8; - - return 0; -} - -const FFCodec ff_kmvc_decoder = { - .p.name = "kmvc", - CODEC_LONG_NAME("Karl Morton's video codec"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_KMVC, - .priv_data_size = sizeof(KmvcContext), - .init = decode_init, - 
FF_CODEC_DECODE_CB(decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/Makefile b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/Makefile deleted file mode 100644 index 05ed63bf3e4021f726db47ffff73b41c447efb23..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/Makefile +++ /dev/null @@ -1,98 +0,0 @@ -ARCH_HEADERS = aacsbr_mips.h aacpsy_mips.h \ - cabac.h compute_antialias_fixed.h \ - compute_antialias_float.h \ - -MIPSFPU-OBJS-$(CONFIG_AMRNB_DECODER) += mips/acelp_filters_mips.o \ - mips/celp_filters_mips.o \ - mips/celp_math_mips.o \ - mips/acelp_vectors_mips.o -MIPSFPU-OBJS-$(CONFIG_AMRWB_DECODER) += mips/acelp_filters_mips.o \ - mips/celp_filters_mips.o \ - mips/amrwbdec_mips.o \ - mips/celp_math_mips.o \ - mips/acelp_vectors_mips.o -MIPSFPU-OBJS-$(CONFIG_MPEGAUDIODSP) += mips/mpegaudiodsp_mips_float.o -MIPSDSP-OBJS-$(CONFIG_MPEGAUDIODSP) += mips/mpegaudiodsp_mips_fixed.o -MIPSFPU-OBJS-$(CONFIG_FFT) += mips/fft_mips.o -MIPSFPU-OBJS-$(CONFIG_FMTCONVERT) += mips/fmtconvert_mips.o -OBJS-$(CONFIG_AC3DSP) += mips/ac3dsp_mips.o -OBJS-$(CONFIG_AAC_DECODER) += mips/aacdec_mips.o \ - mips/aacsbr_mips.o \ - mips/sbrdsp_mips.o \ - mips/aacpsdsp_mips.o -MIPSDSP-OBJS-$(CONFIG_AAC_ENCODER) += mips/aaccoder_mips.o -MIPSFPU-OBJS-$(CONFIG_AAC_ENCODER) += mips/iirfilter_mips.o -OBJS-$(CONFIG_HEVC_DECODER) += mips/hevcdsp_init_mips.o \ - mips/hevcpred_init_mips.o -OBJS-$(CONFIG_VP9_DECODER) += mips/vp9dsp_init_mips.o -OBJS-$(CONFIG_VP8_DECODER) += mips/vp8dsp_init_mips.o -OBJS-$(CONFIG_VP3DSP) += mips/vp3dsp_init_mips.o -OBJS-$(CONFIG_H264DSP) += mips/h264dsp_init_mips.o -OBJS-$(CONFIG_H264QPEL) += mips/h264qpel_init_mips.o -OBJS-$(CONFIG_H264CHROMA) += mips/h264chroma_init_mips.o -OBJS-$(CONFIG_H264PRED) += mips/h264pred_init_mips.o -OBJS-$(CONFIG_H263DSP) += mips/h263dsp_init_mips.o -OBJS-$(CONFIG_QPELDSP) += mips/qpeldsp_init_mips.o -OBJS-$(CONFIG_HPELDSP) += mips/hpeldsp_init_mips.o -OBJS-$(CONFIG_BLOCKDSP) += mips/blockdsp_init_mips.o -OBJS-$(CONFIG_PIXBLOCKDSP) += mips/pixblockdsp_init_mips.o -OBJS-$(CONFIG_IDCTDSP) += mips/idctdsp_init_mips.o -OBJS-$(CONFIG_MPEGVIDEO) += mips/mpegvideo_init_mips.o -OBJS-$(CONFIG_MPEGVIDEOENC) += mips/mpegvideoencdsp_init_mips.o -OBJS-$(CONFIG_ME_CMP) += mips/me_cmp_init_mips.o -OBJS-$(CONFIG_MPEG4_DECODER) += mips/xvididct_init_mips.o -OBJS-$(CONFIG_VC1DSP) += mips/vc1dsp_init_mips.o -OBJS-$(CONFIG_WMV2DSP) += mips/wmv2dsp_init_mips.o -OBJS-$(CONFIG_VIDEODSP) += mips/videodsp_init.o -MSA-OBJS-$(CONFIG_HEVC_DECODER) += mips/hevcdsp_msa.o \ - mips/hevc_mc_uni_msa.o \ - mips/hevc_mc_uniw_msa.o \ - mips/hevc_mc_bi_msa.o \ - mips/hevc_mc_biw_msa.o \ - mips/hevc_idct_msa.o \ - mips/hevc_lpf_sao_msa.o \ - mips/hevcpred_msa.o -MSA-OBJS-$(CONFIG_VP9_DECODER) += mips/vp9_mc_msa.o \ - mips/vp9_lpf_msa.o \ - mips/vp9_idct_msa.o \ - mips/vp9_intra_msa.o -MSA-OBJS-$(CONFIG_VP8_DECODER) += mips/vp8_mc_msa.o \ - mips/vp8_idct_msa.o \ - mips/vp8_lpf_msa.o -MSA-OBJS-$(CONFIG_VP3DSP) += mips/vp3dsp_idct_msa.o -MSA-OBJS-$(CONFIG_H264DSP) += mips/h264dsp_msa.o \ - mips/h264idct_msa.o \ - mips/h264_deblock_msa.o -MSA-OBJS-$(CONFIG_H264QPEL) += mips/h264qpel_msa.o -MSA-OBJS-$(CONFIG_H264CHROMA) += mips/h264chroma_msa.o -MSA-OBJS-$(CONFIG_H264PRED) += mips/h264pred_msa.o -MSA-OBJS-$(CONFIG_H263DSP) += mips/h263dsp_msa.o -MSA-OBJS-$(CONFIG_QPELDSP) += mips/qpeldsp_msa.o -MSA-OBJS-$(CONFIG_HPELDSP) += mips/hpeldsp_msa.o 
-MSA-OBJS-$(CONFIG_BLOCKDSP) += mips/blockdsp_msa.o -MSA-OBJS-$(CONFIG_PIXBLOCKDSP) += mips/pixblockdsp_msa.o -MSA-OBJS-$(CONFIG_IDCTDSP) += mips/idctdsp_msa.o \ - mips/simple_idct_msa.o -MSA-OBJS-$(CONFIG_MPEGVIDEO) += mips/mpegvideo_msa.o -MSA-OBJS-$(CONFIG_MPEGVIDEOENC) += mips/mpegvideoencdsp_msa.o -MSA-OBJS-$(CONFIG_ME_CMP) += mips/me_cmp_msa.o -MSA-OBJS-$(CONFIG_VC1_DECODER) += mips/vc1dsp_msa.o - -MMI-OBJS += mips/constants.o -MMI-OBJS-$(CONFIG_H264DSP) += mips/h264dsp_mmi.o -MMI-OBJS-$(CONFIG_H264CHROMA) += mips/h264chroma_mmi.o -MMI-OBJS-$(CONFIG_H264PRED) += mips/h264pred_mmi.o -MMI-OBJS-$(CONFIG_MPEGVIDEO) += mips/mpegvideo_mmi.o -MMI-OBJS-$(CONFIG_IDCTDSP) += mips/idctdsp_mmi.o \ - mips/simple_idct_mmi.o -MMI-OBJS-$(CONFIG_MPEG4_DECODER) += mips/xvid_idct_mmi.o -MMI-OBJS-$(CONFIG_BLOCKDSP) += mips/blockdsp_mmi.o -MMI-OBJS-$(CONFIG_PIXBLOCKDSP) += mips/pixblockdsp_mmi.o -MMI-OBJS-$(CONFIG_H264QPEL) += mips/h264qpel_mmi.o -MMI-OBJS-$(CONFIG_VP8_DECODER) += mips/vp8dsp_mmi.o -MMI-OBJS-$(CONFIG_HPELDSP) += mips/hpeldsp_mmi.o -MMI-OBJS-$(CONFIG_VC1_DECODER) += mips/vc1dsp_mmi.o -MMI-OBJS-$(CONFIG_WMV2DSP) += mips/wmv2dsp_mmi.o -MMI-OBJS-$(CONFIG_HEVC_DECODER) += mips/hevcdsp_mmi.o -MMI-OBJS-$(CONFIG_VP3DSP) += mips/vp3dsp_idct_mmi.o -MMI-OBJS-$(CONFIG_VP9_DECODER) += mips/vp9_mc_mmi.o diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Free Board Games Download for PC and Mac - No Ads No Limits.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy Free Board Games Download for PC and Mac - No Ads No Limits.md deleted file mode 100644 index 4d4e1233b62e0e581c616094c401c4b4eb6197c2..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Free Board Games Download for PC and Mac - No Ads No Limits.md +++ /dev/null @@ -1,111 +0,0 @@ - -

Free Board Games Download: How to Enjoy Classic and New Games on Your PC or Mac

-

Board games are one of the oldest and most popular forms of entertainment in human history. They are fun, challenging, social, and educational. They can stimulate your brain, test your skills, improve your mood, and bring you closer to your friends and family. Whether you like strategy, trivia, puzzle, or adventure games, there is a board game for everyone.

-

free board games download


Download File https://urlca.com/2uO8a3



-

But what if you don't have the physical space, money, or time to buy and play board games? What if you want to play a board game that is out of print, rare, or expensive? What if you want to play with people who are far away or unavailable? What if you want to try new games that you have never heard of before?

-

The answer is simple: free board games download. Thanks to the internet and technology, you can now enjoy hundreds of classic and new board games on your PC or Mac without spending a dime. You can play online or offline with friends, family, or AI opponents. You can customize your rules, pieces, and boards to suit your preferences. You can access high-quality graphics, animations, and sound effects that enhance your gaming experience.

-

In this article, we will show you how to download free board games for your PC or Mac. We will also share with you the benefits of playing digital board games, the best websites to find them, and some tips and tricks to make your gaming experience more enjoyable.

-


-

The Benefits of Downloading Board Games for Free

-

Downloading board games for free has many advantages over buying physical board games. Here are some of them:

-
    -
  • Save money and space: Physical board games can be expensive, especially if you want to buy new releases or rare editions. They also take up a lot of space in your home, which can be a problem if you have limited storage or live in a small apartment. By downloading digital board games for free, you can save money and space while still enjoying your favorite games.
  • -
  • Access hundreds of games: There are hundreds of board games available online for free download. You can find classic games like Monopoly, Scrabble, Chess, Checkers, Clue, Risk, and Battleship, as well as new games based on popular movies, TV shows, books, or video games, and titles that are unique, original, or experimental. You can explore different themes, genres, and difficulty levels to suit your mood and taste.
  • -
  • Play online or offline: Depending on the game and the website, you can play board games online or offline. Online games allow you to play with other players from around the world, chat with them, and compete with them. Offline games allow you to play with yourself, your friends, or your family without needing an internet connection. You can also play with AI opponents that can challenge you and adapt to your skill level.
  • -
  • Enjoy high-quality graphics, animations, and sound effects: Digital board games have come a long way in terms of graphics, animations, and sound effects. They can create realistic, immersive, and engaging environments that enhance your gaming experience. You can see the pieces move, hear the dice roll, and feel the tension of the game. You can also zoom in and out, rotate, and change the perspective of the board to get a better view of the game.
  • -
  • Customize your rules, pieces, and boards: One of the best things about digital board games is that you can customize them to your liking. You can change the rules, the number of players, the time limit, the difficulty level, etc. You can also choose different pieces, colors, shapes, and sizes for your game. You can even create your own board or use a pre-made one from the website. You can make your game as simple or as complex as you want.
  • -
-

The Best Websites to Download Free Board Games

-

There are many websites that offer free board games download for PC and Mac. However, not all of them are reliable, safe, or legal. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Some of them may also have incomplete, outdated, or low-quality games that are not worth playing.

-

To help you avoid these problems, we have compiled a list of the best websites to download free board games for your PC or Mac. These websites are trusted, reputable, and legal sources of free board games download. They have a large variety of games that are high-quality, updated, and fun to play. They also have easy-to-use interfaces and fast download speeds.

-

Here are the best websites to download free board games:

-
    -
  • Big Fish Games: Big Fish Games is one of the largest and most popular websites for casual games. It has a huge collection of card and board games for PC and Mac that you can download for free. Some of the games include Mahjongg Dimensions Deluxe, Fairway Solitaire, Mystery Case Files: The Harbinger, etc. You can also join the Big Fish Game Club to get discounts, free trials, and other benefits.
  • -
  • Google Play: Google Play is the official app store for Android devices. It has thousands of apps and games that you can download for free or for a small fee. Some of the board games that you can find on Google Play include Monopoly, Clue, Ticket to Ride, etc. You can also sync your Google account with your device to access your games on multiple devices.
  • -
  • GameTop: GameTop is a website that offers full versions of PC and Mac games for download with no ads or time limits. It has hundreds of board games that you can choose from such as Chess Pro 3D, Ludo Master, Sudoku Quest, etc. You can also browse by categories such as puzzle, strategy, arcade, etc.
  • -
-

How to Download and Install Free Board Games on Your Device

-

Downloading and installing free board games on your PC or Mac is easy and fast. Here are the steps that you need to follow:

-
    -
  1. Choose a website that offers free board games download: Visit one of the websites mentioned above or any other website that you trust and like. Make sure that the website is safe, legal, and compatible with your device.
  2. -
  3. Browse the categories and genres of games available: Look for the board games that interest you and suit your preferences. You can use the search bar or the filters to narrow down your options.
  4. -
  5. Click on the game that you want to download and read the description and requirements: Once you find a game that you want to play, click on it to see more details about the game, such as the title, genre, description, features, screenshots, reviews, ratings, etc. You should also check the system requirements and compatibility of the game with your device to make sure that it will run smoothly and without errors.
  6. -
  7. Follow the instructions to download and install the game on your PC or Mac: Depending on the website and the game, you may need to create an account, sign in, or verify your email address before downloading the game. You may also need to accept the terms and conditions and the privacy policy of the website and the game. After that, you can click on the download button or link to start downloading the game file to your device. The download time may vary depending on the size of the file and your internet speed. Once the download is complete, you can open the file and follow the installation wizard to install the game on your device. You may need to agree to some permissions and settings during the installation process.
  8. -
  9. Launch the game and enjoy playing it: After installing the game, you can launch it from your desktop, start menu, or applications folder. You may need to sign in or create a profile to access the game. You can then choose your mode, level, options, etc. and start playing the game.
  10. -
-

Tips and Tricks to Enhance Your Board Game Experience

-

Playing board games on your PC or Mac can be a lot of fun, but it can also be frustrating if you encounter problems or difficulties. To avoid these issues and make your gaming experience more enjoyable, here are some tips and tricks that you can follow:

-
    -
  • Check the system requirements and compatibility of the games before downloading them: Not all games are compatible with all devices or operating systems. Some games may require higher specifications or features than your device can provide. To avoid wasting time and space downloading games that won't work on your device, you should always check the system requirements and compatibility of the games before downloading them. You can find this information on the website or in the description of the game.
  • -
  • Update your device and software regularly to avoid glitches and bugs: Sometimes, games may not run properly or crash due to outdated or corrupted device or software. To prevent this from happening, you should always update your device and software regularly to ensure that they are functioning well and compatible with the latest versions of the games. You can check for updates manually or enable automatic updates on your device or software settings.
  • -
  • Use a reliable antivirus program to scan the downloaded files for malware: Although most websites that offer free board game downloads are safe and legal, there is still a risk of downloading files that contain malware such as viruses, worms, or trojans, which can damage your device or steal your personal information. To protect yourself from these threats, you should always scan the downloaded files with a reliable antivirus program before opening or installing them; as an extra precaution, you can also verify a download against a published checksum, as shown in the sketch after this list. You should also avoid clicking on suspicious links or pop-ups that may appear during or after downloading.
  • -
  • Adjust the settings and options of the games to optimize your performance and comfort: Different games have different settings and options that you can adjust to optimize your performance and comfort while playing. For example, you can change the resolution, sound volume, brightness, language, etc. of the games to suit your preferences. You can also enable or disable features such as hints, tutorials, notifications, etc. depending on your needs. You can find these settings and options in the menu or options screen of the games.
  • -
  • Join online communities and forums to share your feedback, tips, and suggestions with other players: Playing board games online can be more fun and rewarding if you interact with other players who share your passion and interest. You can join online communities and forums where you can share your feedback, tips, and suggestions with other players. You can also ask for help, advice, or recommendations from other players who have more experience or knowledge about the games. You can also make new friends, join groups, participate in events, etc.
  • -
-
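As a small aside to the antivirus tip above: when a download page publishes a checksum for its installer, you can verify the file you received against it. The Python sketch below is a minimal example of that check; the file name and the published checksum are placeholders, and this complements rather than replaces a proper virus scan.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    installer = Path("board_game_setup.exe")  # placeholder downloaded file
    published = "paste-the-checksum-from-the-download-page-here"
    actual = sha256_of(installer)
    print("SHA-256 of download:", actual)
    print("Matches published value:", actual == published.strip().lower())
```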

Conclusion

-

Free board games download is a great way to enjoy classic and new games on your PC or Mac without spending a dime. You can save money and space by playing digital versions of your favorite games. You can access hundreds of games with different themes, genres, and difficulty levels. You can play online or offline with friends, family, or AI opponents. You can enjoy high-quality graphics, animations, and sound effects. You can customize your rules, pieces, and boards to suit your preferences.

-

In this article, we have shown you how to download free board games for your PC or Mac. We have also shared with you the benefits of playing digital board games, the best websites to find them, and some tips and tricks to make your gaming experience more enjoyable.

-

So what are you waiting for? Visit the websites mentioned in this article and start downloading your favorite board games for free today. You will be amazed by how much fun you can have with board games on your PC or Mac.

-

FAQs

-

Here are some frequently asked questions about free board games download:

-
    -
  • Q: Are free board game downloads legal?
  • -
  • A: Yes, free board game downloads are legal as long as they are offered by authorized websites that have the rights to distribute them. However, you should avoid downloading pirated or cracked games, which are illegal and may contain malware.
  • -
  • Q: Are free board game downloads safe?
  • -
  • A: Yes, free board game downloads are safe as long as you get them from reputable websites that have security measures and quality checks. However, you should always use a reliable antivirus program to scan the downloaded files for malware before opening or installing them.
  • -
  • Q: How do I uninstall a downloaded board game?
  • -
  • A: You can follow the same steps as uninstalling any other program on your PC or Mac. Use the control panel or the settings menu to find the game and click on the uninstall option. You can also delete the game file from your device.
  • -
  • Q: How do I update a downloaded board game?
  • -
  • A: Check the website where you downloaded the game for any new versions or patches. You can also check the game itself for update notifications or options, and then download and install the update on your device.
  • -
  • Q: How do I contact the support team for a downloaded board game?
  • -
  • A: Visit the website where you downloaded the game and look for the contact information or the help section. You can also check the game itself for any support options or links, and then email, call, or chat with the support team to ask for assistance.
  • -

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Ice Cream 1 Horror Neighborhood MOD APK - No Ads All Levels Unlocked.md b/spaces/congsaPfin/Manga-OCR/logs/Ice Cream 1 Horror Neighborhood MOD APK - No Ads All Levels Unlocked.md deleted file mode 100644 index fe4ce954070165b15d165dbe4a10b703904c0b95..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Ice Cream 1 Horror Neighborhood MOD APK - No Ads All Levels Unlocked.md +++ /dev/null @@ -1,86 +0,0 @@ - -

Ice Cream 1 Mod APK No Ads: A Scary and Fun Game for Android

-

Do you love horror games? Do you want to experience a thrilling adventure with a creepy ice cream man? If yes, then you should try Ice Cream 1, a popular game for Android devices. But wait, there's more! You can also download Ice Cream 1 mod apk no ads, which gives you access to all the features of the game without any interruptions. In this article, we will tell you everything you need to know about Ice Cream 1 mod apk no ads, including its features, how to download and install it, and why you should play it. Let's get started!

-

Introduction

-

What is Ice Cream 1?

-

Ice Cream 1 is a horror game developed by Keplerians Horror Games. It is the first installment of the Ice Scream series, which has three games so far. The game follows the story of Rod, a sinister ice cream man who kidnaps children and freezes them in his van. You play as one of the children's friends, who witnesses the abduction and decides to rescue them. You have to sneak into Rod's van, explore his lair, solve puzzles, and avoid being caught by him. The game has different levels, characters, modes, and endings, making it a fun and challenging experience.

-

ice cream 1 mod apk no ads


Download Zip >>> https://urlca.com/2uO9B9



-

Why download Ice Cream 1 mod apk no ads?

-

Ice Cream 1 is a free game that you can download from Google Play Store. However, the game has some limitations that may affect your enjoyment. For example, some of the content is locked behind in-app purchases, such as new characters, outfits, weapons, and maps. Also, the game has ads that pop up every now and then, which can be annoying and distracting. That's why we recommend downloading Ice Cream 1 mod apk no ads, which is a modified version of the game that removes all these restrictions. With Ice Cream 1 mod apk no ads, you can enjoy the game to the fullest without spending any money or watching any ads.

-

Features of Ice Cream 1 mod apk no ads

-

Unlocked content

-

One of the best features of Ice Cream 1 mod apk no ads is that it unlocks all the content that is normally paid or limited in the original game. This means that you can play with any character you want, such as Mike, J or Lis; wear any outfit you like, such as clown, pirate, or superhero; use any weapon you prefer, such as baseball bat, hammer, or slingshot; and explore any map you choose, such as neighborhood, factory, or circus. You can also access all the modes and endings of the game without any restrictions.

-

No annoying ads

-

Another great feature of Ice Cream 1 mod apk no ads is that it removes all the ads that appear in the original game. This means that you can play the game without any interruptions or distractions. You don't have to watch any videos or banners to get extra coins or lives. You don't have to wait for any loading screens or timers to continue playing. You can just focus on the game and have fun.

-

High-quality graphics and sound

-

Ice Cream 1 mod apk no ads also maintains the high-quality graphics and sound of the original game. The game has a realistic and immersive 3D environment that creates a spooky atmosphere. The game also has a dynamic and adaptive sound system that changes according to the situation and the actions of the characters. The game also has a voice-over and subtitles in different languages, such as English, Spanish, French, German, Italian, Portuguese, Russian, Turkish, Korean, Japanese, and Chinese. The game is compatible with most Android devices and runs smoothly without any lag or glitches.

-

Easy controls and gameplay

-

Ice Cream 1 mod apk no ads also has easy controls and gameplay that make it suitable for anyone who loves horror games. The game has a simple and intuitive interface that allows you to move, interact, hide, and use items with just a few taps. The game also has a tutorial and tips that guide you through the basics of the game. The game has a balanced difficulty level that adapts to your skills and preferences. You can choose between normal, hard, or extreme modes, depending on how much you want to challenge yourself. The game also has a save and load system that lets you resume your progress anytime you want.

-

How to download and install Ice Cream 1 mod apk no ads

-

Step 1: Download the apk file from a trusted source

-

The first step to download and install Ice Cream 1 mod apk no ads is to find a reliable source that offers the apk file for free. There are many websites that claim to provide the apk file, but some of them may be fake or malicious. Therefore, you should be careful and do some research before downloading anything from the internet. You can use Google or any other search engine to look for reviews and ratings of the websites that offer the apk file. You can also check the comments and feedback of other users who have downloaded the apk file from the same source. Once you find a trustworthy source, you can click on the download button and save the apk file on your device.

-

Step 2: Enable unknown sources on your device

-

The second step to download and install Ice Cream 1 mod apk no ads is to enable unknown sources on your device. This is necessary because Android devices normally do not allow installing apps from sources other than Google Play Store. To enable unknown sources, you have to go to your device settings and look for security or privacy options. There, you will find an option to allow installation of apps from unknown sources. You have to turn on this option and confirm your choice. This will allow you to install Ice Cream 1 mod apk no ads on your device.

-

Step 3: Install the apk file and enjoy the game

-

The third and final step to download and install Ice Cream 1 mod apk no ads is to install the apk file and enjoy the game. To install the apk file, you have to locate it on your device storage and tap on it. This will start the installation process and ask you for some permissions. You have to grant these permissions and follow the instructions on the screen. Once the installation is complete, you can open the game and start playing it. You will see that all the features of Ice Cream 1 mod apk no ads are available for you to use.
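As an optional alternative to tapping the file in the step above, users who have a computer with the Android platform tools installed and USB debugging enabled on their phone can sideload the APK with adb. The Python sketch below just wraps the standard `adb install` command; the file name is a placeholder for whatever your downloaded file is called.

```python
import subprocess
from pathlib import Path

def sideload_apk(apk_path: Path) -> None:
    """Install (or update) an APK on a USB-connected Android device via adb."""
    if not apk_path.is_file():
        raise FileNotFoundError(apk_path)
    # -r reinstalls the app while keeping its data if it is already installed
    result = subprocess.run(
        ["adb", "install", "-r", str(apk_path)],
        capture_output=True,
        text=True,
        check=False,
    )
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    sideload_apk(Path("ice_cream_1_mod.apk"))  # placeholder file name
```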

-


-

Conclusion

-

Summary of the main points

-

Ice Cream 1 mod apk no ads is a modified version of Ice Cream 1, a horror game for Android devices. It unlocks all the content of the game, removes all the ads, and maintains the high-quality graphics and sound of the original game. It also has easy controls and gameplay that make it fun and exciting for anyone who loves horror games.

-

Call to action

-

If you want to experience a scary and fun adventure with a creepy ice cream man, then you should download Ice Cream 1 mod apk no ads today. It is free, safe, and easy to install on your device. You will not regret it!

-

Frequently Asked Questions

-

Here are some of the most common questions that people ask about Ice Cream 1 mod apk no ads:

-
    -
  • Is Ice Cream 1 mod apk no ads safe?
  • -

    Yes, Ice Cream 1 mod apk no ads is safe as long as you download it from a trusted source. It does not contain any viruses or malware that can harm your device or steal your data.

    -
  • Is Ice Cream 1 mod apk no ads legal?
  • -

    Yes, Ice Cream 1 mod apk no ads is legal as long as you use it for personal and non-commercial purposes. It does not violate any laws or regulations that govern the use of apps or games.

    -
  • Is Ice Cream 1 mod apk no ads compatible with my device?
  • -

    Yes, Ice Cream 1 mod apk no ads is compatible with most Android devices that run on Android 4.1 or higher. It has a size of about 100 MB and does not require any additional files or data to run.

    -
  • Is Ice Cream 1 mod apk no ads updated?
  • -

    Yes, Ice Cream 1 mod apk no ads is updated regularly to fix any bugs or errors and to improve its performance and compatibility. You can check the latest version of the apk file on the website where you downloaded it from.

    -
  • Can I play Ice Cream 1 mod apk no ads offline?
  • -

    Yes, you can play Ice Cream 1 mod apk no ads offline without any internet connection. However, some features of the game may not work properly or may be disabled when you are offline, such as the leaderboard, achievements, and social media integration.

    -

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Ludo King MOD APK The Best Way to Play Ludo with Unlimited Six and No Ads.md b/spaces/congsaPfin/Manga-OCR/logs/Ludo King MOD APK The Best Way to Play Ludo with Unlimited Six and No Ads.md deleted file mode 100644 index f00ae16d42fa43103b672b1a53bbb4a796d7631d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Ludo King MOD APK The Best Way to Play Ludo with Unlimited Six and No Ads.md +++ /dev/null @@ -1,91 +0,0 @@ -
-

Introduction

-

Ludo is one of the most popular board games in the world. It is a game of strategy, luck, and fun that can be played by anyone, anywhere, anytime. But what if you could play Ludo on your smartphone or tablet with more features, more options, and more excitement? That's where Ludo King comes in.

-

ludo king mod apk (unlimited six latest version download)


DOWNLOAD ===== https://urlca.com/2uOc6d



-

Ludo King is a free-to-play mobile game application that lets you play Ludo online or offline with your friends, family, or random players from around the world. It is based on the classic board game Ludo, which originated in India in the 6th century CE. Ludo King has been downloaded over 500 million times and has been ranked as the No.1 game on Google Play Store and Apple App Store in many countries.

-

But what if you want to enjoy Ludo King even more? What if you want to get unlimited sixes on every roll, unlimited coins to buy themes and sub-games, and no ads or bugs to interrupt your gameplay? That's where Ludo King mod apk comes in.

-

Ludo King mod apk is a modified version of the original Ludo King game that gives you access to all the premium features for free. You can download and install Ludo King mod apk on your Android device easily and safely from our website. In this article, we will tell you everything you need to know about Ludo King mod apk, including its features, benefits, installation process, and more.

-

Ludo King game history

-

Origin and evolution of Ludo

-

Ludo is a game that has a long and rich history. It is derived from the ancient Indian game of Pachisi, which was played by kings and queens in the 6th century CE. Pachisi was also known as Chaupar or Chausar in different regions of India. The game involved moving four pieces around a cross-shaped board with 68 squares using six cowrie shells as dice.

-


-

Pachisi spread to other parts of Asia, Africa, and Europe over time and underwent many changes in its rules and appearance. Some of the variations of Pachisi include Parcheesi in America, Parchís in Spain, Mensch ärgere dich nicht in Germany, Petits Chevaux in France, Ludo in England, and many more. The modern version of Ludo that we know today was patented by the British game designer Alfred Collier in 1896. He simplified the board design and the rules and named it Ludo, which means "I play" in Latin.

-

Development and success of Ludo King

-

Ludo King is a digital adaptation of the classic Ludo game that was developed by Gametion Technologies, an Indian gaming company founded by Vikash Jaiswal in 2008. Jaiswal was inspired by his childhood memories of playing Ludo with his family and friends. He wanted to create a game that could bring people together and recreate the same nostalgia and fun.

-

Ludo King was launched in 2016 for Android, iOS, Windows Phone, and desktop platforms. It became an instant hit among the users and received positive reviews from critics. It also won several awards and accolades, such as the Best Casual Game of 2018 by Google Play Store, the Best Mobile Game of 2019 by India Game Developers Conference, and the Best Game of 2020 by Apple App Store.

-

Ludo King has been praised for its simple yet addictive gameplay, its colorful and attractive graphics, its smooth and user-friendly interface, its social and interactive features, and its cross-platform compatibility. It has also been credited for reviving the popularity of Ludo among the younger generation and for providing a source of entertainment and connection during the COVID-19 pandemic.

-

Ludo King game features

-

Online and offline multiplayer modes

-

One of the main features of Ludo King is that it allows you to play Ludo online or offline with up to six players. You can choose to play online with your friends or family by creating a private room or joining an existing one. You can also play online with random players from around the world by joining a public room. You can chat with your opponents and send them emojis and stickers during the game.

-

If you don't have an internet connection or you want to play solo, you can choose to play offline with the computer or with local players on the same device. You can also play offline with a Bluetooth connection or a hotspot connection with nearby devices. You can adjust the difficulty level and the number of players according to your preference.

-

Voice chat and e-greetings

-

Another feature of Ludo King that makes it more fun and engaging is that it supports voice chat and e-greetings. You can use the voice chat feature to talk to your friends or family while playing online. You can also use the e-greetings feature to send personalized messages to your opponents before or after the game. You can choose from a variety of templates and themes for your e-greetings, such as birthday, anniversary, festival, thank you, sorry, congratulations, etc.

-

Themes and sub-games

-

Ludo King also offers you a variety of themes and sub-games to enhance your gaming experience. You can change the theme of your board and dice according to your mood or occasion. You can choose from themes such as nature, disco, Egypt, pinball, candy, cake, etc. You can also unlock more themes by earning coins or buying them with real money.

-

Besides Ludo, you can also play other sub-games within Ludo King, such as Snake and Ladders, Carrom, Chess, Tic Tac Toe, etc. These sub-games are based on other classic board games that are equally fun and challenging. You can play these sub-games online or offline with multiple players or against the computer.

-

Languages and platforms

-

Ludo King is a game that is accessible to everyone, regardless of their language or platform preference. It supports 30 different languages, including English, Hindi, Spanish, French, German, Arabic, etc. You can change the language of the game from the settings menu. It also supports multiple platforms, such as Android, iOS, Windows Phone, and desktop. You can download and play Ludo King on any device of your choice. You can also sync your game progress across different devices using your Facebook account.

-

Ludo King game rules

-

How to play Ludo King

-

Ludo King is a game that is easy to learn and play. The game is played on a square board with four colored corners: red, green, yellow, and blue. Each corner has four pieces of the same color. The board also has a center with a star shape and a track with 52 squares around it.

-

The objective of the game is to move all your four pieces from your corner to the center of the board before your opponents do. To start the game, each player rolls a single die. The player with the highest number goes first, followed by the others in a clockwise order. To enter a piece into the track, you need to roll a six. You can also roll again if you get a six.

-

Once you enter a piece into the track, you can move it along the track according to the number you roll. You can also enter more pieces into the track by rolling sixes. You can move your pieces in any order you want, but you cannot move them backwards or skip any squares. You can also form a block with two or more pieces of the same color on the same square, which prevents other players from passing over them.

-

If you land on a square that already has an opponent's piece, you can capture that piece and send it back to its corner. This is called cutting or killing. However, you cannot capture a piece that is on a safe square, which is marked with a star or a cross. You also cannot capture a piece that is on its home column, which is the last six squares before the center.

-

To reach the center of the board, you need to move your pieces along your home column and then enter the star shape. You need to roll the exact number to enter the star shape. For example, if your piece is three squares away from the center, you need to roll a three to enter it. If you roll more than what you need, you have to move your piece back and forth until you get the exact number.
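To get a feel for how much the "roll a six to enter" and "exact number to finish" rules slow a single piece down, here is a rough Python simulation based only on the rules described above. It tracks one piece on a simplified straight track and ignores captures, blocks, and bonus rolls, so treat the output as an illustration rather than an exact model of Ludo King.

```python
import random

TRACK_LENGTH = 57  # simplified: squares a piece must travel from its corner to the centre

def rolls_to_bring_one_piece_home(rng: random.Random) -> int:
    """Count dice rolls needed for one piece under the simplified rules above."""
    position = -1   # -1 means the piece is still waiting in its corner
    rolls = 0
    while position < TRACK_LENGTH:
        roll = rng.randint(1, 6)
        rolls += 1
        if position == -1:
            if roll == 6:                      # a six is needed to enter the track
                position = 0
        elif position + roll <= TRACK_LENGTH:  # near the centre, overshooting rolls are wasted
            position += roll
    return rolls

if __name__ == "__main__":
    rng = random.Random(42)
    samples = [rolls_to_bring_one_piece_home(rng) for _ in range(10_000)]
    print("average rolls to bring one piece home:", round(sum(samples) / len(samples), 1))
```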

-

How to win Ludo King

-

The first player to move all their four pieces to the center of the board wins the game. The game ends when one player wins or when all players agree to end it. The winner gets coins and points based on their rank and performance. The coins can be used to buy themes and sub-games, while the points can be used to increase your level and rank.

-

Rule variations and tips

-

Ludo King allows you to customize some of the rules of the game according to your preference. You can change these rules from the settings menu before starting a game. Some of these rules are:

- Double up: This rule allows you to roll two dice instead of one, which gives you more chances to move your pieces and capture your opponents.
- Quick mode: This rule reduces the number of pieces per player from four to two, which makes the game faster and shorter.
- Magic dice: This rule gives you a special die that can be used once per game to get any number you want.
- No cut: This rule prevents you from capturing your opponents' pieces, which makes the game more peaceful and friendly.

Besides changing these rules, you can also use some tips and tricks to improve your chances of winning Ludo King. Some of these tips are:

- Plan ahead: Think about your moves before rolling the dice and try to anticipate your opponents' moves as well.
- Be strategic: Use your pieces wisely and try to form blocks, avoid cuts, and reach safe squares whenever possible.
- Be aggressive: Don't hesitate to capture your opponents' pieces whenever you get a chance, and try to delay their progress as much as possible.
- Be flexible: Adapt to different situations and change your tactics according to the circumstances.

Ludo King mod apk features

-

Unlimited sixes and coins

-

Ludo King mod apk gives you unlimited sixes and coins for free. This means that you can enter and move your pieces faster and easier than ever before. You can also buy any theme or sub-game you want without spending any real money.

-

All themes and sub-games unlocked

-

Ludo King mod apk also gives you access to all the themes and sub-games that are otherwise locked or paid in the original game. To download and install Ludo King mod apk, follow these steps:

- Step 1: Click on the download button below to start downloading Ludo King mod apk on your device.
- Step 2: Once the download is complete, go to your file manager and locate the downloaded file. Tap on it to start the installation process.
- Step 3: You may need to enable the "Unknown sources" option in your device settings to allow the installation of apps from third-party sources.
- Step 4: Follow the on-screen instructions and wait for the installation to finish.
- Step 5: Launch the Ludo King mod apk app and enjoy playing Ludo King with unlimited features.

That's it! You have successfully downloaded and installed Ludo King mod apk on your device. Now you can play Ludo King with your friends or family online or offline with more fun and excitement. You can also explore the different themes and sub-games that are available in Ludo King mod apk. You can also use the voice chat and e-greetings features to communicate and interact with your opponents.

-

We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. And if you liked this article, please share it with your friends and family who love playing Ludo King.

-

FAQs

-

Here are some of the frequently asked questions about Ludo King mod apk:

-

Q: Is Ludo King mod apk safe to use?

-

A: Yes, Ludo King mod apk is safe to use. It does not contain any viruses, malware, or spyware that can harm your device or data. It is also compatible with most Android devices and does not require any root access or special permissions.

-

Q: Is Ludo King mod apk legal to use?

-

A: Ludo King mod apk is not an official product of Gametion Technologies or any other affiliated company. It is a fan-made modification of the original Ludo King game that is meant for entertainment purposes only. It is not intended to infringe any trademark or copyright of the original game or its developers. However, using Ludo King mod apk may violate the terms and conditions of the original game and may result in a ban or suspension of your account. Therefore, use Ludo King mod apk at your own risk and discretion.

-

Q: How can I update Ludo King mod apk?

-

A: Ludo King mod apk is updated regularly to keep up with the latest version of the original game and to fix any bugs or errors. You can check for updates on our website or on the app itself. To update Ludo King mod apk, you need to download and install the latest version of the app from our website. You do not need to uninstall the previous version of the app.

-

Q: How can I uninstall Ludo King mod apk?

-

A: If you want to uninstall Ludo King mod apk from your device, you can do so easily by following these steps:

- Step 1: Go to your device settings and find the apps or applications option.
- Step 2: Find and select Ludo King mod apk from the list of installed apps.
- Step 3: Tap on the uninstall button and confirm your action.
- Step 4: Wait for the uninstallation process to complete.

That's it! You have successfully uninstalled Ludo King mod apk from your device. You can also delete the downloaded file from your file manager if you want.
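If the app does not show up where you expect in your device settings, another option, assuming the same adb setup mentioned above for sideloading, is to remove it from a computer. The sketch below first lists installed packages whose names contain a keyword so you can confirm the exact package name before uninstalling anything; the keyword is only a guess for illustration.

```python
import subprocess

def packages_containing(keyword: str) -> list[str]:
    """Return installed package names that contain the keyword, via adb."""
    output = subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.removeprefix("package:").strip()
            for line in output.splitlines() if keyword in line]

def uninstall(package_name: str) -> None:
    """Remove a package from the connected device."""
    subprocess.run(["adb", "uninstall", package_name], check=False)

if __name__ == "__main__":
    for package in packages_containing("ludo"):
        print("found installed package:", package)
        # uninstall(package)  # uncomment only after confirming it is the right app
```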

-

Q: Can I play Ludo King mod apk with my friends who have the original game?

-

A: Yes, you can play Ludo King mod apk with your friends who have the original game. However, you may face some compatibility issues or errors while playing online. To avoid this, you can either play offline with your friends on the same device or use a hotspot connection or a Bluetooth connection with nearby devices.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Pink Games APK Play with Cute Characters and Amazing Graphics.md b/spaces/congsaPfin/Manga-OCR/logs/Pink Games APK Play with Cute Characters and Amazing Graphics.md deleted file mode 100644 index 2a1e8f500dcc3555a4bb5c9515fea814fff21469..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Pink Games APK Play with Cute Characters and Amazing Graphics.md +++ /dev/null @@ -1,90 +0,0 @@ -
-

Pink Games APK Free Download: How to Play the Best Games for Girls on Your Android Device

-

If you are a girl who loves playing games on your Android device, you might be interested in pink games. Pink games are games that are designed for girls, with cute graphics, fun gameplay, and feminine themes. Whether you like dressing up, cooking, or managing a pop star group, there is a pink game for you. In this article, we will show you how to download and install pink games APK for free on your Android device, and introduce you to some of the best pink games that you can play.

-

The Benefits of Playing Pink Games on Your Android Device

-

Pink games are not only fun and entertaining, but they also have some benefits for girls who play them. Here are some of them:

-

pink games apk free download


DOWNLOAD ---> https://urlca.com/2uOeMW



-

You can enjoy a variety of games for different tastes and preferences

-

One of the advantages of playing pink games is that you can choose from a wide range of games that suit your interests and hobbies. Whether you like fashion, beauty, music, or adventure, there is a pink game for you. You can also switch between different genres and styles of games whenever you want, without getting bored.

-

You can customize your avatar and interact with other players in real-time

-

Another benefit of playing pink games is that you can create your own avatar and personalize it with different outfits, accessories, hairstyles, and makeup. You can also chat with other players who share your passion for pink games, and make new friends from around the world. Some pink games also allow you to join clubs, teams, or groups, where you can cooperate and compete with other players in various challenges and events.

-

You can access the games offline and save your progress online

-

A third benefit of playing pink games is that you can play them anytime and anywhere, even without an internet connection. You can download the games on your device and enjoy them offline whenever you want. You can also save your progress online, so you don't have to worry about losing your data or achievements. You can also sync your account across different devices, so you can continue playing where you left off.

-

The Best Pink Games to Download for Free on Your Android Device

-

Now that you know the benefits of playing pink games, you might be wondering which ones to download and play. Here are some of the best pink games that we recommend:

-

-

BLACKPINK THE GAME

-

If you are a fan of the famous K-pop group BLACKPINK, this game is for you. BLACKPINK THE GAME is an Android game where you manage the four members of the group: Jennie Kim, Jisoo, Lisa, and Rosé. You can help them with their training, schedule, outfits, concerts, and more. You can also play real-time mini games with friends in BLACKPINK WORLD, a space where you can meet new friends who love BLACKPINK as much as you do. You can also customize your own 3D avatar with various items inspired by BLACKPINK's style. You can download BLACKPINK THE GAME APK for free from this link.

-

Princess Salon

-

If you love princesses and fairy tales, this game is for you. Princess Salon is an Android game where you can dress up, makeover, and spa four beautiful princesses: Sophia, Olivia, Emma, and Mia. You can choose from hundreds of dresses, shoes, jewelry, accessories, hairstyles, and makeup to create your own princess look. You can also take photos of your princesses and share them with your friends. You can download Princess Salon APK for free from this link.

-

Cooking Fever

-

If you enjoy cooking and baking, this game is for you. Cooking Fever is an Android game where you can cook delicious dishes and desserts from around the world in over 1000 levels. You can use more than 150 ingredients to prepare hundreds of tasty recipes, such as burgers, pizzas, sushi, cakes, and ice cream. You can also upgrade your kitchen appliances and utensils to make your cooking faster and easier. You can also decorate your restaurants and attract more customers. You can download Cooking Fever APK for free from this link.

-

How to Download and Install Pink Games APK on Your Android Device

-

Now that you know some of the best pink games to play on your Android device, you might be wondering how to download and install them. Here are the steps you need to follow:

-

Find a reliable source for downloading the APK files of the games you want to play

-

The first step is to find a trustworthy website that offers the APK files of the pink games you want to play. APK files are the installation files for Android apps that are not available on the Google Play Store. You can use the links we provided above, or search for other websites that offer pink games APK downloads. However, be careful not to download any malicious or fake files that might harm your device or steal your personal information.

-

Enable the installation of apps from unknown sources on your device settings

-

The second step is to allow your device to install apps from unknown sources. This means that you can install apps that are not from the Google Play Store, such as the pink games APK files. To do this, go to your device settings and look for the security or privacy option. Then, find the option that says "Unknown sources" or "Install unknown apps" and enable it. This will let you install the pink games APK files on your device.

-

Download and install the APK files of the games you want to play

-

The third step is to download and install the APK files of the pink games you want to play. To do this, go to the website where you found the APK files and click on the download button. Then, wait for the file to finish downloading on your device. After that, open the file and follow the instructions on the screen to install the game. Once the installation is complete, you can launch the game and enjoy playing it.
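Before tapping a downloaded file to install it, you can run a quick structural sanity check: an APK is a ZIP archive and always contains an AndroidManifest.xml entry, so a file that fails this test is not a real APK. The Python sketch below does that check; the file name is a placeholder, and passing the check only means the file is shaped like an APK, not that it is safe.

```python
import zipfile
from pathlib import Path

def looks_like_apk(path: Path) -> bool:
    """Cheap structural check: a real APK is a ZIP containing AndroidManifest.xml."""
    if not path.is_file() or not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as archive:
        return "AndroidManifest.xml" in archive.namelist()

if __name__ == "__main__":
    candidate = Path("pink_game.apk")  # placeholder for the file you downloaded
    print("structurally an APK:", looks_like_apk(candidate))
```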

-

Conclusion

-

Pink games are games that are designed for girls who love playing games on their Android devices. They have cute graphics, fun gameplay, and feminine themes that appeal to different tastes and preferences. They also have some benefits, such as allowing you to customize your avatar, interact with other players, and access the games offline. Some of the best pink games that you can download for free on your Android device are BLACKPINK THE GAME, Princess Salon, and Cooking Fever. To download and install them, you need to find a reliable source for the APK files, enable the installation of apps from unknown sources on your device settings, and download and install the APK files of the games you want to play.

-

We hope this article helped you learn more about pink games APK free download and how to play them on your Android device. If you have any questions or comments, feel free to leave them below. Happy gaming!

-

FAQs

-

Here are some frequently asked questions about pink games APK free download:

-

What are APK files?

-

APK files are the installation files for Android apps that are not available on the Google Play Store. They allow you to install apps that are not officially supported by Google or your device manufacturer.

-

Are pink games safe to download and play?

-

Pink games are generally safe to download and play if you get them from a reputable source. However, some websites might offer fake or malicious files that might harm your device or steal your personal information. Therefore, it is important to check the reviews and ratings of the website before downloading any APK files.

-

Do I need an internet connection to play pink games?

-

Some pink games require an internet connection to play, while others can be played offline. For example, BLACKPINK THE GAME requires an internet connection to access the online features, such as chatting with other players and joining clubs. However, you can still play the offline mode without an internet connection. On the other hand, Princess Salon and Cooking Fever can be played offline without any problem. However, you might need an internet connection to save your progress online or sync your account across different devices.

-

How can I update the pink games that I downloaded?

-

To update the pink games that you downloaded, you need to check the website where you got the APK files for any new versions. If there is a new version available, you need to download and install it on your device. Alternatively, some pink games might notify you when there is a new update available and prompt you to download and install it.

-

Can I play pink games on other devices besides Android?

-

Some pink games are also available on other devices besides Android, such as iOS, Windows, or Mac. However, you might not be able to use the APK files to install them on those devices. Instead, you might need to use the official app stores or websites of those devices to download and install the games.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Revit 2017 Content Library Best Practices and Recommendations.md b/spaces/congsaPfin/Manga-OCR/logs/Revit 2017 Content Library Best Practices and Recommendations.md deleted file mode 100644 index 7f9f3e6b5a8fc3cca79b8bfe3d1784cacbbe600e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Revit 2017 Content Library Best Practices and Recommendations.md +++ /dev/null @@ -1,126 +0,0 @@ -
-

How to Download and Install Revit Content Libraries for 2017

-

Revit is a building information modeling (BIM) software that allows architects, engineers, and contractors to design and document a building in 3D. Revit helps you create parametric components, coordinate data inputs, streamline project management, and produce federated project deliverables.

-

One of the key features of Revit is the use of content libraries, which are collections of files specific to a language, region, and discipline. Content libraries include templates, families, materials, and other elements that you can use in your Revit project. Content libraries can help you save time, improve consistency, and enhance quality in your design and documentation.

-

revit library download 2017


DOWNLOAD ✦✦✦ https://urlca.com/2uOcHN



-

In this article, we will show you how to download and install Revit content libraries for 2017 and earlier versions. We will also explain how to access them in your project, what benefits they offer, and what common problems you may encounter with them.

-

How to download Revit content libraries for 2017 and earlier versions

-

If you are using Revit 2020 or later versions, you do not need to download all the content libraries locally. You can use the Load Autodesk Family command to load default library families from the cloud on demand. If you prefer to keep a local copy of the content, you can still download the library from the Autodesk website.

-

If you are using Revit 2017 or earlier versions, you need to manually download the content libraries from the Autodesk website. Here are the steps to follow:

-
    -
  1. Go to the content download page on the Autodesk website and select the version of Revit that you are using.
  2. -
  3. Scroll down to find the content library that matches your language, region, and discipline.
  4. -
  5. Click on the link to download the content library file.
  6. -
  7. Save the file to a location on your computer where you can easily find it.
  8. -
-

How to install Revit content libraries on your computer

-

Once you have downloaded the content library file, you need to install it on your computer. Here are the steps to follow:

-
1. Double-click on the file to start the installation wizard.
2. Select Uninstall/Change.
3. Select Add/Remove Features.
4. In the dialog box, select the libraries that you want to install or remove.
5. Click Next until you finish the installation process.
-

How to access Revit content libraries in your project

-

After installing the content libraries on your computer, you can access them in your Revit project. Here are the steps to follow:

-
1. Open or create a new Revit project.
2. Select Insert tab > Load Family.
3. In the Load Family dialog box, browse to the location where you installed the content library (usually C:\ProgramData\Autodesk\RVT ).
4. Select the family that you want to load into your project.
5. Click Open.
6. Place the family in your project as needed.
-
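If you want to confirm what the installer actually put on disk before browsing in the Load Family dialog, a short Python sketch such as the one below can list the family (.rfa) files under the library folder. The path is an assumption based on the default install location mentioned above; adjust it to whatever your Revit Options dialog shows under File Locations > Places.

```python
# Minimal sketch: count and list the Revit family (.rfa) files in a content library folder.
# LIBRARY_ROOT is an assumed default location -- change it to match your installation.
from pathlib import Path

LIBRARY_ROOT = Path(r"C:\ProgramData\Autodesk\RVT 2017\Libraries")

def list_families(root: Path) -> list[Path]:
    """Return all .rfa files found anywhere below the library root."""
    if not root.is_dir():
        raise FileNotFoundError(f"Content library folder not found: {root}")
    return sorted(root.rglob("*.rfa"))

if __name__ == "__main__":
    families = list_families(LIBRARY_ROOT)
    print(f"Found {len(families)} families under {LIBRARY_ROOT}")
    for family in families[:20]:  # print a small sample
        print(" -", family.relative_to(LIBRARY_ROOT))
```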

Benefits of using Revit content libraries for your design and documentation

-

Using Revit content libraries for your design and documentation has many benefits. Here are some of them:

-
- You can save time by using ready-made elements instead of creating them from scratch.
- You can improve consistency by using standardized elements across your project.
- You can enhance quality by using elements that are tested and verified by Autodesk or other reputable sources.
-

Common problems with Revit content libraries and how to troubleshoot them

-

Despite the benefits of using Revit content libraries, you may encounter some problems with them. Here are some of the common problems and how to troubleshoot them:

-

| Problem | Solution |
| --- | --- |
| The content library is missing or corrupted. | Download and install the content library again from the Autodesk website. |
| The content library is outdated or incompatible. | Upgrade your Revit version to the latest one or use a compatible content library for your version. |
| The content library is not showing up in the Load Family dialog box. | Check the path of the content library in the Revit Options dialog box (File tab > Options > File Locations > Places). |
| The content library is slow to load or causes performance issues. | Reduce the size of the content library by removing unnecessary files, or use a cloud-based content library service such as BIM 360 Design. |
| The content library does not meet your specific needs or standards. | Create your own custom content library by modifying existing families or creating new ones using the Family Editor. |
-
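For the "slow to load" row in the table above, it can help to see which families are actually taking up space before you start pruning. The sketch below ranks the largest .rfa files in a library folder; the path is an assumption and should point at one of the folders listed under File Locations > Places.

```python
# Minimal sketch: rank the largest Revit family files so you know what to prune or relocate.
# LIBRARY_ROOT is an assumed default -- point it at your own library folder.
from pathlib import Path

LIBRARY_ROOT = Path(r"C:\ProgramData\Autodesk\RVT 2017\Libraries")

def largest_families(root: Path, top_n: int = 15) -> list[tuple[float, Path]]:
    """Return (size_in_mb, path) pairs for the biggest .rfa files under root."""
    sizes = [(f.stat().st_size / 2**20, f) for f in root.rglob("*.rfa")]
    return sorted(sizes, key=lambda item: item[0], reverse=True)[:top_n]

if __name__ == "__main__":
    for size_mb, path in largest_families(LIBRARY_ROOT):
        print(f"{size_mb:8.1f} MB  {path.relative_to(LIBRARY_ROOT)}")
```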

Conclusion

-

In this article, we have shown you how to download and install Revit content libraries for 2017 and earlier versions. We have also explained how to access them in your project, what benefits they offer, and what common problems you may encounter with them.

-

Revit content libraries are essential for building design and construction, as they help you save time, improve consistency, and enhance quality in your design and documentation. By using Revit content libraries, you can create parametric components, coordinate data inputs, streamline project management, and produce federated project deliverables.

-

If you want to learn more about Revit content libraries, you can visit the Autodesk website or check out some of the online courses and tutorials available. You can also join the Revit community forums and blogs to share your questions and feedback with other users and experts.

-

FAQs: Frequently asked questions about Revit content libraries

-

Q: What are the differences between system families and loadable families?

-

A: System families are predefined families that are built into Revit, such as walls, floors, roofs, ceilings, etc. You cannot load or unload system families, but you can modify their types and properties. Loadable families are user-defined families that are created outside of Revit, such as doors, windows, furniture, etc. You can load or unload loadable families into your project as needed.

-

Q: How can I create my own custom family?

-

A: You can create your own custom family by using the Family Editor in Revit. The Family Editor allows you to create a family template, define parameters, sketch geometry, apply materials, add constraints, and test behavior. You can also use existing families as a base for creating new ones.

-

Q: How can I share my custom family with others?

-

A: You can share your custom family with others by saving it as a .rfa file and sending it via email or cloud storage. You can also upload it to online platforms such as Autodesk Seek or BIMobject, where other users can search and download it.

-

Q: How can I update my existing family to a newer version of Revit?

-

A: You can update your existing family to a newer version of Revit by opening it in the newer version and saving it. Revit will automatically upgrade the family to the current version. However, you may need to check and adjust some parameters or features that may have changed in the newer version.

-

Q: How can I find more content libraries for Revit?

-

A: You can find more content libraries for Revit by visiting the Autodesk website or other online sources such as BIMobject, NBS National BIM Library, ARCAT, etc. You can also search for specific keywords or categories on Google or Bing.

-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Driver Mini Digital Camera Regal Jdc5.md b/spaces/contluForse/HuggingGPT/assets/Driver Mini Digital Camera Regal Jdc5.md deleted file mode 100644 index 379efddf9406b630edc2ca2f4fe0af236e279e5d..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Driver Mini Digital Camera Regal Jdc5.md +++ /dev/null @@ -1,6 +0,0 @@ -

Driver Mini Digital Camera Regal Jdc5


Download File: https://ssurll.com/2uzy4k



-
-Jazz JDC5 Digital Camera - Questions answered and issues fixed... I have a Regal JDC camera and have lost the driver CD. I downloaded a replacement over my network, but I get error messages while installing it. Should I run it online, and is there another way to load the CD?
-
-
-

diff --git a/spaces/contluForse/HuggingGPT/assets/Episode 1.25 Full Movie Free Download Experience the Breathtaking Story of Survival and Discovery.md b/spaces/contluForse/HuggingGPT/assets/Episode 1.25 Full Movie Free Download Experience the Breathtaking Story of Survival and Discovery.md deleted file mode 100644 index 4e6556d2c2e9bfe3e2eefbd50e62e59b7fd90234..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Episode 1.25 Full Movie Free Download Experience the Breathtaking Story of Survival and Discovery.md +++ /dev/null @@ -1,13 +0,0 @@ -
-

Freeware programs can be downloaded and used free of charge and without any time limitations. Freeware products can be used free of charge for both personal and professional (commercial) use.

-

This license is commonly used for video games, and it allows users to download and play the game for free. Basically, a product is offered Free to Play (Freemium), and the user can decide whether to pay (Premium) for additional features, services, or virtual or physical goods that expand the functionality of the game. In some cases, ads may be shown to the users.

-

Episode 1.25 full movie free download


DOWNLOAD ✑ ✑ ✑ https://ssurll.com/2uzvfR



-

On July 31, 2017, patch 1.13 was released. It adds a new hunt for Melusine, which is available after chapter 8 at the Meldacio Hunter HQ. A new trophy, "Love Turned Tragic", can be earned for completing the quest. Cross chains, originally seen in Episode Duscae, have been implemented into the game and a tutorial for them appears during the Melusine hunt. The player can earn the "Seize the Moment" trophy for performing 10 full-hit cross chains. The Magitek Exosuit attire was also added. It was originally planned to be a part of the Booster Pack+ DLC, but was delayed and distributed for free instead.

-

Patch 1.27 was released on December 13, 2018. It adds the Adventurer from Another World collaboration quest with Final Fantasy XIV, along with new attire and weapons. The Russian voice pack is added as a free download.

-

Bendix played the lead in Rod Serling's "The Time Element" (1958), a time-travel adventure episode about a man who travels back to 1941 and unsuccessfully tries to warn everyone in Honolulu about the impending attack on Pearl Harbor; the program's success opened the doors for Serling's later series The Twilight Zone. Bendix also appeared on The Ford Show, Starring Tennessee Ernie Ford (also 1958). He returned for a second appearance on October 1, 1959, the fourth-season premiere of the series, in which he and Tennessee Ernie performed a comedy skit about a safari.[7]

-

In this bonus episode, Gil Kidron and Rutger Vos graciously invite me on to their long-running show Pod Academy. This show is dedicated to applying a critical intellect to popular media, especially movies or TV series. We discuss the 2014 movie Noah, starring Russell Crowe, Anthony Hopkins, Jennifer Connelly, Emma Watson and Ray Winstone, doing what Ray Winstone always does: being himself.

-

The Jews have a placid existence under Persian rule, and create Judaism. They reconstruct their religion, one now without kings and prophets. From now on, the Law is all. I discuss the last of the books of the Tanakh: the romances of Esther and Judith, the hateful but mercifully brief prophet Obadiah, and the funniest book in the canon, Jonah. Daniel gets his chance in a later episode.

-

-

To see which subtitles are available for a certain TV episode or movie, navigate to its Plex page. If adding subtitles has been successful, you'll see all of the languages listed (expand the drop-down menu to see the full list).

-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Ergosoft Texprint 14 Crack Junki.md b/spaces/contluForse/HuggingGPT/assets/Ergosoft Texprint 14 Crack Junki.md deleted file mode 100644 index 0f9470853654904b835c6552d3e8cceb961d42d3..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Ergosoft Texprint 14 Crack Junki.md +++ /dev/null @@ -1,6 +0,0 @@ -

Ergosoft Texprint 14 Crack Junki


Download · https://ssurll.com/2uzy5n



-
-Ergosoft Texprint 14 Crack Junki. Talk info: a talk has been started. lesdaupropam · lesdaupropam, 10 months ago ...
-
-
-

diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/data/tf_preprocessing.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/data/tf_preprocessing.py deleted file mode 100644 index ee3adeaed6e06bcdec68e312e335b6e2fa31faec..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/data/tf_preprocessing.py +++ /dev/null @@ -1,234 +0,0 @@ -""" Tensorflow Preprocessing Adapter - -Allows use of Tensorflow preprocessing pipeline in PyTorch Transform - -Copyright of original Tensorflow code below. - -Hacked together by / Copyright 2020 Ross Wightman -""" -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import tensorflow as tf -import numpy as np - -IMAGE_SIZE = 224 -CROP_PADDING = 32 - - -def distorted_bounding_box_crop(image_bytes, - bbox, - min_object_covered=0.1, - aspect_ratio_range=(0.75, 1.33), - area_range=(0.05, 1.0), - max_attempts=100, - scope=None): - """Generates cropped_image using one of the bboxes randomly distorted. - - See `tf.image.sample_distorted_bounding_box` for more documentation. - - Args: - image_bytes: `Tensor` of binary image data. - bbox: `Tensor` of bounding boxes arranged `[1, num_boxes, coords]` - where each coordinate is [0, 1) and the coordinates are arranged - as `[ymin, xmin, ymax, xmax]`. If num_boxes is 0 then use the whole - image. - min_object_covered: An optional `float`. Defaults to `0.1`. The cropped - area of the image must contain at least this fraction of any bounding - box supplied. - aspect_ratio_range: An optional list of `float`s. The cropped area of the - image must have an aspect ratio = width / height within this range. - area_range: An optional list of `float`s. The cropped area of the image - must contain a fraction of the supplied image within in this range. - max_attempts: An optional `int`. Number of attempts at generating a cropped - region of the image of the specified constraints. After `max_attempts` - failures, return the entire image. - scope: Optional `str` for name scope. - Returns: - cropped image `Tensor` - """ - with tf.name_scope(scope, 'distorted_bounding_box_crop', [image_bytes, bbox]): - shape = tf.image.extract_jpeg_shape(image_bytes) - sample_distorted_bounding_box = tf.image.sample_distorted_bounding_box( - shape, - bounding_boxes=bbox, - min_object_covered=min_object_covered, - aspect_ratio_range=aspect_ratio_range, - area_range=area_range, - max_attempts=max_attempts, - use_image_if_no_bounding_boxes=True) - bbox_begin, bbox_size, _ = sample_distorted_bounding_box - - # Crop the image to the specified bounding box. 
- offset_y, offset_x, _ = tf.unstack(bbox_begin) - target_height, target_width, _ = tf.unstack(bbox_size) - crop_window = tf.stack([offset_y, offset_x, target_height, target_width]) - image = tf.image.decode_and_crop_jpeg(image_bytes, crop_window, channels=3) - - return image - - -def _at_least_x_are_equal(a, b, x): - """At least `x` of `a` and `b` `Tensors` are equal.""" - match = tf.equal(a, b) - match = tf.cast(match, tf.int32) - return tf.greater_equal(tf.reduce_sum(match), x) - - -def _decode_and_random_crop(image_bytes, image_size, resize_method): - """Make a random crop of image_size.""" - bbox = tf.constant([0.0, 0.0, 1.0, 1.0], dtype=tf.float32, shape=[1, 1, 4]) - image = distorted_bounding_box_crop( - image_bytes, - bbox, - min_object_covered=0.1, - aspect_ratio_range=(3. / 4, 4. / 3.), - area_range=(0.08, 1.0), - max_attempts=10, - scope=None) - original_shape = tf.image.extract_jpeg_shape(image_bytes) - bad = _at_least_x_are_equal(original_shape, tf.shape(image), 3) - - image = tf.cond( - bad, - lambda: _decode_and_center_crop(image_bytes, image_size), - lambda: tf.image.resize([image], [image_size, image_size], resize_method)[0]) - - return image - - -def _decode_and_center_crop(image_bytes, image_size, resize_method): - """Crops to center of image with padding then scales image_size.""" - shape = tf.image.extract_jpeg_shape(image_bytes) - image_height = shape[0] - image_width = shape[1] - - padded_center_crop_size = tf.cast( - ((image_size / (image_size + CROP_PADDING)) * - tf.cast(tf.minimum(image_height, image_width), tf.float32)), - tf.int32) - - offset_height = ((image_height - padded_center_crop_size) + 1) // 2 - offset_width = ((image_width - padded_center_crop_size) + 1) // 2 - crop_window = tf.stack([offset_height, offset_width, - padded_center_crop_size, padded_center_crop_size]) - image = tf.image.decode_and_crop_jpeg(image_bytes, crop_window, channels=3) - image = tf.image.resize([image], [image_size, image_size], resize_method)[0] - - return image - - -def _flip(image): - """Random horizontal image flip.""" - image = tf.image.random_flip_left_right(image) - return image - - -def preprocess_for_train(image_bytes, use_bfloat16, image_size=IMAGE_SIZE, interpolation='bicubic'): - """Preprocesses the given image for evaluation. - - Args: - image_bytes: `Tensor` representing an image binary of arbitrary size. - use_bfloat16: `bool` for whether to use bfloat16. - image_size: image size. - interpolation: image interpolation method - - Returns: - A preprocessed image `Tensor`. - """ - resize_method = tf.image.ResizeMethod.BICUBIC if interpolation == 'bicubic' else tf.image.ResizeMethod.BILINEAR - image = _decode_and_random_crop(image_bytes, image_size, resize_method) - image = _flip(image) - image = tf.reshape(image, [image_size, image_size, 3]) - image = tf.image.convert_image_dtype( - image, dtype=tf.bfloat16 if use_bfloat16 else tf.float32) - return image - - -def preprocess_for_eval(image_bytes, use_bfloat16, image_size=IMAGE_SIZE, interpolation='bicubic'): - """Preprocesses the given image for evaluation. - - Args: - image_bytes: `Tensor` representing an image binary of arbitrary size. - use_bfloat16: `bool` for whether to use bfloat16. - image_size: image size. - interpolation: image interpolation method - - Returns: - A preprocessed image `Tensor`. 
- """ - resize_method = tf.image.ResizeMethod.BICUBIC if interpolation == 'bicubic' else tf.image.ResizeMethod.BILINEAR - image = _decode_and_center_crop(image_bytes, image_size, resize_method) - image = tf.reshape(image, [image_size, image_size, 3]) - image = tf.image.convert_image_dtype( - image, dtype=tf.bfloat16 if use_bfloat16 else tf.float32) - return image - - -def preprocess_image(image_bytes, - is_training=False, - use_bfloat16=False, - image_size=IMAGE_SIZE, - interpolation='bicubic'): - """Preprocesses the given image. - - Args: - image_bytes: `Tensor` representing an image binary of arbitrary size. - is_training: `bool` for whether the preprocessing is for training. - use_bfloat16: `bool` for whether to use bfloat16. - image_size: image size. - interpolation: image interpolation method - - Returns: - A preprocessed image `Tensor` with value range of [0, 255]. - """ - if is_training: - return preprocess_for_train(image_bytes, use_bfloat16, image_size, interpolation) - else: - return preprocess_for_eval(image_bytes, use_bfloat16, image_size, interpolation) - - -class TfPreprocessTransform: - - def __init__(self, is_training=False, size=224, interpolation='bicubic'): - self.is_training = is_training - self.size = size[0] if isinstance(size, tuple) else size - self.interpolation = interpolation - self._image_bytes = None - self.process_image = self._build_tf_graph() - self.sess = None - - def _build_tf_graph(self): - with tf.device('/cpu:0'): - self._image_bytes = tf.placeholder( - shape=[], - dtype=tf.string, - ) - img = preprocess_image( - self._image_bytes, self.is_training, False, self.size, self.interpolation) - return img - - def __call__(self, image_bytes): - if self.sess is None: - self.sess = tf.Session() - img = self.sess.run(self.process_image, feed_dict={self._image_bytes: image_bytes}) - img = img.round().clip(0, 255).astype(np.uint8) - if img.ndim < 3: - img = np.expand_dims(img, axis=-1) - img = np.rollaxis(img, 2) # HWC to CHW - return img diff --git a/spaces/cscan/CodeFormer/CodeFormer/basicsr/losses/__init__.py b/spaces/cscan/CodeFormer/CodeFormer/basicsr/losses/__init__.py deleted file mode 100644 index 2b184e74c861e6fca0c548692a9a949a6100b0aa..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/basicsr/losses/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -from copy import deepcopy - -from basicsr.utils import get_root_logger -from basicsr.utils.registry import LOSS_REGISTRY -from .losses import (CharbonnierLoss, GANLoss, L1Loss, MSELoss, PerceptualLoss, WeightedTVLoss, g_path_regularize, - gradient_penalty_loss, r1_penalty) - -__all__ = [ - 'L1Loss', 'MSELoss', 'CharbonnierLoss', 'WeightedTVLoss', 'PerceptualLoss', 'GANLoss', 'gradient_penalty_loss', - 'r1_penalty', 'g_path_regularize' -] - - -def build_loss(opt): - """Build loss from options. - - Args: - opt (dict): Configuration. It must constain: - type (str): Model type. 
- """ - opt = deepcopy(opt) - loss_type = opt.pop('type') - loss = LOSS_REGISTRY.get(loss_type)(**opt) - logger = get_root_logger() - logger.info(f'Loss [{loss.__class__.__name__}] is created.') - return loss diff --git a/spaces/cvlab/zero123-live/ldm/data/inpainting/__init__.py b/spaces/cvlab/zero123-live/ldm/data/inpainting/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/daarumadx/bot/src/transform/opencv/bodypart/inferrer.py b/spaces/daarumadx/bot/src/transform/opencv/bodypart/inferrer.py deleted file mode 100644 index 96276aabad8a15a59b309bf35c4cac27cbd4395e..0000000000000000000000000000000000000000 --- a/spaces/daarumadx/bot/src/transform/opencv/bodypart/inferrer.py +++ /dev/null @@ -1,61 +0,0 @@ -"""Inference Body part functions.""" -import random - -from transform.opencv.bodypart import BodyPart, BoundingBox, Dimension, Center - - -def infer_nip(aur_list): - """ - Infer nipples. - - :param aur_list: aur list) - :return: nip list - """ - nip_list = [] - - for aur in aur_list: - # Nip rules: - # - circle (w == h) - # - min dim: 5 - # - bigger if aur is bigger - nip_dim = int(5 + aur.w * random.uniform(0.03, 0.09)) - - # center: - x = aur.x - y = aur.y - - # Calculate Bounding Box: - xmax, xmin, ymax, ymin = BoundingBox.calculate_bounding_box(nip_dim, nip_dim, x, y) - - BodyPart.add_body_part_to_list("nip", BoundingBox(xmin, ymin, xmax, ymax), Center(x, y), - Dimension(nip_dim, nip_dim), nip_list) - - return nip_list - - -def infer_hair(vag_list, enable): - """ - Infer vaginal hair. - - :param vag_list: vag list - :param enable: Enable or disable hair generation - :return: hair list - """ - hair_list = [] - - if enable: - for vag in vag_list: - # Hair rules: - hair_w = vag.w * random.uniform(0.4, 1.5) - hair_h = vag.h * random.uniform(0.4, 1.5) - - # center: - x = vag.x - y = vag.y - (hair_h / 2) - (vag.h / 2) - - xmax, xmin, ymax, ymin = BoundingBox.calculate_bounding_box(hair_h, hair_w, x, y) - - BodyPart.add_body_part_to_list("hair", BoundingBox(xmin, ymin, xmax, ymax), Center(x, y), - Dimension(hair_w, hair_h), hair_list) - - return hair_list diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/losses.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/losses.py deleted file mode 100644 index 87aeaa107af4d53f5a6132b3739d5cafdcded7fc..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/losses.py +++ /dev/null @@ -1,42 +0,0 @@ -import torch -from torch import nn - - -def get_loss(name): - if name == "cosface": - return CosFace() - elif name == "arcface": - return ArcFace() - else: - raise ValueError() - - -class CosFace(nn.Module): - def __init__(self, s=64.0, m=0.40): - super(CosFace, self).__init__() - self.s = s - self.m = m - - def forward(self, cosine, label): - index = torch.where(label != -1)[0] - m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device) - m_hot.scatter_(1, label[index, None], self.m) - cosine[index] -= m_hot - ret = cosine * self.s - return ret - - -class ArcFace(nn.Module): - def __init__(self, s=64.0, m=0.5): - super(ArcFace, self).__init__() - self.s = s - self.m = m - - def forward(self, cosine: torch.Tensor, label): - index = torch.where(label != -1)[0] - m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device) - m_hot.scatter_(1, label[index, 
None], self.m) - cosine.acos_() - cosine[index] += m_hot - cosine.cos_().mul_(self.s) - return cosine diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-9da94804.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-9da94804.css deleted file mode 100644 index 79d901421a55ea578fdaf2c50c84e8fafcea8c41..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-9da94804.css +++ /dev/null @@ -1 +0,0 @@ -div.svelte-1gww5xe{display:flex;position:absolute;justify-content:center;align-items:center;border-radius:var(--radius-sm);background-color:#000c;padding:var(--size-1) .4rem;color:#fff;font-size:var(--text-sm)}span.svelte-1gww5xe{display:inline-block;margin-right:var(--size-1);border-radius:var(--radius-xs);width:var(--size-3);height:var(--size-3)}.wrap.svelte-1mjxput{margin-top:var(--size-3)}.legend.svelte-1mjxput{display:flex;justify-content:center;align-items:center;color:var(--body-text-color)}.legend-item.svelte-1mjxput{display:flex;align-items:center;gap:var(--spacing-sm);margin-right:var(--size-2);margin-left:var(--size-2)}.legend-box.svelte-1mjxput{display:inline-block;border-radius:var(--radius-xs);width:var(--size-3);height:var(--size-3)}svg.svelte-1mjxput{width:var(--size-full)}.label-text.svelte-1mjxput{fill:var(--body-text-color);font-size:var(--text-sm);font-family:var(--font-mono)}.main-label.svelte-1mjxput{display:flex;justify-content:center;align-items:center;color:var(--body-text-color)}.chart.svelte-etmurc{display:flex;display:relative;justify-content:center;align-items:center;background:var(--background-fill-primary);width:var(--size-full);height:var(--size-64)} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/_commit_scheduler.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/_commit_scheduler.py deleted file mode 100644 index e190693e38e7b6840cee4340fc43555f0c8f616c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/_commit_scheduler.py +++ /dev/null @@ -1,318 +0,0 @@ -import atexit -import logging -import os -import time -from concurrent.futures import Future -from dataclasses import dataclass -from io import SEEK_END, SEEK_SET, BytesIO -from pathlib import Path -from threading import Lock, Thread -from typing import Dict, List, Optional, Union - -from .hf_api import IGNORE_GIT_FOLDER_PATTERNS, CommitInfo, CommitOperationAdd, HfApi -from .utils import filter_repo_objects - - -logger = logging.getLogger(__name__) - - -@dataclass(frozen=True) -class _FileToUpload: - """Temporary dataclass to store info about files to upload. Not meant to be used directly.""" - - local_path: Path - path_in_repo: str - size_limit: int - last_modified: float - - -class CommitScheduler: - """ - Scheduler to upload a local folder to the Hub at regular intervals (e.g. push to hub every 5 minutes). - - The scheduler is started when instantiated and run indefinitely. At the end of your script, a last commit is - triggered. Checkout the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#scheduled-uploads) - to learn more about how to use it. - - Args: - repo_id (`str`): - The id of the repo to commit to. 
- folder_path (`str` or `Path`): - Path to the local folder to upload regularly. - every (`int` or `float`, *optional*): - The number of minutes between each commit. Defaults to 5 minutes. - path_in_repo (`str`, *optional*): - Relative path of the directory in the repo, for example: `"checkpoints/"`. Defaults to the root folder - of the repository. - repo_type (`str`, *optional*): - The type of the repo to commit to. Defaults to `model`. - revision (`str`, *optional*): - The revision of the repo to commit to. Defaults to `main`. - private (`bool`, *optional*): - Whether to make the repo private. Defaults to `False`. This value is ignored if the repo already exist. - token (`str`, *optional*): - The token to use to commit to the repo. Defaults to the token saved on the machine. - allow_patterns (`List[str]` or `str`, *optional*): - If provided, only files matching at least one pattern are uploaded. - ignore_patterns (`List[str]` or `str`, *optional*): - If provided, files matching any of the patterns are not uploaded. - hf_api (`HfApi`, *optional*): - The [`HfApi`] client to use to commit to the Hub. Can be set with custom settings (user agent, token,...). - - Example: - ```py - >>> from pathlib import Path - >>> from huggingface_hub import CommitScheduler - - # Scheduler uploads every 10 minutes - >>> csv_path = Path("watched_folder/data.csv") - >>> CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path=csv_path.parent, every=10) - - >>> with csv_path.open("a") as f: - ... f.write("first line") - - # Some time later (...) - >>> with csv_path.open("a") as f: - ... f.write("second line") - ``` - """ - - def __init__( - self, - *, - repo_id: str, - folder_path: Union[str, Path], - every: Union[int, float] = 5, - path_in_repo: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - private: bool = False, - token: Optional[str] = None, - allow_patterns: Optional[Union[List[str], str]] = None, - ignore_patterns: Optional[Union[List[str], str]] = None, - hf_api: Optional["HfApi"] = None, - ) -> None: - self.api = hf_api or HfApi(token=token) - - # Folder - self.folder_path = Path(folder_path).expanduser().resolve() - self.path_in_repo = path_in_repo or "" - self.allow_patterns = allow_patterns - - if ignore_patterns is None: - ignore_patterns = [] - elif isinstance(ignore_patterns, str): - ignore_patterns = [ignore_patterns] - self.ignore_patterns = ignore_patterns + IGNORE_GIT_FOLDER_PATTERNS - - if self.folder_path.is_file(): - raise ValueError(f"'folder_path' must be a directory, not a file: '{self.folder_path}'.") - self.folder_path.mkdir(parents=True, exist_ok=True) - - # Repository - repo_url = self.api.create_repo(repo_id=repo_id, private=private, repo_type=repo_type, exist_ok=True) - self.repo_id = repo_url.repo_id - self.repo_type = repo_type - self.revision = revision - self.token = token - - # Keep track of already uploaded files - self.last_uploaded: Dict[Path, float] = {} # key is local path, value is timestamp - - # Scheduler - if not every > 0: - raise ValueError(f"'every' must be a positive integer, not '{every}'.") - self.lock = Lock() - self.every = every - - logger.info(f"Scheduled job to push '{self.folder_path}' to '{self.repo_id}' every {self.every} minutes.") - self._scheduler_thread = Thread(target=self._run_scheduler, daemon=True) - self._scheduler_thread.start() - atexit.register(self._push_to_hub) - - self.__stopped = False - - def stop(self) -> None: - """Stop the scheduler. 
- - A stopped scheduler cannot be restarted. Mostly for tests purposes. - """ - self.__stopped = True - - def _run_scheduler(self) -> None: - """Dumb thread waiting between each scheduled push to Hub.""" - while True: - self.last_future = self.trigger() - time.sleep(self.every * 60) - if self.__stopped: - break - - def trigger(self) -> Future: - """Trigger a `push_to_hub` and return a future. - - This method is automatically called every `every` minutes. You can also call it manually to trigger a commit - immediately, without waiting for the next scheduled commit. - """ - return self.api.run_as_future(self._push_to_hub) - - def _push_to_hub(self) -> Optional[CommitInfo]: - if self.__stopped: # If stopped, already scheduled commits are ignored - return None - - logger.info("(Background) scheduled commit triggered.") - try: - return self.push_to_hub() - except Exception as e: - logger.error(f"Error while pushing to Hub: {e}") # Depending on the setup, error might be silenced - raise - - def push_to_hub(self) -> Optional[CommitInfo]: - """ - Push folder to the Hub and return the commit info. - - - - This method is not meant to be called directly. It is run in the background by the scheduler, respecting a - queue mechanism to avoid concurrent commits. Making a direct call to the method might lead to concurrency - issues. - - - - The default behavior of `push_to_hub` is to assume an append-only folder. It lists all files in the folder and - uploads only changed files. If no changes are found, the method returns without committing anything. If you want - to change this behavior, you can inherit from [`CommitScheduler`] and override this method. This can be useful - for example to compress data together in a single file before committing. For more details and examples, check - out our [integration guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads). 
- """ - # Check files to upload (with lock) - with self.lock: - logger.debug("Listing files to upload for scheduled commit.") - - # List files from folder (taken from `_prepare_upload_folder_additions`) - relpath_to_abspath = { - path.relative_to(self.folder_path).as_posix(): path - for path in sorted(self.folder_path.glob("**/*")) # sorted to be deterministic - if path.is_file() - } - prefix = f"{self.path_in_repo.strip('/')}/" if self.path_in_repo else "" - - # Filter with pattern + filter out unchanged files + retrieve current file size - files_to_upload: List[_FileToUpload] = [] - for relpath in filter_repo_objects( - relpath_to_abspath.keys(), allow_patterns=self.allow_patterns, ignore_patterns=self.ignore_patterns - ): - local_path = relpath_to_abspath[relpath] - stat = local_path.stat() - if self.last_uploaded.get(local_path) is None or self.last_uploaded[local_path] != stat.st_mtime: - files_to_upload.append( - _FileToUpload( - local_path=local_path, - path_in_repo=prefix + relpath, - size_limit=stat.st_size, - last_modified=stat.st_mtime, - ) - ) - - # Return if nothing to upload - if len(files_to_upload) == 0: - logger.debug("Dropping schedule commit: no changed file to upload.") - return None - - # Convert `_FileToUpload` as `CommitOperationAdd` (=> compute file shas + limit to file size) - logger.debug("Removing unchanged files since previous scheduled commit.") - add_operations = [ - CommitOperationAdd( - # Cap the file to its current size, even if the user append data to it while a scheduled commit is happening - path_or_fileobj=PartialFileIO(file_to_upload.local_path, size_limit=file_to_upload.size_limit), - path_in_repo=file_to_upload.path_in_repo, - ) - for file_to_upload in files_to_upload - ] - - # Upload files (append mode expected - no need for lock) - logger.debug("Uploading files for scheduled commit.") - commit_info = self.api.create_commit( - repo_id=self.repo_id, - repo_type=self.repo_type, - operations=add_operations, - commit_message="Scheduled Commit", - revision=self.revision, - ) - - # Successful commit: keep track of the latest "last_modified" for each file - for file in files_to_upload: - self.last_uploaded[file.local_path] = file.last_modified - return commit_info - - -class PartialFileIO(BytesIO): - """A file-like object that reads only the first part of a file. - - Useful to upload a file to the Hub when the user might still be appending data to it. Only the first part of the - file is uploaded (i.e. the part that was available when the filesystem was first scanned). - - In practice, only used internally by the CommitScheduler to regularly push a folder to the Hub with minimal - disturbance for the user. The object is passed to `CommitOperationAdd`. - - Only supports `read`, `tell` and `seek` methods. - - Args: - file_path (`str` or `Path`): - Path to the file to read. - size_limit (`int`): - The maximum number of bytes to read from the file. If the file is larger than this, only the first part - will be read (and uploaded). 
- """ - - def __init__(self, file_path: Union[str, Path], size_limit: int) -> None: - self._file_path = Path(file_path) - self._file = self._file_path.open("rb") - self._size_limit = min(size_limit, os.fstat(self._file.fileno()).st_size) - - def __del__(self) -> None: - self._file.close() - return super().__del__() - - def __repr__(self) -> str: - return f"" - - def __len__(self) -> int: - return self._size_limit - - def __getattribute__(self, name: str): - if name.startswith("_") or name in ("read", "tell", "seek"): # only 3 public methods supported - return super().__getattribute__(name) - raise NotImplementedError(f"PartialFileIO does not support '{name}'.") - - def tell(self) -> int: - """Return the current file position.""" - return self._file.tell() - - def seek(self, __offset: int, __whence: int = SEEK_SET) -> int: - """Change the stream position to the given offset. - - Behavior is the same as a regular file, except that the position is capped to the size limit. - """ - if __whence == SEEK_END: - # SEEK_END => set from the truncated end - __offset = len(self) + __offset - __whence = SEEK_SET - - pos = self._file.seek(__offset, __whence) - if pos > self._size_limit: - return self._file.seek(self._size_limit) - return pos - - def read(self, __size: Optional[int] = -1) -> bytes: - """Read at most `__size` bytes from the file. - - Behavior is the same as a regular file, except that it is capped to the size limit. - """ - current = self._file.tell() - if __size is None or __size < 0: - # Read until file limit - truncated_size = self._size_limit - current - else: - # Read until file limit or __size - truncated_size = min(__size, self._size_limit - current) - return self._file.read(truncated_size) diff --git a/spaces/declare-lab/tango/diffusers/examples/text_to_image/train_text_to_image_lora.py b/spaces/declare-lab/tango/diffusers/examples/text_to_image/train_text_to_image_lora.py deleted file mode 100644 index c85b339d5b7ac07c7191c66888465c75c2c3a3bb..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/text_to_image/train_text_to_image_lora.py +++ /dev/null @@ -1,861 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Fine-tuning script for Stable Diffusion for text2image with support for LoRA.""" - -import argparse -import logging -import math -import os -import random -from pathlib import Path - -import datasets -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from datasets import load_dataset -from huggingface_hub import create_repo, upload_folder -from packaging import version -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - -import diffusers -from diffusers import AutoencoderKL, DDPMScheduler, DiffusionPipeline, UNet2DConditionModel -from diffusers.loaders import AttnProcsLayers -from diffusers.models.attention_processor import LoRAAttnProcessor -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.15.0.dev0") - -logger = get_logger(__name__, log_level="INFO") - - -def save_model_card(repo_id: str, images=None, base_model=str, dataset_name=str, repo_folder=None): - img_str = "" - for i, image in enumerate(images): - image.save(os.path.join(repo_folder, f"image_{i}.png")) - img_str += f"![img_{i}](./image_{i}.png)\n" - - yaml = f""" ---- -license: creativeml-openrail-m -base_model: {base_model} -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- diffusers -- lora -inference: true ---- - """ - model_card = f""" -# LoRA text2image fine-tuning - {repo_id} -These are LoRA adaption weights for {base_model}. The weights were fine-tuned on the {dataset_name} dataset. You can find some example images in the following. \n -{img_str} -""" - with open(os.path.join(repo_folder, "README.md"), "w") as f: - f.write(yaml + model_card) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help=( - "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," - " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," - " or to a folder containing files that 🤗 Datasets can understand." - ), - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The config of the Dataset, leave as None if there's only one config.", - ) - parser.add_argument( - "--train_data_dir", - type=str, - default=None, - help=( - "A folder containing the training data. Folder contents must follow the structure described in" - " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" - " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." 
- ), - ) - parser.add_argument( - "--image_column", type=str, default="image", help="The column of the dataset containing an image." - ) - parser.add_argument( - "--caption_column", - type=str, - default="text", - help="The column of the dataset containing a caption or a list of captions.", - ) - parser.add_argument( - "--validation_prompt", type=str, default=None, help="A prompt that is sampled during training for inference." - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=1, - help=( - "Run fine-tuning validation every X epochs. The validation process consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`." - ), - ) - parser.add_argument( - "--max_train_samples", - type=int, - default=None, - help=( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="sd-model-finetuned-lora", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--random_flip", - action="store_true", - help="whether to randomly flip images horizontally", - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. 
Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." 
- " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more docs" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - parser.add_argument("--noise_offset", type=float, default=0, help="The scale of noise offset.") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - # Sanity checks - if args.dataset_name is None and args.train_data_dir is None: - raise ValueError("Need either a dataset name or a training folder.") - - return args - - -DATASET_NAME_MAPPING = { - "lambdalabs/pokemon-blip-captions": ("image", "text"), -} - - -def main(): - args = parse_args() - logging_dir = os.path.join(args.output_dir, args.logging_dir) - - accelerator_project_config = ProjectConfiguration(total_limit=args.checkpoints_total_limit) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - logging_dir=logging_dir, - project_config=accelerator_project_config, - ) - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - # Load scheduler, tokenizer and models. 
- noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - tokenizer = CLIPTokenizer.from_pretrained( - args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision - ) - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - # freeze parameters of models to save more memory - unet.requires_grad_(False) - vae.requires_grad_(False) - - text_encoder.requires_grad_(False) - - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move unet, vae and text_encoder to device and cast to weight_dtype - unet.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - text_encoder.to(accelerator.device, dtype=weight_dtype) - - # now we will add new LoRA weights to the attention layers - # It's important to realize here how many attention weights will be added and of which sizes - # The sizes of the attention layers consist only of two different variables: - # 1) - the "hidden_size", which is increased according to `unet.config.block_out_channels`. - # 2) - the "cross attention size", which is set to `unet.config.cross_attention_dim`. - - # Let's first see how many attention processors we will have to set. - # For Stable Diffusion, it should be equal to: - # - down blocks (2x attention layers) * (2x transformer layers) * (3x down blocks) = 12 - # - mid blocks (2x attention layers) * (1x transformer layers) * (1x mid blocks) = 2 - # - up blocks (2x attention layers) * (3x transformer layers) * (3x down blocks) = 18 - # => 32 layers - - # Set correct lora layers - lora_attn_procs = {} - for name in unet.attn_processors.keys(): - cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim - if name.startswith("mid_block"): - hidden_size = unet.config.block_out_channels[-1] - elif name.startswith("up_blocks"): - block_id = int(name[len("up_blocks.")]) - hidden_size = list(reversed(unet.config.block_out_channels))[block_id] - elif name.startswith("down_blocks"): - block_id = int(name[len("down_blocks.")]) - hidden_size = unet.config.block_out_channels[block_id] - - lora_attn_procs[name] = LoRAAttnProcessor(hidden_size=hidden_size, cross_attention_dim=cross_attention_dim) - - unet.set_attn_processor(lora_attn_procs) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. 
Make sure it is installed correctly") - - lora_layers = AttnProcsLayers(unet.attn_processors) - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`" - ) - - optimizer_cls = bnb.optim.AdamW8bit - else: - optimizer_cls = torch.optim.AdamW - - optimizer = optimizer_cls( - lora_layers.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Get the datasets: you can either provide your own training and evaluation files (see below) - # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). - - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - # Downloading and loading a dataset from the hub. - dataset = load_dataset( - args.dataset_name, - args.dataset_config_name, - cache_dir=args.cache_dir, - ) - else: - data_files = {} - if args.train_data_dir is not None: - data_files["train"] = os.path.join(args.train_data_dir, "**") - dataset = load_dataset( - "imagefolder", - data_files=data_files, - cache_dir=args.cache_dir, - ) - # See more about loading custom images at - # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder - - # Preprocessing the datasets. - # We need to tokenize inputs and targets. - column_names = dataset["train"].column_names - - # 6. Get the column names for input/target. - dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None) - if args.image_column is None: - image_column = dataset_columns[0] if dataset_columns is not None else column_names[0] - else: - image_column = args.image_column - if image_column not in column_names: - raise ValueError( - f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}" - ) - if args.caption_column is None: - caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1] - else: - caption_column = args.caption_column - if caption_column not in column_names: - raise ValueError( - f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}" - ) - - # Preprocessing the datasets. - # We need to tokenize input captions and transform the images. - def tokenize_captions(examples, is_train=True): - captions = [] - for caption in examples[caption_column]: - if isinstance(caption, str): - captions.append(caption) - elif isinstance(caption, (list, np.ndarray)): - # take a random caption if there are multiple - captions.append(random.choice(caption) if is_train else caption[0]) - else: - raise ValueError( - f"Caption column `{caption_column}` should contain either strings or lists of strings." - ) - inputs = tokenizer( - captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ) - return inputs.input_ids - - # Preprocessing the datasets. 
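# (Added note) tokenize_captions above pads and truncates every caption to
# tokenizer.model_max_length (77 tokens for the Stable Diffusion CLIP tokenizer), so
# "input_ids" is a (batch, 77) LongTensor; the transforms defined below produce
# 3 x resolution x resolution image tensors scaled to [-1, 1].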
- train_transforms = transforms.Compose( - [ - transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), - transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def preprocess_train(examples): - images = [image.convert("RGB") for image in examples[image_column]] - examples["pixel_values"] = [train_transforms(image) for image in images] - examples["input_ids"] = tokenize_captions(examples) - return examples - - with accelerator.main_process_first(): - if args.max_train_samples is not None: - dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) - # Set the training transforms - train_dataset = dataset["train"].with_transform(preprocess_train) - - def collate_fn(examples): - pixel_values = torch.stack([example["pixel_values"] for example in examples]) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - input_ids = torch.stack([example["input_ids"] for example in examples]) - return {"pixel_values": pixel_values, "input_ids": input_ids} - - # DataLoaders creation: - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - shuffle=True, - collate_fn=collate_fn, - batch_size=args.train_batch_size, - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - # Prepare everything with our `accelerator`. - lora_layers, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - lora_layers, optimizer, train_dataloader, lr_scheduler - ) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("text2image-fine-tune", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - train_loss = 0.0 - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - if args.noise_offset: - # https://www.crosslabs.org//blog/diffusion-with-offset-noise - noise += args.noise_offset * torch.randn( - (latents.shape[0], latents.shape[1], 1, 1), device=latents.device - ) - - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - # Predict the noise residual and compute loss - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Gather the losses across all processes for logging (if we use distributed training). 
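# (Added note) loss.repeat(train_batch_size) followed by accelerator.gather collects
# num_processes * train_batch_size copies of the per-process scalar loss, so .mean()
# below is the loss averaged over all processes; dividing by gradient_accumulation_steps
# keeps the accumulated train_loss on the scale of a single optimizer step.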
- avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean() - train_loss += avg_loss.item() / args.gradient_accumulation_steps - - # Backpropagate - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = lora_layers.parameters() - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - accelerator.log({"train_loss": train_loss}, step=global_step) - train_loss = 0.0 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - - if global_step >= args.max_train_steps: - break - - if accelerator.is_main_process: - if args.validation_prompt is not None and epoch % args.validation_epochs == 0: - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) - images = [] - for _ in range(args.num_validation_images): - images.append( - pipeline(args.validation_prompt, num_inference_steps=30, generator=generator).images[0] - ) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - - # Save the lora layers - accelerator.wait_for_everyone() - if accelerator.is_main_process: - unet = unet.to(torch.float32) - unet.save_attn_procs(args.output_dir) - - if args.push_to_hub: - save_model_card( - repo_id, - images=images, - base_model=args.pretrained_model_name_or_path, - dataset_name=args.dataset_name, - repo_folder=args.output_dir, - ) - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - # Final inference - # Load previous pipeline - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, torch_dtype=weight_dtype - ) - pipeline = pipeline.to(accelerator.device) - - # load attention processors - pipeline.unet.load_attn_procs(args.output_dir) - - # run inference - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) - images = [] - for _ in range(args.num_validation_images): - images.append(pipeline(args.validation_prompt, num_inference_steps=30, generator=generator).images[0]) - - if accelerator.is_main_process: - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) 
for img in images]) - tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "test": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/roles/__init__.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/roles/__init__.py deleted file mode 100644 index 3073bcd2cb52a8c7a10a92b1393b1568fc0b5053..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/roles/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/12 10:14 -@Author : alexanderwu -@File : __init__.py -""" diff --git a/spaces/derful/Chatgpt-academic/crazy_functions/test_project/latex/attention/parameter_attention.tex b/spaces/derful/Chatgpt-academic/crazy_functions/test_project/latex/attention/parameter_attention.tex deleted file mode 100644 index 7bc4fe452dbdbfe44ff72f0cdbd37acd5c786ce6..0000000000000000000000000000000000000000 --- a/spaces/derful/Chatgpt-academic/crazy_functions/test_project/latex/attention/parameter_attention.tex +++ /dev/null @@ -1,45 +0,0 @@ -\pagebreak -\section*{Two Feed-Forward Layers = Attention over Parameters}\label{sec:parameter_attention} - -In addition to attention layers, our model contains position-wise feed-forward networks (Section \ref{sec:ffn}), which consist of two linear transformations with a ReLU activation in between. In fact, these networks too can be seen as a form of attention. Compare the formula for such a network with the formula for a simple dot-product attention layer (biases and scaling factors omitted): - -\begin{align*} - FFN(x, W_1, W_2) = ReLU(xW_1)W_2 \\ - A(q, K, V) = Softmax(qK^T)V -\end{align*} - -Based on the similarity of these formulae, the two-layer feed-forward network can be seen as a kind of attention, where the keys and values are the rows of the trainable parameter matrices $W_1$ and $W_2$, and where we use ReLU instead of Softmax in the compatibility function. - -%the compatablity function is $compat(q, k_i) = ReLU(q \cdot k_i)$ instead of $Softmax(qK_T)_i$. - -Given this similarity, we experimented with replacing the position-wise feed-forward networks with attention layers similar to the ones we use everywhere else our model. The multi-head-attention-over-parameters sublayer is identical to the multi-head attention described in \ref{sec:multihead}, except that the "keys" and "values" inputs to each attention head are trainable model parameters, as opposed to being linear projections of a previous layer. These parameters are scaled up by a factor of $\sqrt{d_{model}}$ in order to be more similar to activations. - -In our first experiment, we replaced each position-wise feed-forward network with a multi-head-attention-over-parameters sublayer with $h_p=8$ heads, key-dimensionality $d_{pk}=64$, and value-dimensionality $d_{pv}=64$, using $n_p=1536$ key-value pairs for each attention head. The sublayer has a total of $2097152$ parameters, including the parameters in the query projection and the output projection. This matches the number of parameters in the position-wise feed-forward network that we replaced. While the theoretical amount of computation is also the same, in practice, the attention version caused the step times to be about 30\% longer. 
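As a quick sanity check on the parameter count quoted above (added here; all symbols are as defined in the text, and we assume square query and output projections of size $d_{model} \times d_{model}$):

\begin{align*}
\underbrace{2\,d_{model}^2}_{\text{query + output projections}} + \underbrace{h_p\, n_p\,(d_{pk}+d_{pv})}_{\text{key + value tables}}
 &= 2 \cdot 512^2 + 8 \cdot 1536 \cdot 128 \\
 &= 524288 + 1572864 = 2097152,
\end{align*}

which equals the $2\, d_{model}\, d_{ff} = 2 \cdot 512 \cdot 2048 = 2097152$ parameters of the position-wise feed-forward network being replaced.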
- -In our second experiment, we used $h_p=8$ heads, and $n_p=512$ key-value pairs for each attention head, again matching the total number of parameters in the base model. - -Results for the first experiment were slightly worse than for the base model, and results for the second experiment were slightly better, see Table~\ref{tab:parameter_attention}. - -\begin{table}[h] -\caption{Replacing the position-wise feed-forward networks with multihead-attention-over-parameters produces similar results to the base model. All metrics are on the English-to-German translation development set, newstest2013.} -\label{tab:parameter_attention} -\begin{center} -\vspace{-2mm} -%\scalebox{1.0}{ -\begin{tabular}{c|cccccc|cccc} -\hline\rule{0pt}{2.0ex} - & \multirow{2}{*}{$\dmodel$} & \multirow{2}{*}{$\dff$} & -\multirow{2}{*}{$h_p$} & \multirow{2}{*}{$d_{pk}$} & \multirow{2}{*}{$d_{pv}$} & - \multirow{2}{*}{$n_p$} & - PPL & BLEU & params & training\\ - & & & & & & & (dev) & (dev) & $\times10^6$ & time \\ -\hline\rule{0pt}{2.0ex} -base & 512 & 2048 & & & & & 4.92 & 25.8 & 65 & 12 hours\\ -\hline\rule{0pt}{2.0ex} -AOP$_1$ & 512 & & 8 & 64 & 64 & 1536 & 4.92& 25.5 & 65 & 16 hours\\ -AOP$_2$ & 512 & & 16 & 64 & 64 & 512 & \textbf{4.86} & \textbf{25.9} & 65 & 16 hours \\ -\hline -\end{tabular} -%} -\end{center} -\end{table} diff --git a/spaces/diacanFperku/AutoGPT/5dfly Photo Design 4132 Crake Keyrar.md b/spaces/diacanFperku/AutoGPT/5dfly Photo Design 4132 Crake Keyrar.md deleted file mode 100644 index 3895691d2eb4963fa1af05dd82ed694d61607e2d..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/5dfly Photo Design 4132 Crake Keyrar.md +++ /dev/null @@ -1,23 +0,0 @@ -

-## 5dfly Photo Design 4132 Crake Keyrar
-
-**DOWNLOAD: https://gohhs.com/2uFTC0**
-
-December 20, 2018 - 6b11cea230 . . how to design dynamic magazine layout from concept to . stories/3379828-extra-quality-acdsee-photo-manager-14-1-137-serial-free-downsoftsfree-crack
-#include "iostream"
-#include "stdio.h"
-#include "conio.h"
-#include "iomanip"
-#include "locale.h"
-#include "windows.h"
-#include <string>
-using namespace std;
-int main(int argc, char* argv[]) {
-struct student_type student; // create an instance of the structure
-struct student {
-int id;
-string name;
-double grade;
-int total_economy;
-string title;
-string article;
-string year;
-string description;
-string image;
-string category;
-string name; 8a78ff9644
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/EmpyrionGalacticSurvivalAlphav443nosurveynopasswordnodownload.md b/spaces/diacanFperku/AutoGPT/EmpyrionGalacticSurvivalAlphav443nosurveynopasswordnodownload.md deleted file mode 100644 index 093d37cb8333397346e3909e3f0a2d395f252b76..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/EmpyrionGalacticSurvivalAlphav443nosurveynopasswordnodownload.md +++ /dev/null @@ -1,6 +0,0 @@ -

-## EmpyrionGalacticSurvivalAlphav443nosurveynopasswordnodownload
-
-**Download: https://gohhs.com/2uFUOm**
-
- d5da3c52bf
-

diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/utilities/minicorpus.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/utilities/minicorpus.py deleted file mode 100644 index 0b11eb7836f5ede25f0eb6c037177917f032ccf3..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/utilities/minicorpus.py +++ /dev/null @@ -1,66 +0,0 @@ -import os -import random - -from colbert.utils.utils import create_directory - -from colbert.data.collection import Collection -from colbert.data.queries import Queries -from colbert.data.ranking import Ranking - - -def sample_minicorpus(name, factor, topk=30, maxdev=3000): - """ - Factor: - * nano=1 - * micro=10 - * mini=100 - * small=100 with topk=100 - * medium=150 with topk=300 - """ - - random.seed(12345) - - # Load collection - collection = Collection(path='/dfs/scratch0/okhattab/OpenQA/collection.tsv') - - # Load train and dev queries - qas_train = Queries(path='/dfs/scratch0/okhattab/OpenQA/NQ/train/qas.json').qas() - qas_dev = Queries(path='/dfs/scratch0/okhattab/OpenQA/NQ/dev/qas.json').qas() - - # Load train and dev C3 rankings - ranking_train = Ranking(path='/dfs/scratch0/okhattab/OpenQA/NQ/train/rankings/C3.tsv.annotated').todict() - ranking_dev = Ranking(path='/dfs/scratch0/okhattab/OpenQA/NQ/dev/rankings/C3.tsv.annotated').todict() - - # Sample NT and ND queries from each, keep only the top-k passages for those - sample_train = random.sample(list(qas_train.keys()), min(len(qas_train.keys()), 300*factor)) - sample_dev = random.sample(list(qas_dev.keys()), min(len(qas_dev.keys()), maxdev, 30*factor)) - - train_pids = [pid for qid in sample_train for qpids in ranking_train[qid][:topk] for pid in qpids] - dev_pids = [pid for qid in sample_dev for qpids in ranking_dev[qid][:topk] for pid in qpids] - - sample_pids = sorted(list(set(train_pids + dev_pids))) - print(f'len(sample_pids) = {len(sample_pids)}') - - # Save the new query sets: train and dev - ROOT = f'/future/u/okhattab/root/unit/data/NQ-{name}' - - create_directory(os.path.join(ROOT, 'train')) - create_directory(os.path.join(ROOT, 'dev')) - - new_train = Queries(data={qid: qas_train[qid] for qid in sample_train}) - new_train.save(os.path.join(ROOT, 'train/questions.tsv')) - new_train.save_qas(os.path.join(ROOT, 'train/qas.json')) - - new_dev = Queries(data={qid: qas_dev[qid] for qid in sample_dev}) - new_dev.save(os.path.join(ROOT, 'dev/questions.tsv')) - new_dev.save_qas(os.path.join(ROOT, 'dev/qas.json')) - - # Save the new collection - print(f"Saving to {os.path.join(ROOT, 'collection.tsv')}") - Collection(data=[collection[pid] for pid in sample_pids]).save(os.path.join(ROOT, 'collection.tsv')) - - print('#> Done!') - - -if __name__ == '__main__': - sample_minicorpus('medium', 150, topk=300) diff --git a/spaces/digitalxingtong/Jiaran-Bert-VITS2/monotonic_align/__init__.py b/spaces/digitalxingtong/Jiaran-Bert-VITS2/monotonic_align/__init__.py deleted file mode 100644 index a323673bb16070d6d0fffddb939b657d0915ff1b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaran-Bert-VITS2/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) \ No newline at end of file diff --git a/spaces/dineshreddy/WALT/mmdet/datasets/samplers/group_sampler.py b/spaces/dineshreddy/WALT/mmdet/datasets/samplers/group_sampler.py deleted file mode 100644 index f88cf3439446a2eb7d8656388ddbe93196315f5b..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/datasets/samplers/group_sampler.py +++ /dev/null @@ -1,148 +0,0 @@ -from __future__ import division -import math - -import numpy as np -import torch -from mmcv.runner import get_dist_info -from torch.utils.data import Sampler - - -class GroupSampler(Sampler): - - def __init__(self, dataset, samples_per_gpu=1): - assert hasattr(dataset, 'flag') - self.dataset = dataset - self.samples_per_gpu = samples_per_gpu - self.flag = dataset.flag.astype(np.int64) - self.group_sizes = np.bincount(self.flag) - self.num_samples = 0 - for i, size in enumerate(self.group_sizes): - self.num_samples += int(np.ceil( - size / self.samples_per_gpu)) * self.samples_per_gpu - - def __iter__(self): - indices = [] - for i, size in enumerate(self.group_sizes): - if size == 0: - continue - indice = np.where(self.flag == i)[0] - assert len(indice) == size - np.random.shuffle(indice) - num_extra = int(np.ceil(size / self.samples_per_gpu) - ) * self.samples_per_gpu - len(indice) - indice = np.concatenate( - [indice, np.random.choice(indice, num_extra)]) - indices.append(indice) - indices = np.concatenate(indices) - indices = [ - indices[i * self.samples_per_gpu:(i + 1) * self.samples_per_gpu] - for i in np.random.permutation( - range(len(indices) // self.samples_per_gpu)) - ] - indices = np.concatenate(indices) - indices = indices.astype(np.int64).tolist() - assert len(indices) == self.num_samples - return iter(indices) - - def __len__(self): - return self.num_samples - - -class DistributedGroupSampler(Sampler): - """Sampler that restricts data loading to a subset of the dataset. - - It is especially useful in conjunction with - :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each - process can pass a DistributedSampler instance as a DataLoader sampler, - and load a subset of the original dataset that is exclusive to it. - - .. note:: - Dataset is assumed to be of constant size. - - Arguments: - dataset: Dataset used for sampling. - num_replicas (optional): Number of processes participating in - distributed training. - rank (optional): Rank of the current process within num_replicas. - seed (int, optional): random seed used to shuffle the sampler if - ``shuffle=True``. This number should be identical across all - processes in the distributed group. Default: 0. 
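        Example (an illustrative sketch, not part of the original file; it assumes the
        dataset exposes the aspect-ratio ``flag`` attribute asserted in ``__init__`` and
        that the distributed process group is already initialized; ``world_size``,
        ``rank`` and ``max_epochs`` are placeholders):

            sampler = DistributedGroupSampler(
                dataset, samples_per_gpu=2, num_replicas=world_size, rank=rank)
            # batch_size should match samples_per_gpu so every batch stays in one group
            loader = torch.utils.data.DataLoader(
                dataset, batch_size=2, sampler=sampler, num_workers=2)
            for epoch in range(max_epochs):
                sampler.set_epoch(epoch)  # deterministic re-shuffle each epoch
                for batch in loader:
                    ...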
- """ - - def __init__(self, - dataset, - samples_per_gpu=1, - num_replicas=None, - rank=None, - seed=0): - _rank, _num_replicas = get_dist_info() - if num_replicas is None: - num_replicas = _num_replicas - if rank is None: - rank = _rank - self.dataset = dataset - self.samples_per_gpu = samples_per_gpu - self.num_replicas = num_replicas - self.rank = rank - self.epoch = 0 - self.seed = seed if seed is not None else 0 - - assert hasattr(self.dataset, 'flag') - self.flag = self.dataset.flag - self.group_sizes = np.bincount(self.flag) - - self.num_samples = 0 - for i, j in enumerate(self.group_sizes): - self.num_samples += int( - math.ceil(self.group_sizes[i] * 1.0 / self.samples_per_gpu / - self.num_replicas)) * self.samples_per_gpu - self.total_size = self.num_samples * self.num_replicas - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch + self.seed) - - indices = [] - for i, size in enumerate(self.group_sizes): - if size > 0: - indice = np.where(self.flag == i)[0] - assert len(indice) == size - # add .numpy() to avoid bug when selecting indice in parrots. - # TODO: check whether torch.randperm() can be replaced by - # numpy.random.permutation(). - indice = indice[list( - torch.randperm(int(size), generator=g).numpy())].tolist() - extra = int( - math.ceil( - size * 1.0 / self.samples_per_gpu / self.num_replicas) - ) * self.samples_per_gpu * self.num_replicas - len(indice) - # pad indice - tmp = indice.copy() - for _ in range(extra // size): - indice.extend(tmp) - indice.extend(tmp[:extra % size]) - indices.extend(indice) - - assert len(indices) == self.total_size - - indices = [ - indices[j] for i in list( - torch.randperm( - len(indices) // self.samples_per_gpu, generator=g)) - for j in range(i * self.samples_per_gpu, (i + 1) * - self.samples_per_gpu) - ] - - # subsample - offset = self.num_samples * self.rank - indices = indices[offset:offset + self.num_samples] - assert len(indices) == self.num_samples - - return iter(indices) - - def __len__(self): - return self.num_samples - - def set_epoch(self, epoch): - self.epoch = epoch diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/fcenet_r50_fpn.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/fcenet_r50_fpn.py deleted file mode 100644 index 3c2bd12b6295858895c53e5e1700df3962a8a7d5..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/fcenet_r50_fpn.py +++ /dev/null @@ -1,33 +0,0 @@ -model = dict( - type='FCENet', - backbone=dict( - type='mmdet.ResNet', - depth=50, - num_stages=4, - out_indices=(1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='BN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'), - norm_eval=False, - style='pytorch'), - neck=dict( - type='mmdet.FPN', - in_channels=[512, 1024, 2048], - out_channels=256, - add_extra_convs='on_output', - num_outs=3, - relu_before_extra_convs=True, - act_cfg=None), - bbox_head=dict( - type='FCEHead', - in_channels=256, - scales=(8, 16, 32), - fourier_degree=5, - loss=dict(type='FCELoss', num_sample=50), - postprocessor=dict( - type='FCEPostprocessor', - text_repr_type='quad', - num_reconstr_points=50, - alpha=1.2, - beta=1.0, - score_thr=0.3))) diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/css/chat.css b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/css/chat.css deleted file mode 100644 index 
b5102e9a72ca0b066b12d52ab371d8a24774ac19..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/css/chat.css +++ /dev/null @@ -1,43 +0,0 @@ -.h-\[40vh\], .wrap.svelte-byatnx.svelte-byatnx.svelte-byatnx { - height: 66.67vh -} - -.gradio-container { - margin-left: auto !important; - margin-right: auto !important; -} - -.w-screen { - width: unset -} - -div.svelte-362y77>*, div.svelte-362y77>.form>* { - flex-wrap: nowrap -} - -/* fixes the API documentation in chat mode */ -.api-docs.svelte-1iguv9h.svelte-1iguv9h.svelte-1iguv9h { - display: grid; -} - -.pending.svelte-1ed2p3z { - opacity: 1; -} - -#extensions { - padding: 0; - padding: 0; -} - -#gradio-chatbot { - height: 66.67vh; -} - -.wrap.svelte-6roggh.svelte-6roggh { - max-height: 92.5%; -} - -/* This is for the microphone button in the whisper extension */ -.sm.svelte-1ipelgc { - width: 100%; -} diff --git a/spaces/duchaba/skin_cancer_diagnose/app.py b/spaces/duchaba/skin_cancer_diagnose/app.py deleted file mode 100644 index b8e48465c3fcb6143b6f469ba894acad5ef3b437..0000000000000000000000000000000000000000 --- a/spaces/duchaba/skin_cancer_diagnose/app.py +++ /dev/null @@ -1,173 +0,0 @@ - -import fastai -import fastai.vision -import PIL -import gradio -import matplotlib -import numpy -import pandas -from fastai.vision.all import * -# -# create class -class ADA_SKIN(object): - # - # initialize the object - def __init__(self, name="Wallaby",verbose=True,*args, **kwargs): - super(ADA_SKIN, self).__init__(*args, **kwargs) - self.author = "Duc Haba" - self.name = name - if (verbose): - self._ph() - self._pp("Hello from class", str(self.__class__) + " Class: " + str(self.__class__.__name__)) - self._pp("Code name", self.name) - self._pp("Author is", self.author) - self._ph() - # - self.article = '

Warning: Do NOT use this for any medical diagnosis.'
- self.article += ' I am not a dermatologist, and NO dermatologist has endorsed it.'
- self.article += ' This DL model is for my independent research. Please refer to the GPL 3.0 for usage and license.'
- self.article += ' Citation: Author/Dev: Duc Haba, 2022. https://linkedin.com/in/duchaba'
- self.article += ' The training datasets are from the International Skin Imaging Collaboration (ISIC).'
- self.article += ' The skin cancer identification images come from 3 separate datasets:'
- self.article += ' 1. https://www.kaggle.com/datasets/surajghuwalewala/ham1000-segmentation-and-classification'
- self.article += ' 2. https://www.kaggle.com/datasets/andrewmvd/isic-2019'
- self.article += ' 3. https://www.kaggle.com/datasets/jnegrini/skin-lesions-act-keratosis-and-melanoma'
- self.article += ' The Malignant versus Benign dataset: https://www.kaggle.com/datasets/fanconic/skin-cancer-malignant-vs-benign'
- self.article += ' Articles:'
- self.article += ' Example Images (left to right): Bowen Disease (AKIEC), Basal Cell Carcinoma, Benign Keratosis-like Lesions, Dermatofibroma, Melanoma, Melanocytic Nevi, Squamous Cell Carcinoma, Vascular Lesions, Benign, Benign 2.'
- self.article += ' Train Result: Skin Cancer Classification F1-Score, Precision, and Recall graph; Skin Cancer Malignant or Benign F1-Score, Precision, and Recall graph.'
- self.article += ' Dev Stack: Jupyter Notebook, Python, Pandas, Matplotlib, Sklearn, Fast.ai, PyTorch.'
- self.article += ' Licenses:
' - self.examples = ['akiec1.jpg','bcc1.jpg','bkl1.jpg','df1.jpg','mel1.jpg', - 'nevi1.jpg','scc1.jpg','vl1.jpg','benign1.jpg','benign3.jpg'] - self.title = "Skin Cancer Diagnose" - return - # - # pretty print output name-value line - def _pp(self, a, b): - print("%34s : %s" % (str(a), str(b))) - return - # - # pretty print the header or footer lines - def _ph(self): - print("-" * 34, ":", "-" * 34) - return - # - def _predict_image(self,img,cat): - pred,idx,probs = learn.predict(img) - return dict(zip(cat, map(float,probs))) - # - def _predict_image2(self,img,cat): - pred,idx,probs = learn2.predict(img) - return dict(zip(cat, map(float,probs))) - # - def _draw_pred(self,df_pred, df2): - canvas, pic = matplotlib.pyplot.subplots(1,2, figsize=(12,6)) - ti = df_pred["vocab"].head(3).values - ti2 = df2["vocab"].head(2).values - # special case - #if (matplotlib.__version__) >= "3.5.2": - try: - df_pred["pred"].head(3).plot(ax=pic[0],kind="pie", - cmap="Set2",labels=ti, explode=(0.02,0,0), - wedgeprops=dict(width=.4), - normalize=False) - df2["pred"].head(2).plot(ax=pic[1],kind="pie", - colors=["cornflowerblue","darkorange"],labels=ti2, explode=(0.02,0), - wedgeprops=dict(width=.4), - normalize=False) - except: - df_pred["pred"].head(3).plot(ax=pic[0],kind="pie", - cmap="Set2",labels=ti, explode=(0.02,0,0), - wedgeprops=dict(width=.4)) - df2["pred"].head(2).plot(ax=pic[1],kind="pie", - colors=["cornflowerblue","darkorange"],labels=ti2, explode=(0.02,0), - wedgeprops=dict(width=.4)) - t = str(ti[0]) + ": " + str(numpy.round(df_pred.head(1).pred.values[0]*100, 2)) + "% Certainty" - pic[0].set_title(t,fontsize=14.0, fontweight="bold") - pic[0].axis('off') - pic[0].legend(ti, loc="lower right",title="Skin Cancers: Top 3") - # - k0 = numpy.round(df2.head(1).pred.values[0]*100, 2) - k1 = numpy.round(df2.tail(1).pred.values[0]*100, 2) - if (k0 > k1): - t2 = str(ti2[0]) + ": " + str(k0) + "% Certainty" - else: - t2 = str(ti2[1]) + ": " + str(k1) + "% Certainty" - pic[1].set_title(t2,fontsize=14.0, fontweight="bold") - pic[1].axis('off') - pic[1].legend(ti2, loc="lower right",title="Skin Cancers:") - # - # # draw circle - # centre_circle = matplotlib.pyplot.Circle((0, 0), 0.6, fc='white') - # p = matplotlib.pyplot.gcf() - # # Adding Circle in Pie chart - # p.gca().add_artist(centre_circle) - # - #p=plt.gcf() - #p.gca().add_artist(my_circle) - # - canvas.tight_layout() - return canvas - # - def predict_donut(self,img): - d = self._predict_image(img,self.categories) - df = pandas.DataFrame(d, index=[0]) - df = df.transpose().reset_index() - df.columns = ["vocab", "pred"] - df.sort_values("pred", inplace=True,ascending=False, ignore_index=True) - # - d2 = self._predict_image2(img,self.categories2) - df2 = pandas.DataFrame(d2, index=[0]) - df2 = df2.transpose().reset_index() - df2.columns = ["vocab", "pred"] - # - canvas = self._draw_pred(df,df2) - return canvas -# -maxi = ADA_SKIN(verbose=False) -# -learn = fastai.learner.load_learner('ada_learn_skin_norm2000.pkl') -learn2 = fastai.learner.load_learner('ada_learn_malben.pkl') -maxi.categories = learn.dls.vocab -maxi.categories2 = learn2.dls.vocab -hf_image = gradio.inputs.Image(shape=(192, 192)) -hf_label = gradio.outputs.Label() -intf = gradio.Interface(fn=maxi.predict_donut, - inputs=hf_image, - outputs=["plot"], - examples=maxi.examples, - title=maxi.title, - live=True, - article=maxi.article) -intf.launch(inline=False,share=True) \ No newline at end of file diff --git a/spaces/elinteerie/NigeriaFoodAI/app.py b/spaces/elinteerie/NigeriaFoodAI/app.py 
deleted file mode 100644 index ba7e7d3e3d23a876d5b7edbdafedbef424202682..0000000000000000000000000000000000000000 --- a/spaces/elinteerie/NigeriaFoodAI/app.py +++ /dev/null @@ -1,27 +0,0 @@ -import tensorflow as tf -import requests -model_depl = tf.keras.models.load_model("my_h5_model.h5") - -# Labels -response = requests.get("https://raw.githubusercontent.com/elinteerie/Nigeria-Food-AI/main/labels.txt") -labels = response.text.split("\n") - -def classify_image(inp): - inp = inp.reshape((-1, 224, 224, 3)) - prediction = model_depl.predict(inp).flatten() - confidences = {labels[i]: float(prediction[i]) for i in range(14)} - return confidences - - -title = "Nigeria Food AI BETA" -description = """ -Upload a picture and receive all detailed nutrients -Features// Version 1: Dish names Second Release: Food groups Third Release: Micro and Macro Nutrients Fourth Release: Ingredients Fifth Release: Food Group Sixth: & lots more -This Nigeria Indigenous AI will be trained on the following food: -For the First Version and testing, Akara and Bread, Banga Soup, Bitterleaf Soup, Edikakong, Egusi, Ewedu, Garri and Groundnut, Jellof, Moi-moi, Nkwobi, Ofe-owerri, Ogbono, Okra, Puff puff -""" -import gradio as gr -gr.Interface(fn=classify_image, - inputs=gr.Image(shape=(224, 224)), - outputs=gr.Label(num_top_classes=3), - examples=["egusi sample.jpg", "ogn.jpg"]).launch(share=True) \ No newline at end of file diff --git a/spaces/evaluate-metric/wer/wer.py b/spaces/evaluate-metric/wer/wer.py deleted file mode 100644 index 214d5b22e2afbe6bf7d78747dd3f00b9c76b8a3e..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/wer/wer.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright 2021 The HuggingFace Evaluate Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Word Error Ratio (WER) metric. """ - -import datasets -from jiwer import compute_measures - -import evaluate - - -_CITATION = """\ -@inproceedings{inproceedings, - author = {Morris, Andrew and Maier, Viktoria and Green, Phil}, - year = {2004}, - month = {01}, - pages = {}, - title = {From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition.} -} -""" - -_DESCRIPTION = """\ -Word error rate (WER) is a common metric of the performance of an automatic speech recognition system. - -The general difficulty of measuring performance lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the Levenshtein distance, working at the word level instead of the phoneme level. The WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort. 
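As a small worked illustration of the error counting defined just below (an added example with made-up sentences): aligning the reference "the cat sat on the mat" (6 words) against the hypothesis "the cat sit on mat" yields one substitution (sat -> sit), one deletion (the second "the"), no insertions, and four correct words, so WER = (1 + 1 + 0) / 6 ≈ 0.33.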
- -This problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. Examination of this issue is seen through a theory called the power law that states the correlation between perplexity and word error rate. - -Word error rate can then be computed as: - -WER = (S + D + I) / N = (S + D + I) / (S + D + C) - -where - -S is the number of substitutions, -D is the number of deletions, -I is the number of insertions, -C is the number of correct words, -N is the number of words in the reference (N=S+D+C). - -This value indicates the average number of errors per reference word. The lower the value, the better the -performance of the ASR system with a WER of 0 being a perfect score. -""" - -_KWARGS_DESCRIPTION = """ -Compute WER score of transcribed segments against references. - -Args: - references: List of references for each speech input. - predictions: List of transcriptions to score. - concatenate_texts (bool, default=False): Whether to concatenate all input texts or compute WER iteratively. - -Returns: - (float): the word error rate - -Examples: - - >>> predictions = ["this is the prediction", "there is an other sample"] - >>> references = ["this is the reference", "there is another one"] - >>> wer = evaluate.load("wer") - >>> wer_score = wer.compute(predictions=predictions, references=references) - >>> print(wer_score) - 0.5 -""" - - -@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) -class WER(evaluate.Metric): - def _info(self): - return evaluate.MetricInfo( - description=_DESCRIPTION, - citation=_CITATION, - inputs_description=_KWARGS_DESCRIPTION, - features=datasets.Features( - { - "predictions": datasets.Value("string", id="sequence"), - "references": datasets.Value("string", id="sequence"), - } - ), - codebase_urls=["https://github.com/jitsi/jiwer/"], - reference_urls=[ - "https://en.wikipedia.org/wiki/Word_error_rate", - ], - ) - - def _compute(self, predictions=None, references=None, concatenate_texts=False): - if concatenate_texts: - return compute_measures(references, predictions)["wer"] - else: - incorrect = 0 - total = 0 - for prediction, reference in zip(predictions, references): - measures = compute_measures(reference, prediction) - incorrect += measures["substitutions"] + measures["deletions"] + measures["insertions"] - total += measures["substitutions"] + measures["deletions"] + measures["hits"] - return incorrect / total diff --git a/spaces/facebook/MusicGen/tests/modules/test_lstm.py b/spaces/facebook/MusicGen/tests/modules/test_lstm.py deleted file mode 100644 index 1248964c8191e19f27661f0974bef9cc967eb015..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/tests/modules/test_lstm.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import random -import torch - -from audiocraft.modules.lstm import StreamableLSTM - - -class TestStreamableLSTM: - - def test_lstm(self): - B, C, T = 4, 2, random.randint(1, 100) - - lstm = StreamableLSTM(C, 3, skip=False) - x = torch.randn(B, C, T) - y = lstm(x) - - print(y.shape) - assert y.shape == torch.Size([B, C, T]) - - def test_lstm_skip(self): - B, C, T = 4, 2, random.randint(1, 100) - - lstm = StreamableLSTM(C, 3, skip=True) - x = torch.randn(B, C, T) - y = lstm(x) - - assert y.shape == torch.Size([B, C, T]) diff --git a/spaces/facebook/StyleNeRF/torch_utils/ops/nerf_utils.py b/spaces/facebook/StyleNeRF/torch_utils/ops/nerf_utils.py deleted file mode 100644 index 6a504abb083d7d9a7afbff861c428b7b7d2de7a5..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/torch_utils/ops/nerf_utils.py +++ /dev/null @@ -1,38 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - - -import os -import torch -from .. import custom_ops - - -_plugin = None - -def _init(): - global _plugin - if _plugin is None: - _plugin = custom_ops.get_plugin( - module_name='nerf_utils_plugin', - sources=['nerf_utils.cu'], - headers=['utils.h'], - source_dir=os.path.dirname(__file__), - extra_cuda_cflags=['--use_fast_math'], - ) - - return True - -def topp_masking(w, p=0.99): - """ - w: B x N x S normalized (S number of samples) - p: top-P used - """ - # _init() - w_sorted, w_indices = w.sort(dim=-1, descending=True) - - w_mask = w_sorted.cumsum(-1).lt(p) - w_mask = torch.cat([torch.ones_like(w_mask[...,:1]), w_mask[..., :-1]], -1) - w_mask = w_mask.scatter(-1, w_indices, w_mask) - - # w_mask = torch.zeros_like(w).bool() - # _plugin.topp_masking(w_indices.int(), w_sorted, w_mask, p, w.size(0), w.size(1), w.size(2)) - return w_mask \ No newline at end of file diff --git a/spaces/facebook/ov-seg/open_vocab_seg/modeling/backbone/swin.py b/spaces/facebook/ov-seg/open_vocab_seg/modeling/backbone/swin.py deleted file mode 100644 index aa651bdab51bb353e3be4b5554f41e251803d5cb..0000000000000000000000000000000000000000 --- a/spaces/facebook/ov-seg/open_vocab_seg/modeling/backbone/swin.py +++ /dev/null @@ -1,832 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# Copyright (c) 2021 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ze Liu, Yutong Lin, Yixuan Wei -# -------------------------------------------------------- - -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation/blob/main/mmseg/models/backbones/swin_transformer.py -# Copyright (c) Meta Platforms, Inc. 
All Rights Reserved - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from detectron2.modeling import BACKBONE_REGISTRY, Backbone, ShapeSpec - - -class Mlp(nn.Module): - """Multilayer perceptron.""" - - def __init__( - self, - in_features, - hidden_features=None, - out_features=None, - act_layer=nn.GELU, - drop=0.0, - ): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = ( - x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - ) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view( - B, H // window_size, W // window_size, window_size, window_size, -1 - ) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__( - self, - dim, - window_size, - num_heads, - qkv_bias=True, - qk_scale=None, - attn_drop=0.0, - proj_drop=0.0, - ): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads) - ) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = ( - coords_flatten[:, :, None] - coords_flatten[:, None, :] - ) # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute( - 1, 2, 0 - ).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=0.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B_, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = ( - qkv[0], - qkv[1], - qkv[2], - ) # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = q @ k.transpose(-2, -1) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1) - ].view( - self.window_size[0] * self.window_size[1], - self.window_size[0] * self.window_size[1], - -1, - ) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1 - ).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze( - 1 - ).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. 
Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__( - self, - dim, - num_heads, - window_size=7, - shift_size=0, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - act_layer=nn.GELU, - norm_layer=nn.LayerNorm, - ): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert ( - 0 <= self.shift_size < self.window_size - ), "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, - window_size=to_2tuple(self.window_size), - num_heads=num_heads, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop=attn_drop, - proj_drop=drop, - ) - - self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp( - in_features=dim, - hidden_features=mlp_hidden_dim, - act_layer=act_layer, - drop=drop, - ) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll( - x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2) - ) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition( - shifted_x, self.window_size - ) # nW*B, window_size, window_size, C - x_windows = x_windows.view( - -1, self.window_size * self.window_size, C - ) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn( - x_windows, mask=attn_mask - ) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll( - shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2) - ) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
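            Example (illustrative, not in the original docstring): with H = W = 56 and
            C = 96, the four interleaved 2x2 sub-grids are concatenated to shape
            (B, 28*28, 4*96) and projected by ``self.reduction`` to (B, 28*28, 2*96).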
- """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__( - self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False, - ): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList( - [ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] - if isinstance(drop_path, list) - else drop_path, - norm_layer=norm_layer, - ) - for i in range(depth) - ] - ) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - w_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition( - img_mask, self.window_size - ) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill( - attn_mask == 0, float(0.0) - ) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """Image to Patch Embedding - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d( - in_chans, embed_dim, kernel_size=patch_size, stride=patch_size - ) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -class SwinTransformer(nn.Module): - """Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. 
- attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__( - self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - attn_drop_rate=0.0, - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - norm_indices=None, - frozen_stages=-1, - use_checkpoint=False, - projection=False, - project_dim=256, - ): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.norm_indices = norm_indices if norm_indices is not None else out_indices - self.frozen_stages = frozen_stages - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, - in_chans=in_chans, - embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None, - ) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [ - pretrain_img_size[0] // patch_size[0], - pretrain_img_size[1] // patch_size[1], - ] - - self.absolute_pos_embed = nn.Parameter( - torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]) - ) - trunc_normal_(self.absolute_pos_embed, std=0.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) - ] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(embed_dim * 2 ** i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint, - ) - self.layers.append(layer) - - num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in self.norm_indices: - if i_layer >= len(self.num_features): - continue - layer = norm_layer(num_features[i_layer]) - layer_name = f"norm{i_layer}" - self.add_module(layer_name, layer) - # add projector head - self.projection = projection - if projection: - self.project_dim = project_dim - self.norm = norm_layer(self.num_features[-1]) - self.projector = nn.Linear(self.num_features[-1], project_dim, bias=False) - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in 
self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - - def _init_weights(m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=0.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - def forward(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate( - self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic" - ) - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - outs = {} - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - - if i in self.out_indices: - if i in self.norm_indices: - norm_layer = getattr(self, f"norm{i}") - x_out = norm_layer(x_out) - out = ( - x_out.view(-1, H, W, self.num_features[i]) - .permute(0, 3, 1, 2) - .contiguous() - ) - outs["res{}".format(i + 2)] = out - if self.projection: - x_out = self.norm(x_out) - x_out = x_out.view(-1, H, W, self.num_features[-1]).contiguous() - outs["fc"] = self.projector(x_out).permute(0, 3, 1, 2) - - return outs - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - - -@BACKBONE_REGISTRY.register() -class D2SwinTransformer(SwinTransformer, Backbone): - def __init__(self, cfg, input_shape): - - pretrain_img_size = cfg.MODEL.SWIN.PRETRAIN_IMG_SIZE - patch_size = cfg.MODEL.SWIN.PATCH_SIZE - in_chans = 3 - embed_dim = cfg.MODEL.SWIN.EMBED_DIM - depths = cfg.MODEL.SWIN.DEPTHS - num_heads = cfg.MODEL.SWIN.NUM_HEADS - window_size = cfg.MODEL.SWIN.WINDOW_SIZE - mlp_ratio = cfg.MODEL.SWIN.MLP_RATIO - qkv_bias = cfg.MODEL.SWIN.QKV_BIAS - qk_scale = cfg.MODEL.SWIN.QK_SCALE - drop_rate = cfg.MODEL.SWIN.DROP_RATE - attn_drop_rate = cfg.MODEL.SWIN.ATTN_DROP_RATE - drop_path_rate = cfg.MODEL.SWIN.DROP_PATH_RATE - norm_layer = nn.LayerNorm - ape = cfg.MODEL.SWIN.APE - patch_norm = cfg.MODEL.SWIN.PATCH_NORM - norm_indices = cfg.MODEL.SWIN.NORM_INDICES - projection = cfg.MODEL.SWIN.PROJECTION - project_dim = cfg.MODEL.SWIN.PROJECT_DIM - super().__init__( - pretrain_img_size, - patch_size, - in_chans, - embed_dim, - depths, - num_heads, - window_size, - mlp_ratio, - qkv_bias, - qk_scale, - drop_rate, - attn_drop_rate, - drop_path_rate, - norm_layer, - ape, - patch_norm, - norm_indices=norm_indices, - projection=projection, - project_dim=project_dim, - ) - - self._out_features = cfg.MODEL.SWIN.OUT_FEATURES - - self._out_feature_strides = { - "res2": 4, - "res3": 8, - "res4": 16, - "res5": 32, - "fc": 32, - } - self._out_feature_channels = { - "res2": self.num_features[0], - "res3": self.num_features[1], - "res4": self.num_features[2], - "res5": self.num_features[3], - "fc": self.num_features[3], - } - - def forward(self, x): - """ - 
Args: - x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``. - Returns: - dict[str->Tensor]: names and the corresponding features - """ - assert ( - x.dim() == 4 - ), f"SwinTransformer takes an input of shape (N, C, H, W). Got {x.shape} instead!" - outputs = {} - y = super().forward(x) - for k in y.keys(): - if k in self._out_features: - outputs[k] = y[k] - return outputs - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], - stride=self._out_feature_strides[name], - ) - for name in self._out_features - } - - @property - def size_divisibility(self): - return 32 diff --git a/spaces/fatiXbelha/sd/Baixe bloons td 6 apk com dinheiro infinito e xp infinito grtis.md b/spaces/fatiXbelha/sd/Baixe bloons td 6 apk com dinheiro infinito e xp infinito grtis.md deleted file mode 100644 index 076078235b57e5e6f9e52cb049fc9b8991c1dd22..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Baixe bloons td 6 apk com dinheiro infinito e xp infinito grtis.md +++ /dev/null @@ -1,95 +0,0 @@ -
-

Bloons TD 6 APK Unlimited Money and XP: How to Download and Play

Do you love tower defense games? Do you enjoy popping colorful balloons with cute monkeys? Do you want to have unlimited money and xp to upgrade your towers and heroes? If you answered yes to any of these questions, then you should definitely check out Bloons TD 6 APK Unlimited Money and XP.

-

bloons td 6 apk dinheiro infinito e xp infinito


Download Zip ○○○ https://urllie.com/2uNy7s



Bloons TD 6 is one of the most popular and fun tower defense games on the market. It has over 100 million downloads and a 4.7-star rating on Google Play Store. It is the sixth installment in the Bloons Tower Defense series, which started as a flash game in 2007. In this game, you have to stop the bloons (short for balloons) from reaching the end of the track by placing monkey towers and heroes along the way. The bloons come in different colors, shapes, and sizes, and some of them have special abilities that make them harder to pop. You have to use your money and xp wisely to upgrade your towers and heroes and unlock new ones.

-

But what if you don't want to spend hours grinding for money and xp? What if you want to have access to all the towers and heroes from the start? What if you want to enjoy the game without any limitations or restrictions? That's where Bloons TD 6 APK Unlimited Money and XP comes in. This is a modded version of Bloons TD 6 that gives you unlimited money and xp to spend on anything you want. You can buy all the upgrades, powers, insta monkeys, and more without worrying about running out of resources. You can also level up your heroes faster and unlock their full potential. With Bloons TD 6 APK Unlimited Money and XP, you can have more fun and challenge yourself with harder modes and maps.

-

Features of Bloons TD 6 APK Unlimited Money and XP

-

Bloons TD 6 APK Unlimited Money and XP is not just a simple hack that gives you more money and xp. It also has many features that make it better than the original version of Bloons TD 6. Here are some of them:

-

Huge content

-

Bloons TD 6 APK Unlimited Money and XP has a huge amount of content that will keep you entertained for hours. Regular updates add new features, events, odysseys, maps, towers, heroes, and more. You can take on boss events that pit you against powerful enemies with unique abilities, or embark on odysseys that take you on epic journeys across multiple maps with special rules. Contested territory lets you claim a piece of land and defend it from other players, and quests reward you with monkey money, trophies, powers, insta monkeys, and more. The trophy store lets you spend your earned trophies on cosmetic items, music tracks, wallpapers, and more, while the content browser lets you play user-generated maps, challenges, co-op games, and more.

-

bloons td 6 mod apk unlimited money and xp
-bloons td 6 hack apk infinite cash and experience
-bloons td 6 cheat apk endless coins and level up
-bloons td 6 cracked apk free money and xp
-bloons td 6 premium apk unlimited gold and rank
-bloons td 6 full apk infinite money and experience
-bloons td 6 unlocked apk unlimited cash and level
-bloons td 6 patched apk free coins and xp
-bloons td 6 pro apk unlimited money and rank
-bloons td 6 latest apk infinite cash and experience
-bloons td 6 modded apk endless money and level up
-bloons td 6 hacked apk free gold and xp
-bloons td 6 cheats apk unlimited coins and rank
-bloons td 6 crack apk infinite money and level
-bloons td 6 paid apk free cash and xp
-bloons td 6 update apk unlimited gold and experience
-bloons td 6 mods apk infinite coins and level up
-bloons td 6 download apk free money and xp
-bloons td 6 android apk unlimited cash and rank
-bloons td 6 ios apk infinite gold and level
-bloons td 6 online apk free coins and experience
-bloons td 6 offline apk unlimited money and level up
-bloons td 6 new apk free cash and xp
-bloons td 6 old apk unlimited coins and rank
-bloons td 6 best apk infinite money and level
-bloons td 6 worst apk free gold and xp
-bloons td 6 fun apk unlimited cash and experience
-bloons td 6 easy apk infinite coins and level up
-bloons td 6 hard apk free money and xp
-bloons td 6 extreme apk unlimited gold and rank
-bloons td 6 original apk infinite cash and level
-bloons td 6 fake apk free coins and xp
-bloons td 6 real apk unlimited money and experience
-bloons td 6 working apk infinite gold and level up
-bloons td 6 broken apk free cash and xp
-bloons td 6 fixed apk unlimited coins and rank
-bloons td 6 beta apk infinite money and level
-bloons td 6 final apk free gold and experience
-bloons td 6 version apk unlimited cash and level up
-bloons td 6 edition apk infinite coins and xp
-bloons td 6 deluxe apk free money and rank
-bloons td 6 standard apk unlimited gold and level
-bloons td 6 classic apk infinite cash and xp
-bloons td 6 modern apk free coins and rank
-bloons td 6 retro apk unlimited money and level

-

Epic monkey towers and heroes

-

Bloons TD 6 APK Unlimited Money and XP has 23 powerful monkey towers with 3 upgrade paths each. Each tower has its own unique activated abilities that can help you in different situations. You can also unlock paragons that are super-powered versions of the towers with amazing effects. You can also choose from 14 diverse heroes that have their own personalities, voices, animations, and skills. Each hero has 20 signature upgrades and 2 special abilities that can change the course of the game. You can mix and match different towers and heroes to create your own defense strategy.

-

Endless awesomeness

-

Bloons TD 6 APK Unlimited Money and XP has endless awesomeness that will keep you coming back. You can play with up to 4 players in co-op mode and work together to pop the bloons, or switch to offline mode when you don't have internet access or want to save data. There are 68 handcrafted maps with different themes, layouts, obstacles, and difficulties. Monkey knowledge gives you permanent boosts and bonuses for your towers and heroes, and powers and insta monkeys give you instant advantages in the game.

-

How to download Bloons TD 6 APK Unlimited Money and XP

-

If you are interested in downloading Bloons TD 6 APK Unlimited Money and XP, you need to follow these simple steps:

-

Step 1: Enable unknown sources

-

The first step is to enable unknown sources on your Android device. This allows you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown sources and toggle it on. On newer Android versions there is no global toggle; instead, you grant the "Install unknown apps" permission to the individual app (such as your browser or file manager) the first time it tries to install an APK.

-

Step 2: Download Bloons TD 6 APK Unlimited Money and XP file

-

The next step is to download the Bloons TD 6 APK Unlimited Money and XP file from a reliable source. There are many websites that offer this file, but some of them may contain viruses or malware that can harm your device or steal your data. To avoid this, you should download the file from a trusted source like [this one]. This website has verified and tested the file and guarantees that it is safe and working. You can also read the reviews and ratings from other users who have downloaded the file.

-

Step 3: Install Bloons TD 6 APK Unlimited Money and XP file

-

The third step is to install the Bloons TD 6 APK Unlimited Money and XP file on your device. To do this, locate the downloaded file in your file manager and tap on it. You may see a pop-up window asking you to confirm the installation. Tap on Install and wait for the process to finish.

-

Step 4: Launch Bloons TD 6 APK Unlimited Money and XP game

-

The final step is to launch Bloons TD 6 APK Unlimited Money and XP game on your device. To do this, go to your app drawer and tap on the Bloons TD 6 icon. You may see a loading screen with some tips and tricks. After that, you can start playing the game with unlimited money and xp. You can also access all the features of Bloons TD 6 APK Unlimited Money and XP that we mentioned earlier.

-

Tips and tricks for Bloons TD 6 APK Unlimited Money and XP

-

Now that you have downloaded and installed Bloons TD 6 APK Unlimited Money and XP, you may want to know some tips and tricks to make the most out of the game. Here are some of them:

-

Place heroes early

-

One of the best tips for Bloons TD 6 APK Unlimited Money and XP is to place your heroes early in the game. Heroes are powerful units that can help you pop more bloons and support your towers. They also gain xp as they pop bloons, which allows them to level up and unlock new abilities. The sooner you place your heroes, the faster they will level up and become more effective. You can also use your unlimited money to buy their signature upgrades and special abilities.

-

Start with dart monkeys

-

Another tip for Bloons TD 6 APK Unlimited Money and XP is to start with dart monkeys in the first few rounds. Dart monkeys are cheap and versatile towers that can handle most types of bloons in the early game. They can also be upgraded to have longer range, faster firing, more pierce, more damage, or even camo detection. By starting with dart monkeys, you can save your money for better towers later in the game.

-

Unlock upgrades with xp

-

A third tip for Bloons TD 6 APK Unlimited Money and XP is to unlock upgrades with xp. Upgrades are enhancements that make your towers more powerful and versatile. Each tower has three upgrade paths, each with five upgrades. You can only choose one upgrade path per tower, but you can mix and match different paths for different towers. To unlock upgrades, you need to spend xp that you earn by popping bloons. With unlimited xp, you can unlock all the upgrades for all the towers in no time.

-

Use different types of towers

-

A fourth tip for Bloons TD 6 APK Unlimited Money and XP is to use different types of towers in your defense. There are many types of towers in Bloons TD 6, each with its own strengths and weaknesses. Some towers are good at popping certain types of bloons, while others are good at supporting other towers or creating synergies. For example, glue gunners can slow down bloons, ice monkeys can freeze bloons, bomb shooters can pop lead bloons, ninja monkeys can detect camo bloons, etc. By using different types of towers, you can deal with different types of bloons more effectively.

-

Use activated abilities wisely

-

A fifth tip for Bloons TD 6 APK Unlimited Money and XP is to use activated abilities wisely. Activated abilities are special powers that you can use once they are charged up by popping bloons. Some towers have activated abilities that can help you in different situations. For example, super monkey storm can wipe out all the bloons on the screen, monkey pirates can hook a moab-class bloon and destroy it instantly, ground zero can drop a massive bomb that pops all bloons in a large radius, etc. By using activated abilities wisely, you can turn the tide of the battle in your favor.

-

Conclusion

-

Bloons TD 6 APK Unlimited Money and XP is a great way to enjoy one of the best tower defense games on the market. It gives you unlimited money and xp to spend on anything you want, as well as many features that make the game more fun and challenging. You can download Bloons TD 6 APK Unlimited Money and XP from a reliable source like [this one], and follow the steps to install it on your device. You can also use the tips and tricks we shared to make the most out of the game. If you have any questions or feedback, feel free to leave a comment below. We hope you enjoy popping bloons with unlimited money and xp!

-

FAQs

-

Here are some frequently asked questions about Bloons TD 6 APK Unlimited Money and XP:

-

Q: Is Bloons TD 6 APK Unlimited Money and XP safe to download and install?

-

A: Yes, Bloons TD 6 APK Unlimited Money and XP is safe to download and install, as long as you get it from a trusted source like [this one]. This website has verified and tested the file and guarantees that it is free of viruses or malware. You can also read the reviews and ratings from other users who have downloaded the file.

-

Q: Do I need to root my device to use Bloons TD 6 APK Unlimited Money and XP?

-

A: No, you don't need to root your device to use Bloons TD 6 APK Unlimited Money and XP. You just need to enable unknown sources on your device and follow the steps to install the file. However, if you want to use some advanced features like backup or restore, you may need to root your device.

-

Q: Will Bloons TD 6 APK Unlimited Money and XP work on my device?

-

A: Bloons TD 6 APK Unlimited Money and XP should work on most Android devices that have Android 5.0 or higher. However, some devices may have compatibility issues or performance problems due to different hardware or software specifications. If you encounter any issues, you can try adjusting the settings or contacting the developer for support.

-

Q: Can I play online or co-op with Bloons TD 6 APK Unlimited Money and XP?

-

A: Yes, you can play online or co-op with Bloons TD 6 APK Unlimited Money and XP, as long as you have a stable internet connection. However, you may not be able to join some games or servers that have anti-cheat measures or require verification. You may also face some lag or disconnect issues due to network problems or server overload.

-

Q: Can I update Bloons TD 6 APK Unlimited Money and XP to the latest version?

-

A: Yes, you can update Bloons TD 6 APK Unlimited Money and XP to the latest version, as long as you download it from the same source that you got it from. You can also check for updates regularly on the website or enable notifications on your device. However, you may lose some of your progress or data if you update without backing up first.

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Caa Palavras APK um jogo de raciocnio e diverso para Android.md b/spaces/fatiXbelha/sd/Caa Palavras APK um jogo de raciocnio e diverso para Android.md deleted file mode 100644 index c99cd48774fbda1b2421b37352f98052fc49480e..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Caa Palavras APK um jogo de raciocnio e diverso para Android.md +++ /dev/null @@ -1,130 +0,0 @@ -
-

Caça Palavras APK Download: How to Play and Enjoy this Fun Word Game

-

If you are looking for a fun and challenging word game to play on your Android device, you might want to try Caça Palavras. This is a popular word search game in Portuguese that will test your vocabulary, spelling, and concentration skills. In this article, we will tell you what Caça Palavras is, how to download and install it on your device, and how to play it like a pro.

-

caça palavras apk download


Download Zip ✫✫✫ https://urllie.com/2uNFtJ



-

What is Caça Palavras?

-

Caça Palavras is a classic word search game that has been around for decades. The game consists of finding hidden words in a grid of letters. The words can be horizontal, vertical, diagonal, or backwards, depending on the level of difficulty. The game has different themes and categories, such as animals, fruits, colors, countries, sports, etc. You can choose the theme that interests you the most or play a random one.

-

A classic word search game in Portuguese

-

Caça Palavras is a game that is designed for Portuguese speakers or learners. The game has hundreds of levels with thousands of words in Portuguese. You can improve your vocabulary, spelling, and reading skills by playing this game. You can also learn new words and their meanings by using the dictionary feature that shows the definition of each word you find.

-

Different levels of difficulty and themes

-

Caça Palavras has five levels of difficulty: easy, normal, hard, big, and huge. The level of difficulty affects the size of the grid, the number of words, and the direction of the words. The easy level has a 10x10 grid with 11 words that are only horizontal or vertical. The huge level has a 20x20 grid with 30 words that can be in any direction, including backwards. You can choose the level that suits your skill and preference.

-

The game also has different themes and categories that you can choose from. Each theme has a set of words related to that topic. For example, if you choose the theme "Animals", you will have to find words like "dog", "cat", "lion", "elephant", etc. You can also play a random theme if you want more variety and challenge.

-

Benefits of playing Caça Palavras

-

Playing Caça Palavras is not only fun but also beneficial for your brain and mental health. Some of the benefits of playing this game are:

-

caça palavras em português apk download
-caça palavras para android apk download
-caça palavras online apk download
-caça palavras bíblico apk download
-caça palavras educativo apk download
-caça palavras infantil apk download
-caça palavras com dicas apk download
-caça palavras de animais apk download
-caça palavras de frutas apk download
-caça palavras de países apk download
-caça palavras de profissões apk download
-caça palavras de cores apk download
-caça palavras de flores apk download
-caça palavras de esportes apk download
-caça palavras de alimentos apk download
-caça palavras de música apk download
-caça palavras de filmes apk download
-caça palavras de marcas apk download
-caça palavras de nomes apk download
-caça palavras de verbos apk download
-caça palavras de inglês apk download
-caça palavras de espanhol apk download
-caça palavras de francês apk download
-caça palavras de italiano apk download
-caça palavras de alemão apk download
-caça palavras de japonês apk download
-caça palavras de chinês apk download
-caça palavras de russo apk download
-caça palavras de árabe apk download
-caça palavras de turco apk download
-baixar caça palavras gratis apk
-baixar jogo de caça palavras apk
-baixar aplicativo de caça palavras apk
-baixar jogo da forca e caça palavras apk
-baixar jogo do show do milhão com caça palavras apk
-baixar jogo da velha e caça palavras apk
-baixar jogo da galinha pintadinha com caça palavras apk
-baixar jogo do chaves com caça palavras apk
-baixar jogo do sonic com caça palavras apk
-baixar jogo do mario com caça palavras apk
-como instalar o jogo de caça palavras no celular android
-como jogar o jogo de caça palavras no celular android
-como desafiar seus amigos no jogo de caça palavras no celular android
-como ganhar pontos e moedas no jogo de caça palavras no celular android
-como desbloquear novos níveis e temas no jogo de caça palavras no celular android

-
    -
  • It helps you increase your vocabulary and spelling skills in Portuguese.
  • -
  • It helps you improve your concentration and focus by blocking out distractions.
  • -
  • It helps you stimulate your memory and cognitive functions by finding words quickly.
  • -
  • It helps you relax and reduce stress by having fun and engaging your mind.
  • -
  • It helps you learn new things and expand your knowledge by discovering new words and their meanings.
  • -
-

How to download Caça Palavras APK?

-

If you want to play Caça Palavras on your Android device, you will need to download and install the APK file of the game. APK stands for Android Package Kit, which is a file format that contains all the necessary components for an app to run on an Android device. You can download Caça Palavras APK from various sources online, such as APKCombo or APKPure. However, before you do that, you should follow some steps to ensure a safe and smooth installation process.

-

Steps to download and install the game on Android devices

-

To download and install Caça Palavras APK on your Android device, you can follow these steps (an adb-based alternative is sketched after the list):

-
    -
  1. Go to a reputable website that offers Caça Palavras APK, such as APKCombo or APKPure. Search for the game and tap on the download button. You may need to accept some pop-ups or permissions before downloading the file.
  2. -
  3. Once the file is downloaded, open your file explorer app and locate the APK file in your Downloads folder. Tap on the file and then tap on Install. You may need to enable the option to install unknown apps from your browser or file explorer app.
  4. -
  5. Wait for the installation process to finish and then tap on Open. You can now enjoy playing Caça Palavras on your device.
  6. -
-
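If you prefer to sideload the game from a computer instead of tapping through the phone's file manager, the same installation can be done with Android's adb tool. The short Python sketch below simply shells out to adb; it is only an illustration and relies on assumptions the article does not state: adb is installed and on your PATH, USB debugging is enabled on the phone, and the file name is a placeholder.

```python
# Rough sketch: sideloading a downloaded APK from a computer with adb.
# Assumptions (not from the article): adb is installed and on PATH, USB debugging
# is enabled on the phone, and "caca-palavras.apk" is an illustrative file name.
import subprocess

apk_path = "caca-palavras.apk"
subprocess.run(["adb", "devices"], check=True)                  # confirm the phone is connected
subprocess.run(["adb", "install", "-r", apk_path], check=True)  # -r replaces an existing install
```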

Tips to avoid malware and viruses

-

While downloading and installing APK files can be a convenient way to access apps that are not available on the Google Play Store, it can also pose some risks for your device and data. Some APK files may contain malware or viruses that can harm your device or steal your information. To avoid these dangers, you should follow some tips:

-
    -
  • Only download APK files from trusted and verified sources, such as APKCombo or APKPure. Avoid clicking on suspicious links or ads that may redirect you to malicious websites.
  • -
  • Check the reviews and ratings of the app before downloading it. Look for any negative feedback or complaints from other users that may indicate a problem with the app.
  • -
  • Scan the APK file with an antivirus app before installing it. You can use a reliable antivirus app, such as Avast Mobile Security or AVG Antivirus, to scan the file and detect any potential threats. If the download site publishes a checksum for the file, you can also verify it, as shown in the sketch after this list.
  • -
  • Keep your device updated with the latest security patches and software updates. This can help you prevent any vulnerabilities or exploits that may allow hackers to access your device.
  • -
-
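As a complement to antivirus scanning, and only when the download page publishes a checksum for the file, you can confirm that the APK was not corrupted or tampered with in transit. This is a minimal illustrative sketch; the file name and the expected hash are placeholders, not real values.

```python
# Minimal sketch: verifying a downloaded APK against a published SHA-256 checksum.
# The file name and the expected hash below are illustrative placeholders.
import hashlib

apk_path = "caca-palavras.apk"
expected_sha256 = "0" * 64  # replace with the checksum published by the download site

sha256 = hashlib.sha256()
with open(apk_path, "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        sha256.update(chunk)

if sha256.hexdigest() == expected_sha256:
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch: do not install this file.")
```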

Alternatives to Caça Palavras APK

-

If you are not comfortable with downloading and installing APK files, or if you want to try other word search games in Portuguese, you can also check out some alternatives to Caça Palavras APK. Here are some of them:

- - - - - -
| Name | Description | Link |
| --- | --- | --- |
| Caça Palavras - Word Search | A word search game with over 1000 levels and 18 categories. You can also create your own puzzles and share them with your friends. | [Caça Palavras - Word Search](https://play.google.com/store/apps/details?id=com.rbgames.cacapalavras) |
| Sopa de Letras - Word Search | A word search game with over 2000 levels and 40 categories. You can also play online with other players and compete for the best score. | [Sopa de Letras - Word Search](https://play.google.com/store/apps/details?id=com.edujoy.Word_Search) |
| Cruzadinhas - Crosswords | A crossword game with over 1000 levels and 20 categories. You can also play offline and customize the difficulty and size of the puzzles. | [Cruzadinhas - Crosswords](https://play.google.com/store/apps/details?id=com.rbgames.cruzadinhas) |

How to play Caça Palavras?

-

Now that you have downloaded and installed Caça Palavras APK on your device, you are ready to play this fun and addictive word game. The game is very easy to play, but it can also be very challenging and rewarding. Here are some basic rules and gameplay tips to help you enjoy the game.

-

Basic rules and gameplay

-

The basic rules of Caça Palavras are simple: you have to find all the hidden words in the grid of letters. The words can be horizontal, vertical, diagonal, or backwards, depending on the level of difficulty. You can see the list of words at the bottom of the screen, and you can tap on them to see their definition. To find a word, you have to swipe your finger over the letters that form the word. If you find a correct word, it will be highlighted and crossed out from the list. If you find all the words in the grid, you will complete the level and move on to the next one.
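For readers who think in code, the searching rule can be made concrete with a small illustrative Python sketch (this is not the game's actual code): it scans a letter grid in all eight directions, which is essentially what your eyes do while playing.

```python
# Illustrative sketch only (not Caça Palavras' own code): scanning a word-search
# grid in all eight directions, including diagonals and backwards.
def find_word(grid, word):
    rows, cols = len(grid), len(grid[0])
    directions = [(-1, -1), (-1, 0), (-1, 1),
                  (0, -1),           (0, 1),
                  (1, -1),  (1, 0),  (1, 1)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in directions:
                end_r, end_c = r + dr * (len(word) - 1), c + dc * (len(word) - 1)
                if not (0 <= end_r < rows and 0 <= end_c < cols):
                    continue
                if all(grid[r + dr * i][c + dc * i] == word[i] for i in range(len(word))):
                    return (r, c), (end_r, end_c)  # start and end coordinates
    return None

# Example: "GATO" (Portuguese for "cat") hidden on the main diagonal of a 4x4 grid.
grid = ["GXXX",
        "XAXX",
        "XXTX",
        "XXXO"]
print(find_word(grid, "GATO"))  # -> ((0, 0), (3, 3))
```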

-

Tips and tricks to find words faster and easier

-

While playing Caça Palavras, you may encounter some difficulties or challenges, such as finding long words, finding words in different directions, or finding words that are not familiar to you. To overcome these challenges, you can use some tips and tricks, such as:

-
    -
  • Look for common prefixes or suffixes, such as "re", "in", "ar", "ção", etc. They can help you identify the beginning or the end of a word.
  • -
  • Look for patterns or shapes, such as circles, squares, triangles, etc. They can help you spot words that are diagonal or backwards.
  • -
  • Look for letters that are repeated or uncommon, such as "q", "x", "z", etc. They can help you narrow down the possible words.
  • -
  • Use the dictionary feature to learn the meaning of each word. This can help you remember the word and find it easier in the future.
  • -
-

How to use hints and other features

-

If you are still stuck or need some help, you can use some hints and other features that are available in Caça Palavras. Here are some of them:

-
    -
  • Hint: This feature will show you one letter of a hidden word. You can use it by tapping on the light bulb icon at the top right corner of the screen. You have a limited number of hints per level, but you can earn more by watching ads or buying them with coins.
  • -
  • Solve: This feature will show you all the hidden words in the grid. You can use it by tapping on the check mark icon at the top right corner of the screen. You have a limited number of solves per level, but you can earn more by watching ads or buying them with coins.
  • -
  • Shuffle: This feature will shuffle the letters in the grid. You can use it by tapping on the arrow icon at the top right corner of the screen. You have an unlimited number of shuffles per level, but they will not affect your score or time.
  • -
  • Timer: This feature will show you how much time you have left to complete the level. You can see it at the top left corner of the screen. You have a limited amount of time per level, but you can earn more by watching ads or buying them with coins.
  • -
  • Score: This feature will show you how many points you have earned by finding words. You can see it at the bottom left corner of the screen. You can earn more points by finding longer words, finding words faster, or finding bonus words.
  • -
-

Conclusion

-

Caça Palavras is a fun and challenging word game that will test your vocabulary, spelling, and concentration skills in Portuguese. You can download and install Caça Palavras APK on your Android device from various sources online, such as APKCombo or APKPure. However, you should follow some steps and tips to ensure a safe and smooth installation process. You can also play Caça Palavras easily by following some basic rules and gameplay tips. You can also use some hints and other features to help you find words faster and easier. Playing Caça Palavras is not only fun but also beneficial for your brain and mental health.

-

FAQs

-

Here are some frequently asked questions about Caça Palavras APK:

-
    -
  1. Is Caça Palavras APK free?
    -Yes, Caça Palavras APK is free to download and play. However, it may contain ads and in-app purchases that require real money.
  2. -
  3. Is Caça Palavras APK safe?
    -Caça Palavras APK is generally safe to download and install, as long as you get it from a reputable and verified source, such as APKCombo or APKPure. However, you should always scan the file with an antivirus app before installing it, and keep your device updated with the latest security patches and software updates.
  4. -
  5. How can I play Caça Palavras on PC?
    -If you want to play Caça Palavras on your PC, you will need to use an Android emulator, such as BlueStacks or NoxPlayer. An Android emulator is software that allows you to run Android apps on your PC. You can download and install an Android emulator on your PC, and then download and install Caça Palavras APK from the emulator's app store or browser.
  6. -
  7. How can I play Caça Palavras with friends?
    -Caça Palavras is a single-player game, but you can still play it with your friends by sharing your puzzles and scores with them. You can create your own puzzles by using the custom mode, and then share them with your friends via email, WhatsApp, Facebook, etc. You can also compare your scores and achievements with your friends by using the leaderboard feature.
  8. -
  9. How can I contact the developer of Caça Palavras?
    -If you have any questions, suggestions, or feedback about Caça Palavras, you can contact the developer of the game by using the contact form on their website, or by sending an email to contato@rbgames.com.br.
  10. -
  11. How can I learn more about Caça Palavras?
    -If you want to learn more about Caça Palavras, you can visit their website, where you can find more information about the game, such as the features, the themes, the levels, the dictionary, etc. You can also follow them on their social media accounts, such as Facebook or Instagram, where you can see the latest news, updates, and promotions about the game.
  12. -

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Dalom Kids Izindunduma MP3 Songs and Enjoy the Best of African Music.md b/spaces/fatiXbelha/sd/Download Dalom Kids Izindunduma MP3 Songs and Enjoy the Best of African Music.md deleted file mode 100644 index 9c3f30e88e3c06d334904e43d7f44b0857688c5e..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Dalom Kids Izindunduma MP3 Songs and Enjoy the Best of African Music.md +++ /dev/null @@ -1,117 +0,0 @@ - -

Dalom Kids-Izindunduma MP3 Download MP3 Songs: How to Enjoy the Best of South African Music

-

If you are a fan of South African music, you might have heard of Dalom Kids, a popular group that has been making waves in the industry since the 1990s. Dalom Kids are known for their catchy and upbeat songs that blend traditional and modern elements, creating a unique sound that appeals to a wide audience. One of their most famous songs is Izindunduma, which means "thunder" in Zulu. This song is a powerful and energetic anthem that showcases the talent and charisma of Dalom Kids. In this article, we will tell you more about Dalom Kids, Izindunduma, and how you can download and enjoy their MP3 songs online.

-

dalom kids-izindunduma mp3 download mp3 songs


Download Zip ››››› https://urllie.com/2uNHVQ



-

Introduction

-

Who are Dalom Kids?

-

Dalom Kids are a South African music group that was formed in 1990 by Dan Tshanda, who was also the founder of Splash, another successful group. Dalom Kids consist of four members: Patricia Majalisa, Finky Molefe, Sipho Ndlovu, and Thandi Zulu. They specialize in disco music, also known as bubblegum music, which is a genre that emerged in South Africa in the 1980s and features synthesizers, drum machines, and catchy melodies. Dalom Kids have released several albums over the years, such as Jameela, Izindunduma, Ncedani, Mathambo, and Ultimate Collection. They have also collaborated with other artists like Splash, Peacock, Matshikos, and Patricia Majalisa.

-

What is Izindunduma?

-

Izindunduma is one of the most popular songs by Dalom Kids, released in 1996 as part of their album of the same name. The song is a lively and upbeat track that features a catchy chorus and a danceable beat. The lyrics are in Zulu and English, and they talk about the power and strength of thunder. The song is also a metaphor for the impact and influence of Dalom Kids in the music scene. Izindunduma has become an anthem for many fans of South African music, and it is often played at parties, weddings, and festivals.

-

Why should you listen to Dalom Kids-Izindunduma MP3 songs?

-

There are many reasons why you should listen to Dalom Kids-Izindunduma MP3 songs online. Here are some of them:

-
    -
  • You will get to enjoy the best of South African music, which is rich in culture, diversity, and creativity.
  • -
  • You will get to experience the energy and excitement of disco music, which is fun, uplifting, and danceable.
  • -
  • You will get to appreciate the talent and skill of Dalom Kids, who are one of the most successful and influential groups in South African music history.
  • -
  • You will get to support local artists who have contributed to the development and promotion of South African music globally.
  • -
-

How to download Dalom Kids-Izindunduma MP3 songs online

-

If you want to download Dalom Kids-Izindunduma MP3 songs online, you will need to use a reliable and legal platform that offers high-quality and affordable music. There are many options available on the internet, but we recommend two of them: Wynk Music and Shazam. These are two popular apps that allow you to stream and download Dalom Kids-Izindunduma MP3 songs online. Here is how you can use these apps to download your favorite songs:

-

Wynk Music

-

Wynk Music is a music streaming and downloading app that offers over 6 million songs from various genres and languages. You can access Wynk Music on your smartphone, tablet, or computer, and enjoy unlimited music without any ads. You can also create your own playlists, share your music with your friends, and discover new songs and artists.

-

Features of Wynk Music

-
    -
  • Wynk Music has a large and diverse library of songs, including Dalom Kids-Izindunduma MP3 songs.
  • -
  • Wynk Music allows you to download songs offline and listen to them anytime and anywhere.
  • -
  • Wynk Music offers high-quality audio and video streaming and downloading options.
  • -
  • Wynk Music has a user-friendly and attractive interface that makes it easy to navigate and use.
  • -
  • Wynk Music has a subscription plan that gives you unlimited access to all the songs and features for a nominal fee.
  • -
-

Steps to download Dalom Kids-Izindunduma MP3 songs from Wynk Music

-
    -
  1. Download and install Wynk Music app on your device from the Google Play Store or the App Store.
  2. -
  3. Sign up or log in to your Wynk Music account using your phone number, email, or social media.
  4. -
  5. Search for Dalom Kids-Izindunduma MP3 songs using the search bar or browse through the categories and genres.
  6. -
  7. Select the song you want to download and tap on the download icon next to it.
  8. -
  9. Choose the quality and format of the song you want to download and confirm your choice.
  10. -
  11. Wait for the download to complete and enjoy your song offline.
  12. -
-

Shazam

-

Shazam is a music recognition and discovery app that helps you identify any song playing around you. You can use Shazam to find out the name, artist, album, genre, and lyrics of any song in seconds. You can also use Shazam to stream and download songs from various platforms, such as Spotify, Apple Music, YouTube, Deezer, and more.

-

dalom kids-izindunduma mp3 songs free download
-dalom kids-izindunduma album mp3 download
-dalom kids-izindunduma full album zip download
-dalom kids-izindunduma mp3 music download
-dalom kids-izindunduma mp3 song download fakaza
-dalom kids-izindunduma mp3 download waploaded
-dalom kids-izindunduma mp3 download zamusic
-dalom kids-izindunduma mp3 download tubidy
-dalom kids-izindunduma mp3 download skull
-dalom kids-izindunduma mp3 download juice
-dalom kids-izindunduma mp3 download 320kbps
-dalom kids-izindunduma mp3 download hiphopza
-dalom kids-izindunduma mp3 download datafilehost
-dalom kids-izindunduma mp3 download naijaloaded
-dalom kids-izindunduma mp3 download tooxclusive
-dalom kids-izindunduma mp3 download audiomack
-dalom kids-izindunduma mp3 download soundcloud
-dalom kids-izindunduma mp3 download mdundo
-dalom kids-izindunduma mp3 download afrobeat
-dalom kids-izindunduma mp3 download sahiphopmag
-dalom kids songs mp3 free download
-dalom kids songs list and download
-dalom kids songs lyrics and download
-dalom kids songs 2022 mp3 download
-dalom kids songs mixtape mp3 download
-dalom kids songs playlist mp3 download
-dalom kids songs video mp4 download
-dalom kids songs online streaming and download
-dalom kids songs wynk music app download
-dalom kids songs snapea app download
-best of dalom kids songs mp3 download
-latest of dalom kids songs mp3 download
-top of dalom kids songs mp3 download
-all of dalom kids songs mp3 download
-most popular of dalom kids songs mp3 download
-most downloaded of dalom kids songs mp3 download
-most streamed of dalom kids songs mp3 download
-most liked of dalom kids songs mp3 download
-most rated of dalom kids songs mp3 download
-most played of dalom kids songs mp3 download

-

Features of Shazam

-
    -
  • Shazam can recognize any song playing around you, even if it is in a noisy environment or has background noise.
  • -
  • Shazam can provide you with detailed information about any song, such as the title, artist, album, genre, release date, lyrics, and more.
  • -
  • Shazam can connect you to various music streaming and downloading platforms, where you can listen to or download Dalom Kids-Izindunduma MP3 songs.
  • -
  • Shazam can create personalized playlists for you based on your music preferences and history.
  • -
  • Shazam can sync your Shazams across all your devices and let you access them anytime and anywhere.
  • -
-

Steps to download Dalom Kids-Izindunduma MP3 songs from Shazam

-
    -
  1. Download and install Shazam app on your device from the Google Play Store or the App Store.
  2. -
  3. Open Shazam app and tap on the Shazam button while playing Dalom Kids-Izindunduma MP3 song on another device or source.
  4. -
  5. Wait for Shazam to identify the song and show you the song details on the screen.
  6. -
  7. Tap on the streaming or downloading platform of your choice, such as Spotify, Apple Music, YouTube, Deezer, etc.
  8. -
  9. Login or sign up to your chosen platform using your account details or social media.
  10. -
  11. Select the song you want to download and tap on the download icon next to it.
  12. -
  13. Choose the quality and format of the song you want to download and confirm your choice.
  14. -
  15. Wait for the download to complete and enjoy your song offline.
  16. -
-

Conclusion

-

Dalom Kids-Izindunduma MP3 songs are some of the best examples of South African music that you can enjoy online. They are catchy, upbeat, and energetic songs that will make you want to dance and sing along. You can download Dalom Kids-Izindunduma MP3 songs online using Wynk Music or Shazam apps, which are reliable and legal platforms that offer high-quality and affordable music. You can also use these apps to discover more songs and artists from South Africa and other parts of the world. So what are you waiting for? Download Dalom Kids-Izindunduma MP3 songs online and enjoy the best of South African music.

-

FAQs

-

Here are some frequently asked questions about Dalom Kids-Izindunduma MP3 songs:

-
    -
  1. What does Izindunduma mean?
  2. -

    Izindunduma means "thunder" in Zulu, and it is also the name of one of the most popular songs by Dalom Kids, a South African music group.

    -
  3. Who are the members of Dalom Kids?
  4. -

    Dalom Kids consist of four members: Patricia Majalisa, Finky Molefe, Sipho Ndlovu, and Thandi Zulu. They are led by Dan Tshanda, who is also the founder of Splash, another successful group.

    -
  5. What genre of music do Dalom Kids play?
  6. -

    Dalom Kids play disco music, also known as bubblegum music, which is a genre that emerged in South Africa in the 1980s and features synthesizers, drum machines, and catchy melodies.

    -
  7. How can I download Dalom Kids-Izindunduma MP3 songs online?
  8. -

    You can download Dalom Kids-Izindunduma MP3 songs online using Wynk Music or Shazam apps, which are reliable and legal platforms that offer high-quality and affordable music. You can also stream and download songs from other platforms, such as Spotify, Apple Music, YouTube, Deezer, and more.

    -
  9. What are some other songs by Dalom Kids that I can listen to?
  10. -

    Some other songs by Dalom Kids that you can listen to are Jameela, Ncedani, Mathambo, Ndincendeni, and Ngizohlala Naye.

    -

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Drive Club APK 1.7.41 A Fun and Challenging Car Game for All Ages.md b/spaces/fatiXbelha/sd/Drive Club APK 1.7.41 A Fun and Challenging Car Game for All Ages.md deleted file mode 100644 index 651ca43873c57bf0e6eed1de65f87b268266e83e..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Drive Club APK 1.7.41 A Fun and Challenging Car Game for All Ages.md +++ /dev/null @@ -1,104 +0,0 @@ - -

Drive Club APK 1.7.41: A Car Game for Everyone

-

Are you a car game enthusiast who wants to improve your driving skills, feel the fun in your bones, and experience different game modes and features? If yes, then you should try Drive Club APK 1.7.41, a car driving simulator game that will satisfy your needs and expectations. In this article, we will tell you everything you need to know about this amazing game, such as what it is, what are the car models and modification options, what are the game modes, how to download and install it, and some frequently asked questions.

-

What is Drive Club APK?

-

Drive Club APK is a car driving simulator game that offers you various modes and features to enjoy. You can choose from over 50 car models, modify them according to your taste, drive them in different scenarios and challenges, and compete with other players online. You can also explore a large open world map with high graphics and realistic physics.

-

drive club apk 1.7.41


Downloadhttps://urllie.com/2uNGpe



-

Drive Club APK is also a lightweight game with a small download size, so it does not affect the performance of your phone. It takes up little space and runs smoothly on most devices, and you don't have to worry about lag or crashes while playing.

-

Drive Club APK is developed by Open World Car Games, a company that specializes in creating car games with high quality and innovation. They have also developed other popular games such as Car Parking Multiplayer, Car Simulator 2, Car Driving School Simulator, and more.

-

What are the car models and modification options in Drive Club APK?

-

One of the best features of Drive Club APK is that it has over 50 car models for you to choose from. You can find various types of cars such as sports cars, SUVs, drift cars, speed cars, electric cars, and more. You can also unlock new cars by completing missions or buying them with coins.

-

Another great feature of Drive Club APK is that it allows you to modify your cars according to your preference. You can customize your cars with dozens of modified options such as tuning, wheel replacement, car painting, glass painting, spoilers, camber, suspension, neon, coating, and more. You can create your own unique car design and show it off to other players.

-

What are the game modes in Drive Club APK?

Drive Club APK has eight different game modes for you to enjoy. Each mode has its own rules, objectives, and challenges. You can switch between the modes anytime you want. Here are the game modes in Drive Club APK:

-

Multiplayer online mode

-

This mode allows you to play with your friends or join multiplayer races with other players from around the world. You can chat with them, invite them to your garage, or challenge them to a race. You can also join different servers and rooms according to your region and language. You can also create your own room and invite other players to join.

-

Realistic car parking mode

-

This mode tests your parking skills in various scenarios and levels. You have to park the car without hitting anything or running out of time. You can choose from different camera angles and control options to make it easier for you. You can also earn coins and stars by completing the levels.

-

Breaking mode

-

This mode lets you unleash your destructive side and break the objects you encounter with your car. You can smash boxes, barrels, glass, walls, and more. You can also use different weapons such as rockets, bombs, and guns to cause more damage. You can also see the damage meter and the score on the screen.

-

Prototype mode

-

This mode gives you a different perspective and graphics of the game. You have to reach the finish line without hitting anything with different graphics such as wireframe, neon, or pixelated. You can also adjust the speed and the sensitivity of the car.

-

Check point mode

-

This mode challenges you to pass through the checkpoints before the time is up. You have to drive fast and avoid obstacles on the way. You can also use nitro boosters to increase your speed. You can also see the distance and the time on the screen.

-

Stunt mode

-

This mode tests your courage and skills in using challenging ramps suspended in the air. You have to reach the finish line by jumping over gaps, loops, bridges, and more. You have to balance your car and land safely on the ground. You can also perform flips and tricks in the air.

-

Free driving mode

-

This mode allows you to roam freely on the large open world map with high graphics and realistic physics. You can drive anywhere you want, explore different places, interact with other cars and pedestrians, or do side missions such as taxi driving, delivery, police chase, and more. You can also change the weather, time, traffic, and radio stations.

-

Drift game mode

-

This mode lets you drift as much as you want with drifting cars. You can choose from different drift tracks and cars. You have to drift around corners and curves without losing control of your car. You can also see the drift meter and the score on the screen.

-

How to download and install Drive Club APK?

-

If you want to download and install Drive Club APK on your device, you have to follow these simple steps:

-
    -
  1. Download the APK file from a trusted source such as APKCombo. You can use this link: Drive Club APK 1.7.41 Download for Android – Download Drive Club APK Latest Version - APKCombo
  2. -
  3. Enable installation from unknown sources in your device settings (on newer Android versions this is a per-app "Install unknown apps" permission granted to your browser or file manager). This will allow you to install apps from sources other than Google Play Store.
  4. -
  5. Install the APK file by tapping on it and following the instructions. (A command-line alternative for sideloading from a computer is sketched just after this list.)
  6. -
  7. Enjoy the game!
  8. -
-
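If you prefer to sideload the game from a computer instead of tapping through the installer on the phone, the same install step can be driven over USB with the Android Debug Bridge (adb). The short Python sketch below is only an illustration of that idea: it assumes adb is installed and USB debugging is enabled on the phone, and the file name drive_club.apk is a placeholder for whatever your downloaded file is actually called.

```python
import subprocess
from pathlib import Path

# Placeholder file name -- use whatever the downloaded APK is actually called.
apk_path = Path("drive_club.apk")

if not apk_path.exists():
    raise SystemExit(f"APK not found: {apk_path}")

# List connected devices first, so a failed install is easier to diagnose.
subprocess.run(["adb", "devices"], check=True)

# "adb install -r" installs the package, replacing an already installed version.
subprocess.run(["adb", "install", "-r", str(apk_path)], check=True)
print("Install command finished -- check the phone for any confirmation prompt.")
```

If adb reports a signature mismatch with a copy of the game that is already installed, uninstall that copy first and rerun the install command.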

Conclusion

-

Drive Club APK 1.7.41 is a car driving simulator game that offers you various modes and features to enjoy. You can choose from over 50 car models, modify them according to your taste, drive them in different scenarios and challenges, and compete with other players online. You can also explore a large open world map with high graphics and realistic physics.

-

If you are looking for a car game that will keep you entertained for hours, then you should download Drive Club APK 1.7.41 today. It is a lightweight game that does not affect the performance of your phone, and it is easy to download and install from a trusted source such as APKCombo.

-

We hope this article has helped you learn more about Drive Club APK 1.7.41 and how to play it on your device. If you have any questions or feedback, please feel free to leave a comment below.

-

FAQs

-
    -
  • Is Drive Club APK safe?
  • -

    Yes, Drive Club APK is safe to download and install on your device. It does not contain any viruses, malware, or spyware that can harm your device or data. However, you should always download it from a trusted source such as APKCombo to avoid any risks.

    -
  • How to update Drive Club APK?
  • -

    To update Drive Club APK, you have to download the latest version of the APK file from the same source you downloaded it from. Then, you have to uninstall the previous version of the game and install the new one. You can also check for updates within the game settings.

    -
  • How to play Drive Club APK on PC?
  • -

    To play Drive Club APK on PC, you have to use an Android emulator such as BlueStacks, NoxPlayer, or LDPlayer. These are software that allow you to run Android apps and games on your PC. You have to download and install the emulator on your PC, then download and install Drive Club APK on the emulator. Then, you can launch the game and play it on your PC.

    -
  • How to get unlimited coins in Drive Club APK?
  • -

    There is no official way to get unlimited coins in Drive Club APK. However, some websites claim to offer modded versions of the game that have unlimited coins and other features. We do not recommend using these modded versions as they may be unsafe, illegal, or incompatible with the game. The best way to get coins in Drive Club APK is to play the game and complete the missions.

    -
  • How to contact the developer of Drive Club APK?
  • -

    If you want to contact the developer of Drive Club APK, you can use their email address: openworldcargames@gmail.com. You can also visit their website: https://openworldcargames.com/ or follow them on Facebook: https://www.facebook.com/openworldcargames/. You can send them your feedback, suggestions, complaints, or inquiries.

    -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Driving School 2020 MOD APK Drive Any Car You Want in Any Scenario.md b/spaces/fatiXbelha/sd/Driving School 2020 MOD APK Drive Any Car You Want in Any Scenario.md deleted file mode 100644 index 91725c21651200035735c8a83610768e209802a3..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Driving School 2020 MOD APK Drive Any Car You Want in Any Scenario.md +++ /dev/null @@ -1,91 +0,0 @@ -
-

Driving School 2020 Mod APK: Learn to Drive Like a Pro

-

Do you want to learn how to drive a car, a bus, a truck, or even a supercar? Do you want to explore different cities and roads with realistic traffic and weather conditions? Do you want to have fun and challenge yourself with over 80 levels of driving scenarios? If you answered yes to any of these questions, then you should try Driving School 2020, a popular driving simulator game for Android devices. And if you want to enjoy the game even more, you should download Driving School 2020 Mod APK, a modified version of the game that gives you unlimited money, all vehicles unlocked, and no ads. In this article, we will tell you everything you need to know about Driving School 2020 and Driving School 2020 Mod APK, including what they are, what features they offer, how to download and install them, and how to play them.

-

driving school 2020 mod apk


Download Zip ✵✵✵ https://urllie.com/2uNwOB



-

What is Driving School 2020?

-

Driving School 2020 is a realistic driving simulator game that lets you learn how to drive different types of vehicles in various locations and situations. You can choose from over 150 vehicles, including cars, buses, trucks, motorcycles, and supercars. You can also customize your vehicles with different colors, rims, spoilers, and stickers. You can drive in different cities around the world, such as New York, Los Angeles, Rome, Berlin, Tokyo, and more. You can also experience different weather conditions, such as rain, snow, fog, and night. You can follow the traffic rules and signs or break them if you want to have some fun. You can complete over 80 levels with different driving objectives and challenges. You can also play online with other players or offline with AI drivers.

-

A realistic driving simulator game

-

Driving School 2020 is not just a game; it is also a learning tool that teaches you how to drive safely and correctly. The game has realistic physics and graphics that make you feel like you are driving a real vehicle. The game also has realistic sounds and voices that enhance the immersion. The game has a realistic dashboard that shows you the speedometer, the fuel gauge, the turn signals, the headlights, the wipers, and more. The game also has a realistic steering wheel that responds to your touch and tilt. The game also has realistic traffic and pedestrians that react to your actions. The game also has realistic damage and accidents that affect your vehicle's performance and appearance.

-

A variety of vehicles and locations

-

Driving School 2020 offers you a wide range of vehicles to choose from. You can drive cars from different brands and models, such as Ford Mustang GT, Chevrolet Camaro SS, BMW M4, Audi R8, Lamborghini Aventador, Ferrari LaFerrari, and more. You can also drive buses from different sizes and types, such as school buses, city buses, double-decker buses, and more. You can also drive trucks from different categories, such as pickup trucks, delivery trucks, fire trucks, and more. You can also drive motorcycles from different styles, such as sport bikes, cruiser bikes, dirt bikes, and more. You can also drive supercars from different eras, such as Bug otherwise locked or paid in the original game. You can also use the money to skip some levels or challenges that you find too hard or boring. You can also use the money to buy some extra features, such as a GPS, a radio, a camera, and more.

-

All vehicles unlocked

-

Driving School 2020 Mod APK gives you access to all the vehicles that are available in the game. You can drive any vehicle you want straight from the garage, without having to unlock it by completing levels or paying money. You can drive over 150 vehicles, including cars, buses, trucks, motorcycles, and supercars. You can also drive some special vehicles, such as police cars, ambulances, fire trucks, and more, as well as some futuristic vehicles, such as flying cars, hoverboards, and more.

-

No ads

-

Driving School 2020 Mod APK removes all the ads that are present in the original Driving School 2020 game. You can enjoy the game without any interruptions or distractions from annoying ads that pop up on your screen. You can also save your data and battery by not having to load or watch any ads. You can also support the original developers of Driving School 2020 by buying the premium version of the game that also removes the ads and gives you some extra benefits.

-

How to download and install Driving School 2020 Mod APK?

-

If you want to download and install Driving School 2020 Mod APK on your Android device, you need to follow these simple steps:

-


-
    -
  1. Go to a trusted website or source that hosts modded APK files and search for Driving School 2020 Mod APK. Make sure that the website or source is reputable and that the file it serves is free of viruses or malware. (One way to sanity-check a downloaded file is sketched just after this list.)
  2. -
  3. Download the Driving School 2020 Mod APK file from the website or source. The file size should be around 300 MB.
  4. -
  5. Before installing the Driving School 2020 Mod APK file, you need to allow the installation of apps from unknown sources on your Android device. To do this, go to your device settings, then Security, and enable "Unknown sources" (on newer Android versions, grant the "Install unknown apps" permission to your browser or file manager instead).
  6. -
  7. Locate the Driving School 2020 Mod APK file on your device storage and tap on it to start the installation process. Follow the instructions on your screen and wait for the installation to finish.
  8. -
  9. Once the installation is done, you can launch the Driving School 2020 Mod APK game from your app drawer or home screen and enjoy it.
  10. -
-
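As a basic sanity check on a file downloaded from a third-party site, you can compute its SHA-256 hash and compare it against a hash published by the site, if one is available. The Python sketch below is only an illustration of that idea; the file name and the expected hash are placeholders, not values taken from any real download page.

```python
import hashlib
from pathlib import Path

# Placeholder file name and expected hash -- replace with the real values.
apk_path = Path("driving_school_2020_mod.apk")
expected_sha256 = "replace-with-the-hash-published-by-the-download-site"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large APKs do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(apk_path)
print(f"SHA-256: {actual}")
if actual.lower() == expected_sha256.lower():
    print("Hash matches the published value.")
else:
    print("Hash does NOT match -- do not install this file.")
```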

Tips to play Driving School 2020 Mod APK

-

If you want to play Driving School 2020 Mod APK and have a great time, you need to follow these tips:

-
    -
  • Choose a vehicle that suits your driving style and preference. You can try different vehicles and see how they perform and handle in different situations.
  • -
  • Choose a location that matches your driving skill and interest. You can explore different cities and roads and see how they look and feel in different weather conditions.
  • -
  • Choose a mode that challenges your driving ability and knowledge. You can learn the basics of driving in driving school mode, practice your driving skills in free driving mode, complete various driving objectives and challenges in career mode, or compete with other players in online mode.
  • -
  • Follow the traffic rules and signs or break them if you want to have some fun. You can drive safely and correctly or drive recklessly and dangerously depending on your mood and preference.
  • -
  • Customize your vehicles with different colors, rims, spoilers, and stickers to make them look unique and stylish. You can also upgrade your vehicles with better engines, brakes, tires, suspensions, and more to make them perform better.
  • -
-

Conclusion

-

Driving School 2020 is a realistic driving simulator game that lets you learn how to drive different types of vehicles in various locations and situations. Driving School 2020 Mod APK is a modified version of the original game that gives you unlimited money, all vehicles unlocked, and no ads. You can download and install Driving School 2020 Mod APK from third-party websites or sources that host modded APK files. You can play Driving School 2020 Mod APK and have fun with over 150 vehicles, over 80 levels, and different modes. You can also learn the driving rules and regulations and improve your driving skills and knowledge. Driving School 2020 Mod APK is a fun and educational way to learn to drive like a pro.

-

FAQs

-

Here are some frequently asked questions about Driving School 2020 and Driving School 2020 Mod APK:

-
    -
  1. Is Driving School 2020 Mod APK safe to download and install?
  2. -

    Driving School 2020 Mod APK is generally safe to download and install, as long as you get it from a trusted website or source that does not contain any viruses or malware. However, you should always be careful when downloading and installing any modded APK file, as it may have some risks or consequences for your device or account. You should also backup your data and files before installing any modded APK file, in case something goes wrong or you want to revert to the original version.

    -
  3. Is Driving School 2020 Mod APK legal to use?
  4. -

    Driving School 2020 Mod APK is not legal to use, as it violates the terms and conditions of the original Driving School 2020 game. By using Driving School 2020 Mod APK, you are infringing the intellectual property rights of the original developers of Driving School 2020. You are also cheating and unfair to other players who play the game legitimately. You may also face some legal actions or penalties from the original developers or the authorities if you are caught using Driving School 2020 Mod APK.

    -
  5. Will Driving School 2020 Mod APK work on my device?
  6. -

    Driving School 2020 Mod APK will work on most Android devices that have a minimum Android version of 4.1 or higher and a minimum free storage space of 300 MB. However, some devices may not be compatible or supported by Driving School 2020 Mod APK, due to different hardware specifications or software configurations. You may also encounter some bugs or errors when using Driving School 2020 Mod APK, as it is not an official version of the game. You may also need to update Driving School 2020 Mod APK regularly to keep up with the latest version of the game.

    -
  7. Can I play online with Driving School 2020 Mod APK?
  8. -

    You can play online with Driving School 2020 Mod APK, but you may face some problems or issues when doing so. For example, you may not be able to connect to the online servers or join the online matches. You may also be banned or suspended from the online mode if you are detected using Driving School 2020 Mod APK. You may also have an unfair advantage or disadvantage over other players who are using the original version of the game. Therefore, it is not recommended to play online with Driving School 2020 Mod APK.

    -
  9. Can I uninstall Driving School 2020 Mod APK?
  10. -

    You can uninstall Driving School 2020 Mod APK anytime you want, just like any other app on your device. You can go to your device settings, then apps, then find and select Driving School 2020 Mod APK, then tap on uninstall. You can also delete the Driving School 2020 Mod APK file from your device storage if you want to free up some space. However, if you uninstall Driving School 2020 Mod APK, you will lose all your progress and data in the game, such as your money, vehicles, levels, and more. You will also lose access to all the features and benefits that Driving School 2020 Mod APK offers.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/data_objects/utterance.py b/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/data_objects/utterance.py deleted file mode 100644 index 0768c3420f422a7464f305b4c1fb6752c57ceda7..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/data_objects/utterance.py +++ /dev/null @@ -1,26 +0,0 @@ -import numpy as np - - -class Utterance: - def __init__(self, frames_fpath, wave_fpath): - self.frames_fpath = frames_fpath - self.wave_fpath = wave_fpath - - def get_frames(self): - return np.load(self.frames_fpath) - - def random_partial(self, n_frames): - """ - Crops the frames into a partial utterance of n_frames - - :param n_frames: The number of frames of the partial utterance - :return: the partial utterance frames and a tuple indicating the start and end of the - partial utterance in the complete utterance. - """ - frames = self.get_frames() - if frames.shape[0] == n_frames: - start = 0 - else: - start = np.random.randint(0, frames.shape[0] - n_frames) - end = start + n_frames - return frames[start:end], (start, end) \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/partial_fc.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/partial_fc.py deleted file mode 100644 index 17e2d25715d10ba446c957e1d2528b0687ed71d5..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/partial_fc.py +++ /dev/null @@ -1,222 +0,0 @@ -import logging -import os - -import torch -import torch.distributed as dist -from torch.nn import Module -from torch.nn.functional import normalize, linear -from torch.nn.parameter import Parameter - - -class PartialFC(Module): - """ - Author: {Xiang An, Yang Xiao, XuHan Zhu} in DeepGlint, - Partial FC: Training 10 Million Identities on a Single Machine - See the original paper: - https://arxiv.org/abs/2010.05222 - """ - - @torch.no_grad() - def __init__(self, rank, local_rank, world_size, batch_size, resume, - margin_softmax, num_classes, sample_rate=1.0, embedding_size=512, prefix="./"): - """ - rank: int - Unique process(GPU) ID from 0 to world_size - 1. - local_rank: int - Unique process(GPU) ID within the server from 0 to 7. - world_size: int - Number of GPU. - batch_size: int - Batch size on current rank(GPU). - resume: bool - Select whether to restore the weight of softmax. - margin_softmax: callable - A function of margin softmax, eg: cosface, arcface. - num_classes: int - The number of class center storage in current rank(CPU/GPU), usually is total_classes // world_size, - required. - sample_rate: float - The partial fc sampling rate, when the number of classes increases to more than 2 millions, Sampling - can greatly speed up training, and reduce a lot of GPU memory, default is 1.0. - embedding_size: int - The feature dimension, default is 512. - prefix: str - Path for save checkpoint, default is './'. 
- """ - super(PartialFC, self).__init__() - # - self.num_classes: int = num_classes - self.rank: int = rank - self.local_rank: int = local_rank - self.device: torch.device = torch.device("cuda:{}".format(self.local_rank)) - self.world_size: int = world_size - self.batch_size: int = batch_size - self.margin_softmax: callable = margin_softmax - self.sample_rate: float = sample_rate - self.embedding_size: int = embedding_size - self.prefix: str = prefix - self.num_local: int = num_classes // world_size + int(rank < num_classes % world_size) - self.class_start: int = num_classes // world_size * rank + min(rank, num_classes % world_size) - self.num_sample: int = int(self.sample_rate * self.num_local) - - self.weight_name = os.path.join(self.prefix, "rank_{}_softmax_weight.pt".format(self.rank)) - self.weight_mom_name = os.path.join(self.prefix, "rank_{}_softmax_weight_mom.pt".format(self.rank)) - - if resume: - try: - self.weight: torch.Tensor = torch.load(self.weight_name) - self.weight_mom: torch.Tensor = torch.load(self.weight_mom_name) - if self.weight.shape[0] != self.num_local or self.weight_mom.shape[0] != self.num_local: - raise IndexError - logging.info("softmax weight resume successfully!") - logging.info("softmax weight mom resume successfully!") - except (FileNotFoundError, KeyError, IndexError): - self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device) - self.weight_mom: torch.Tensor = torch.zeros_like(self.weight) - logging.info("softmax weight init!") - logging.info("softmax weight mom init!") - else: - self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device) - self.weight_mom: torch.Tensor = torch.zeros_like(self.weight) - logging.info("softmax weight init successfully!") - logging.info("softmax weight mom init successfully!") - self.stream: torch.cuda.Stream = torch.cuda.Stream(local_rank) - - self.index = None - if int(self.sample_rate) == 1: - self.update = lambda: 0 - self.sub_weight = Parameter(self.weight) - self.sub_weight_mom = self.weight_mom - else: - self.sub_weight = Parameter(torch.empty((0, 0)).cuda(local_rank)) - - def save_params(self): - """ Save softmax weight for each rank on prefix - """ - torch.save(self.weight.data, self.weight_name) - torch.save(self.weight_mom, self.weight_mom_name) - - @torch.no_grad() - def sample(self, total_label): - """ - Sample all positive class centers in each rank, and random select neg class centers to filling a fixed - `num_sample`. - - total_label: tensor - Label after all gather, which cross all GPUs. 
- """ - index_positive = (self.class_start <= total_label) & (total_label < self.class_start + self.num_local) - total_label[~index_positive] = -1 - total_label[index_positive] -= self.class_start - if int(self.sample_rate) != 1: - positive = torch.unique(total_label[index_positive], sorted=True) - if self.num_sample - positive.size(0) >= 0: - perm = torch.rand(size=[self.num_local], device=self.device) - perm[positive] = 2.0 - index = torch.topk(perm, k=self.num_sample)[1] - index = index.sort()[0] - else: - index = positive - self.index = index - total_label[index_positive] = torch.searchsorted(index, total_label[index_positive]) - self.sub_weight = Parameter(self.weight[index]) - self.sub_weight_mom = self.weight_mom[index] - - def forward(self, total_features, norm_weight): - """ Partial fc forward, `logits = X * sample(W)` - """ - torch.cuda.current_stream().wait_stream(self.stream) - logits = linear(total_features, norm_weight) - return logits - - @torch.no_grad() - def update(self): - """ Set updated weight and weight_mom to memory bank. - """ - self.weight_mom[self.index] = self.sub_weight_mom - self.weight[self.index] = self.sub_weight - - def prepare(self, label, optimizer): - """ - get sampled class centers for cal softmax. - - label: tensor - Label tensor on each rank. - optimizer: opt - Optimizer for partial fc, which need to get weight mom. - """ - with torch.cuda.stream(self.stream): - total_label = torch.zeros( - size=[self.batch_size * self.world_size], device=self.device, dtype=torch.long) - dist.all_gather(list(total_label.chunk(self.world_size, dim=0)), label) - self.sample(total_label) - optimizer.state.pop(optimizer.param_groups[-1]['params'][0], None) - optimizer.param_groups[-1]['params'][0] = self.sub_weight - optimizer.state[self.sub_weight]['momentum_buffer'] = self.sub_weight_mom - norm_weight = normalize(self.sub_weight) - return total_label, norm_weight - - def forward_backward(self, label, features, optimizer): - """ - Partial fc forward and backward with model parallel - - label: tensor - Label tensor on each rank(GPU) - features: tensor - Features tensor on each rank(GPU) - optimizer: optimizer - Optimizer for partial fc - - Returns: - -------- - x_grad: tensor - The gradient of features. - loss_v: tensor - Loss value for cross entropy. 
- """ - total_label, norm_weight = self.prepare(label, optimizer) - total_features = torch.zeros( - size=[self.batch_size * self.world_size, self.embedding_size], device=self.device) - dist.all_gather(list(total_features.chunk(self.world_size, dim=0)), features.data) - total_features.requires_grad = True - - logits = self.forward(total_features, norm_weight) - logits = self.margin_softmax(logits, total_label) - - with torch.no_grad(): - max_fc = torch.max(logits, dim=1, keepdim=True)[0] - dist.all_reduce(max_fc, dist.ReduceOp.MAX) - - # calculate exp(logits) and all-reduce - logits_exp = torch.exp(logits - max_fc) - logits_sum_exp = logits_exp.sum(dim=1, keepdims=True) - dist.all_reduce(logits_sum_exp, dist.ReduceOp.SUM) - - # calculate prob - logits_exp.div_(logits_sum_exp) - - # get one-hot - grad = logits_exp - index = torch.where(total_label != -1)[0] - one_hot = torch.zeros(size=[index.size()[0], grad.size()[1]], device=grad.device) - one_hot.scatter_(1, total_label[index, None], 1) - - # calculate loss - loss = torch.zeros(grad.size()[0], 1, device=grad.device) - loss[index] = grad[index].gather(1, total_label[index, None]) - dist.all_reduce(loss, dist.ReduceOp.SUM) - loss_v = loss.clamp_min_(1e-30).log_().mean() * (-1) - - # calculate grad - grad[index] -= one_hot - grad.div_(self.batch_size * self.world_size) - - logits.backward(grad) - if total_features.grad is not None: - total_features.grad.detach_() - x_grad: torch.Tensor = torch.zeros_like(features, requires_grad=True) - # feature gradient all-reduce - dist.reduce_scatter(x_grad, list(total_features.grad.chunk(self.world_size, dim=0))) - x_grad = x_grad * self.world_size - # backward backbone - return x_grad, loss_v diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/utils/utils_config.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/utils/utils_config.py deleted file mode 100644 index 0c02eaf70fc0140aca7925f621c29a496f491cae..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/utils/utils_config.py +++ /dev/null @@ -1,16 +0,0 @@ -import importlib -import os.path as osp - - -def get_config(config_file): - assert config_file.startswith('configs/'), 'config file setting must start with configs/' - temp_config_name = osp.basename(config_file) - temp_module_name = osp.splitext(temp_config_name)[0] - config = importlib.import_module("configs.base") - cfg = config.config - config = importlib.import_module("configs.%s" % temp_module_name) - job_cfg = config.config - cfg.update(job_cfg) - if cfg.output is None: - cfg.output = osp.join('work_dirs', temp_module_name) - return cfg \ No newline at end of file diff --git a/spaces/fcakyon/streamlit-image-comparison/README.md b/spaces/fcakyon/streamlit-image-comparison/README.md deleted file mode 100644 index fb2d10911a04bdebde5c086f8ff57bbc060eb2a4..0000000000000000000000000000000000000000 --- a/spaces/fcakyon/streamlit-image-comparison/README.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: Streamlit Image Comparison -emoji: 🖼️ -colorFrom: red -colorTo: yellow -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be 
either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bingo Heaven The Ultimate Free Bingo Game to Download.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bingo Heaven The Ultimate Free Bingo Game to Download.md deleted file mode 100644 index 3be5ba92802a18b178d6e574928163bca1e1375d..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bingo Heaven The Ultimate Free Bingo Game to Download.md +++ /dev/null @@ -1,158 +0,0 @@ -
-

How to Download Free Bingo Games and Have Fun

-

If you are looking for a fun and relaxing way to spend your free time, you might want to try playing bingo. Bingo is a popular game that can be enjoyed by people of all ages and backgrounds. You can play bingo online or offline, with friends or strangers, for fun or for prizes. And the best part is, you can download free bingo games and play them anytime and anywhere you want.

-

download free bingo


DOWNLOAD: https://gohhs.com/2uPqR3



-

In this article, we will tell you what bingo is and why it is popular, how to find and download free bingo games, how to play and win them, and the answers to some frequently asked questions about bingo. Let's get started!

-

What is Bingo and Why is it Popular?

-

Bingo is a game of chance where players mark off numbers on cards as they are randomly drawn by a caller. The first player to mark off a specific pattern of numbers on their card wins the game. The patterns can vary depending on the type of bingo game, but they usually include horizontal, vertical, diagonal, or four-corner lines, or shapes like letters or symbols.

-
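To make those mechanics concrete, here is a small Python sketch of a 75-ball round: it builds a 5x5 card using the usual B-I-N-G-O column ranges with a free center square, draws numbers at random the way a caller would, and stops as soon as one horizontal line on the card is complete. It is only an illustration of the rules described above, not code taken from any particular bingo app.

```python
import random

COLUMN_RANGES = {"B": range(1, 16), "I": range(16, 31), "N": range(31, 46),
                 "G": range(46, 61), "O": range(61, 76)}

def make_card():
    """Build a 5x5 card: each column takes 5 numbers from its range; the center is free."""
    columns = [random.sample(list(rng), 5) for rng in COLUMN_RANGES.values()]
    card = [[columns[col][row] for col in range(5)] for row in range(5)]
    card[2][2] = 0  # 0 marks the free center square
    return card

def has_bingo(card, called):
    """Check whether any horizontal line is fully marked."""
    return any(all(num == 0 or num in called for num in row) for row in card)

card = make_card()
called = set()
caller_order = random.sample(range(1, 76), 75)  # the caller draws every ball exactly once

for number in caller_order:
    called.add(number)
    if has_bingo(card, called):
        print(f"Bingo after {len(called)} calls!")
        break
```

Real bingo apps work the same way in principle, with a certified random number generator standing in for the simple random module used here.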

The History of Bingo

-

Bingo has a long and interesting history that dates back to the 16th century. It originated in Italy as a lottery game called "Il Gioco del Lotto d'Italia". It then spread to France, where it was called "Le Lotto", and then to Germany, where it was used as an educational tool for teaching math and spelling. In the 1920s, bingo arrived in America, where it was called "Beano" because players used beans to mark their cards. A toy salesman named Edwin Lowe changed the name to "Bingo" after he heard someone accidentally yell "Bingo" instead of "Beano" when they won. He also popularized the game by creating standardized cards and rules, and marketing them across the country.

-

The Benefits of Playing Bingo

-

Bingo is not only fun, but also beneficial for your health and well-being. Some of the benefits of playing bingo are:

-

-
    -
  • It improves your cognitive skills, such as memory, concentration, and attention.
  • -
  • It enhances your social skills, such as communication, cooperation, and friendship.
  • -
  • It reduces your stress levels, as it releases endorphins and dopamine in your brain.
  • -
  • It boosts your mood, as it makes you feel happy and excited.
  • -
  • It increases your chances of winning prizes, such as cash, gift cards, or merchandise.
  • -
-

How to Find and Download Free Bingo Games

-

There are many ways to find and download free bingo games online. You can use search engines like Bing or Google to look for websites or apps that offer free bingo games. You can also browse through online platforms like Google Play Store or Apple App Store to find free bingo games for your mobile devices. You can also check out online reviews or ratings from other players to see which free bingo games are the best.

-

The Types of Bingo Games Available Online

-

There are many types of bingo games available online that cater to different preferences and tastes. Some of the most common types of bingo games are:

-
    -
  • 75-ball bingo: This is the most popular type of bingo game in America. It uses a 5x5 card with numbers from 1 to 75. The center square is usually marked as a free space.
  • -
• 90-ball bingo: This is the most popular type of bingo game in Europe. It uses a 9x3 card with numbers from 1 to 90. Each card has three horizontal lines of five numbers each, with the remaining four spaces in each line left blank. The numbers are grouped by column rather than by line: the first column holds numbers from 1 to 9, the second from 10 to 19, and so on up to the last column, which holds 80 to 90.
  • -
  • 80-ball bingo: This is a type of bingo game that uses a 4x4 card with numbers from 1 to 80. Each column has a different color: red, yellow, blue, and silver.
  • -
  • 30-ball bingo: This is a type of bingo game that uses a 3x3 card with numbers from 1 to 30. It is also known as speed bingo, as it is faster and more exciting than other types of bingo games.
  • -
-

The Features to Look for in a Good Bingo Game

-

When you are looking for a good bingo game to download, you should consider the following features:

-
    -
  • The quality of the graphics and sound effects: You want a bingo game that has clear and colorful graphics and realistic and fun sound effects.
  • -
  • The variety of the themes and rooms: You want a bingo game that has different themes and rooms to suit your mood and preferences. For example, you might want to play in a tropical beach, a haunted house, or a fairy tale castle.
  • -
  • The ease of the gameplay and navigation: You want a bingo game that has simple and intuitive gameplay and navigation. You should be able to easily mark your cards, chat with other players, and access the settings and features.
  • -
  • The security and reliability of the game: You want a bingo game that has a secure and reliable connection and does not crash or freeze. You should also be able to trust that the game is fair and random.
  • -
  • The availability of the customer support and feedback: You want a bingo game that has a responsive and helpful customer support team that can answer your questions and solve your issues. You should also be able to give your feedback and suggestions to improve the game.
  • -
-

The Best Free Bingo Games to Download

-

Based on our research and reviews, here are some of the best free bingo games that you can download:

- - - - - - - -
| Name | Description | Download Link |
| --- | --- | --- |
| Bingo Blitz | A popular bingo game that lets you travel around the world and collect items and souvenirs. You can also play with your friends or join a community of bingo lovers. | [Bingo Blitz] |
| Bingo Bash | A fun bingo game that features various mini-games and power-ups. You can also win coins, chips, and rewards by playing daily quests and challenges. | [Bingo Bash] |
| Bingo Pop | A classic bingo game that has stunning graphics and animations. You can also enjoy bonus games, free spins, jackpots, and special events. | [Bingo Pop] |
| Bingo Showdown | A western-themed bingo game that has exciting gameplay and features. You can also use power-ups, collect tickets, and compete with other players in tournaments. | [Bingo Showdown] |
| Bingo Party | A festive bingo game that has over 30 themed rooms and 200 cards. You can also play with up to 10 friends or join millions of players online. | [Bingo Party] |
-

How to Play and Win Free Bingo Games

-

Playing free bingo games is easy and fun. Here are some steps to follow:

-
    -
  1. Download the free bingo game of your choice from the links above or from other sources.
  2. -
  3. Open the game and create an account or log in with your existing account.
  4. -
  5. Select the type of bingo game you want to play and choose a room or theme.
  6. -
  7. Purchase or get some cards for the game. You can usually get some free cards every day or by watching ads or completing tasks.
  8. -
  9. Wait for the caller to announce the numbers and mark them on your cards. You can usually use the auto-daub feature to mark them automatically.
  10. -
  11. If you complete a pattern on your card before anyone else, tap the "Bingo" button to claim your win.
  12. -
  13. Collect your prizes and rewards for winning the game. You can usually get coins, chips, tickets, power-ups, or other items.
  14. -
  15. Repeat the process for more fun and excitement.
  16. -
-

The Rules and Tips for Playing Bingo

-

While playing free bingo games is simple, there are some rules and tips that you should follow to make the most of your bingo experience. Here are some of them:

-
    -
  • Read the rules and instructions of the game before you start playing. Different bingo games may have different rules and features, so make sure you understand them well.
  • -
  • Choose the number of cards that suits your budget and skill level. Playing with more cards can increase your chances of winning, but it can also make it harder to keep track of them.
  • -
  • Pay attention to the caller and the numbers on your cards. Don't get distracted by other things or miss any numbers that are called.
  • -
  • Use the chat feature to communicate and socialize with other players. You can make new friends, share tips, or congratulate each other on your wins.
  • -
  • Be respectful and courteous to other players and the game staff. Don't use abusive or offensive language, spam the chat, or cheat in any way.
  • -
-

The Strategies and Tricks for Winning Bingo

-

While bingo is mostly a game of luck, there are some strategies and tricks that you can use to improve your odds of winning. Here are some of them:

-
    -
  • Play at off-peak times or in less crowded rooms. This can reduce the competition and increase your chances of being the first to complete a pattern.
  • -
  • Play with different types of cards or patterns. This can diversify your options and give you more opportunities to win.
  • -
  • Use power-ups or boosters wisely. These are special features that can help you mark more numbers, get extra cards, or multiply your prizes. You can usually get them for free or buy them with coins or real money.
  • -
  • Take advantage of the promotions and offers. These are special events or deals that can give you more rewards, bonuses, or discounts. You can usually find them on the game's website, social media, or email.
  • -
  • Manage your bankroll and time. Don't spend more than you can afford or play for longer than you should. Set a limit for yourself and stick to it.
  • -
-

The Rewards and Bonuses for Playing Bingo

-

Playing free bingo games can also give you various rewards and bonuses that can make your bingo experience more enjoyable and rewarding. Some of the rewards and bonuses that you can get are:

-
    -
  • Cash prizes: These are real money that you can win by playing bingo games. You can usually withdraw them to your bank account or use them to buy more cards or power-ups.
  • -
  • Gift cards: These are vouchers that you can use to buy products or services from various online or offline stores.
  • -
  • Merchandise: These are physical items that you can get as prizes for playing bingo games. They can include t-shirts, mugs, hats, bags, or other accessories.
  • -
  • Loyalty points: These are points that you can earn by playing bingo games regularly. You can redeem them for various rewards, such as free cards, power-ups, or cash prizes.
  • -
  • VIP status: This is a special status that you can achieve by playing bingo games frequently. It can give you access to exclusive rooms, features, and benefits, such as higher prizes, faster withdrawals, or personal support.
  • -
-

Conclusion

-

Bingo is a fun and relaxing game that can be played by anyone, anywhere, anytime. You can download free bingo games online and enjoy them on your computer or mobile device. You can also improve your skills, win prizes, and make friends by playing bingo games online. So what are you waiting for? Download your favorite free bingo game today and have a blast!

-

FAQs

-

Here are some frequently asked questions about bingo:

-
    -
  1. Q: Is online bingo rigged?
  2. -
  3. A: No, online bingo is not rigged. Online bingo games use random number generators (RNGs) to ensure that the numbers are drawn fairly and randomly. Online bingo games are also regulated by authorities and audited by independent agencies to ensure their fairness and security.
  4. -
  5. Q: How old do I have to be to play online bingo?
  6. -
  7. A: The legal age to play online bingo varies depending on the country or state where you live. In general, you have to be at least 18 years old to play online bingo in most places. However, some places may have a higher or lower age limit, so make sure you check the laws and regulations before you play online bingo.
  8. -
  9. Q: How do I claim my prizes from online bingo?
  10. -
  11. A: The process of claiming your prizes from online bingo depends on the game and the prize that you won. In general, you have to verify your identity and provide some details and information to the game provider. You can usually choose the method of receiving your prize, such as bank transfer, PayPal, or check. You may also have to pay some taxes or fees depending on the amount and type of your prize.
  12. -
  13. Q: Can I play online bingo with my friends?
  14. -
  15. A: Yes, you can play online bingo with your friends. Most online bingo games have a chat feature that allows you to communicate and socialize with other players. You can also invite your friends to join the same game or room as you, or create a private game or room for you and your friends only.
  16. -
  17. Q: What are the best tips for playing online bingo?
  18. -
  19. A: Some of the best tips for playing online bingo are:
  20. -
      -
    • Play responsibly and have fun. Don't gamble more than you can afford or play for longer than you should.
    • -
    • Do some research and compare different online bingo games and providers. Choose the ones that suit your preferences and needs.
    • -
    • Take advantage of the freebies and bonuses that online bingo games offer. They can help you play more games, win more prizes, and save more money.
    • -
    • Learn the rules and strategies of online bingo. They can help you improve your skills, increase your odds, and avoid mistakes.
    • -
    • Join a community of online bingo players. They can help you learn more, share tips, and make friends.
    • -
    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bloons TD 6 for PC Enjoy Endless Hours of Strategy Gaming.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bloons TD 6 for PC Enjoy Endless Hours of Strategy Gaming.md deleted file mode 100644 index 677fa85e38d6851c8b915a63ccdc729291d4cc56..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bloons TD 6 for PC Enjoy Endless Hours of Strategy Gaming.md +++ /dev/null @@ -1,120 +0,0 @@ - -

Download Bloons TD 6 for PC: A Guide

-

If you are a fan of tower defense games, you might have heard of Bloons TD 6, a smash hit game that has millions of players worldwide. But did you know that you can also play this game on your PC? In this article, we will tell you what Bloons TD 6 is, why you should play it on PC, how to download it, and some tips and tricks for playing it. Let's get started!

-

download bloons td 6 para pc


Download: https://gohhs.com/2uPmmd



-

What is Bloons TD 6?

-

Bloons TD 6 is a strategy game developed by Ninja Kiwi, a New Zealand-based game studio. It is the sixth installment in the Bloons Tower Defense series, which has been around for over a decade. The game involves placing various monkey towers and heroes along a path to pop invading balloons (called bloons) before they reach the end. The game features over 20 monkey towers, each with three upgrade paths and unique abilities, over 10 heroes with special skills and upgrades, over 60 maps with different themes and challenges, and various modes and events to keep you entertained. The game also supports online co-op with up to four players, as well as offline play when your WiFi doesn't work.

-

Why play Bloons TD 6 on PC?

-

Better graphics and performance

-

One of the reasons to play Bloons TD 6 on PC is that you can enjoy better graphics and performance than on mobile devices. The game has been optimized for PC, with high-resolution textures, smooth animations, and fast loading times. You can also adjust the graphics settings to suit your preferences and system specifications. Playing on PC also means that you don't have to worry about battery drain, overheating, or interruptions from phone calls or notifications.

-

More control options

-

Another reason to play Bloons TD 6 on PC is that you can have more control options than on mobile devices. You can use your mouse and keyboard to place towers, select heroes, activate abilities, and navigate menus. You can also customize your keybindings and mouse sensitivity to suit your style. Playing on PC also gives you access to hotkeys and shortcuts that can make your gameplay more efficient and convenient.

-

Bigger screen and sound

-

A third reason to play Bloons TD 6 on PC is that you can experience the game on a bigger screen and sound system than on mobile devices. You can play the game on your monitor or TV, with full-screen mode or windowed mode. You can also enjoy the game's colorful graphics, cute animations, and catchy music on your speakers or headphones. Playing on PC can enhance your immersion and enjoyment of the game.

-

How to download Bloons TD 6 for PC?

-

Option 1: Steam

-

One of the ways to download Bloons TD 6 for PC is to use Steam, a popular digital distribution platform that offers thousands of games for various genres and platforms. To download Bloons TD 6 from Steam, you need to follow these steps:

-

-
    -
  1. Create a Steam account if you don't have one already. You can do this by visiting the Steam website and clicking on the "Join Steam" button. You will need to provide your email address, password, and username.
  2. -
  3. Download and install the Steam client on your PC. You can do this by visiting the Steam website and clicking on the "Install Steam" button. You will need to run the installer and follow the instructions.
  4. -
  5. Launch the Steam client and log in with your Steam account. You will see the Steam interface, which allows you to browse, buy, and play games.
  6. -
  7. Search for Bloons TD 6 in the Steam store. You can do this by typing "Bloons TD 6" in the search bar at the top right corner of the interface. You will see the game's page, which shows its price, description, screenshots, videos, reviews, and system requirements.
  8. -
  9. Buy and download Bloons TD 6. You can do this by clicking on the "Add to Cart" button and proceeding to checkout. You will need to provide your payment method and confirm your purchase. Once you have bought the game, you can download it by clicking on the "Library" tab and selecting Bloons TD 6 from your list of games. You will see the game's status, which shows its download progress, size, and estimated time.
  10. -
  11. Play Bloons TD 6 on your PC. You can do this by clicking on the "Play" button once the game has finished downloading. You will see the game's launcher, which allows you to adjust the graphics settings, sound settings, and control settings. Once you are ready, you can click on the "Play" button again and enjoy the game. (A small sketch after this list shows how Steam's URL scheme can trigger the same install and launch steps.)
  12. -
-
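Once the game is in your Steam library, the install and launch steps above can also be triggered from outside the client through Steam's steam:// URL scheme. The tiny Python sketch below shows the idea; the APP_ID value is a placeholder that you would replace with the numeric ID visible in the game's own Steam store page URL, and the scheme only does anything on a machine where the Steam client is installed.

```python
import webbrowser

# Placeholder -- replace with the numeric ID from the game's Steam store page URL.
APP_ID = "000000"

# Asks the locally installed Steam client to start downloading the game.
webbrowser.open(f"steam://install/{APP_ID}")

# Once installed, the same scheme can launch the game directly:
# webbrowser.open(f"steam://rungameid/{APP_ID}")
```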

Option 2: Microsoft Store

-

Another way to download Bloons TD 6 for PC is to use Microsoft Store, an official app store for Windows devices that offers various apps and games for different categories and purposes. To download Bloons TD 6 from Microsoft Store, you need to follow these steps:

-
  1. Create a Microsoft account if you don't have one already. You can do this by visiting the Microsoft website and clicking on the "Sign in" button. You will need to provide your email address, phone number, or Skype name, and create a password.
  2. Open Microsoft Store on your PC. You can do this by clicking on the Start menu and selecting Microsoft Store from the list of apps. You will see the Microsoft Store interface, which allows you to browse, buy, and download apps and games.
  3. Search for Bloons TD 6 in Microsoft Store. You can do this by typing "Bloons TD 6" in the search bar at the top right corner of the interface. You will see the game's page, which shows its price, description, screenshots, videos, reviews, and system requirements.
  4. Buy and download Bloons TD 6. You can do this by clicking on the "Buy" button and proceeding to checkout. You will need to provide your payment method and confirm your purchase. Once you have bought the game, you can download it by clicking on the "Install" button. You will see the game's status, which shows its download progress, size, and estimated time.
  5. Play Bloons TD 6 on your PC. You can do this by clicking on the Start menu and selecting Bloons TD 6 from the list of apps. You will see the game's launcher, which allows you to adjust the graphics settings, sound settings, and control settings. Once you are ready, you can click on the "Play" button and enjoy the game.
-

Option 3: BlueStacks

-

A third way to download Bloons TD 6 for PC is to use BlueStacks, an Android emulator that lets you play mobile games on your PC. BlueStacks simulates an Android device, giving you access to the Google Play Store and other Android apps and games. To download Bloons TD 6 with BlueStacks, follow these steps:

-
  1. Download and install BlueStacks on your PC. You can do this by visiting the BlueStacks website and clicking on the "Download BlueStacks" button. You will need to run the installer and follow the instructions.
  2. Launch BlueStacks and log in with your Google account. You will see the BlueStacks interface, which looks like an Android device. You will need to provide your Google account credentials to access the Google Play Store and other Android services.
  3. Search for Bloons TD 6 in the Google Play Store. You can do this by clicking on the Google Play Store icon on the BlueStacks home screen and typing "Bloons TD 6" in the search bar. You will see the game's page, which shows its price, description, screenshots, videos, reviews, and system requirements.
  4. Buy and download Bloons TD 6. You can do this by clicking on the buy button showing the game's price and confirming your purchase with your payment method. Once the purchase is complete, the game downloads and installs automatically, and an "Open" button appears when it is ready.
  5. Play Bloons TD 6 on your PC. You can do this by clicking on the Bloons TD 6 icon on the BlueStacks home screen. You will see the game's launcher, which allows you to adjust the graphics settings, sound settings, and control settings. Once you are ready, you can click on the "Play" button and enjoy the game.
-

Tips and tricks for playing Bloons TD 6 on PC

-

Now that you know how to download Bloons TD 6 for PC, you might want to know some tips and tricks for playing it. Here are some of them:

-
  - Use different monkey towers and heroes for different situations. Each monkey tower and hero has its own strengths and weaknesses, so you need to experiment with different combinations and strategies to find what works best for you.
  - Upgrade your monkey towers and heroes wisely. Upgrading your monkey towers and heroes can make them more powerful and versatile, but it also costs money and sometimes sacrifices other abilities. You need to balance your budget and your needs, and plan ahead for future rounds.
  - Use the sandbox mode to test your skills and ideas. The sandbox mode lets you play any map with unlimited money and lives, and customize the bloon types and speed. You can use this mode to practice your strategies, try new things, or just have fun.
  - Play online co-op with your friends or other players. Online co-op lets you team up with up to three other players and share money, lives, towers, and heroes. You can communicate with your teammates using chat or voice chat, and work together to pop all the bloons.
  - Complete daily challenges and events to earn rewards. Daily challenges and events are special modes that have different rules and objectives. They can be easy or hard, fun or frustrating, but they always offer rewards such as monkey money, trophies, insta-monkeys, or skins.
-

Conclusion

-

Bloons TD 6 is a great game, and playing it on your PC can give you an even better gaming experience. You can download it from Steam, Microsoft Store, or BlueStacks, depending on your preference and convenience. You can also use some tips and tricks to improve your gameplay and have more fun. If you are looking for a challenging and addictive strategy game that will keep you entertained for hours, you should definitely try Bloons TD 6 on PC.

-

FAQs

-

Here are some frequently asked questions about Bloons TD 6 on PC:

-
  1. Is Bloons TD 6 free on PC?
     No, Bloons TD 6 is not free on PC. It costs $9.99 on Steam and Microsoft Store, and $4.99 on BlueStacks.
  2. Can I play Bloons TD 6 offline on PC?
     Yes, you can play Bloons TD 6 offline on PC. However, you will need an internet connection to download the game, update it, access online features such as co-op and events, and sync your progress across devices.
  3. Can I transfer my progress from mobile to PC?
     Yes, you can transfer your progress from mobile to PC using Ninja Kiwi's cloud save feature. You need to create a Ninja Kiwi account and link it to your game on both devices. Then you can sync your progress by clicking on the cloud icon in the game's settings menu.
  4. What are the system requirements for Bloons TD 6 on PC?
     The minimum system requirements for Bloons TD 6 on PC are listed below (a small, optional checker sketch follows this list):
     - OS: Windows 7 (64-bit)
     - Processor: 1.5 GHz or better (64-bit or ARM64)
     - Memory: 4096 MB RAM
     - Graphics: OpenGL 2.0 compatible (ATI, Nvidia, or Intel HD)
     - Storage: 2048 MB available space
     - Sound Card: Windows-compatible sound card
  5. Where can I get more information and support for Bloons TD 6 on PC?
     You can get more information and support for Bloons TD 6 on PC by visiting the following websites:
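If you want a quick way to compare your machine against the minimum specs above, here is a minimal, illustrative Python sketch. It is not part of the game or any store; it assumes Python 3 with the third-party psutil package installed (pip install psutil), the drive letter and function name are placeholders, and it only checks the things that are easy to read automatically (64-bit CPU, RAM, free disk). CPU speed and OpenGL support still need to be checked by hand.

```python
# Rough, optional sketch: compare this PC against the minimum specs in the FAQ above.
# Assumes Python 3 and the third-party psutil package; thresholds mirror the FAQ entry.
import platform
import shutil

import psutil

MIN_RAM_MB = 4096        # "Memory: 4096 MB RAM"
MIN_FREE_DISK_MB = 2048  # "Storage: 2048 MB available space"


def meets_minimum_specs(install_drive: str = "C:\\") -> bool:
    """Print a pass/fail line per requirement and return True if all pass."""
    checks = {
        "64-bit OS/CPU": platform.machine().endswith("64"),
        "4096 MB RAM": psutil.virtual_memory().total // (1024 * 1024) >= MIN_RAM_MB,
        "2048 MB free disk": shutil.disk_usage(install_drive).free // (1024 * 1024) >= MIN_FREE_DISK_MB,
    }
    for name, ok in checks.items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
    return all(checks.values())


if __name__ == "__main__":
    print("Meets minimum specs:", meets_minimum_specs())
```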
-

I hope this article has helped you learn how to download Bloons TD 6 for PC and enjoy this amazing game. If you have any questions or comments, feel free to leave them below. Thanks for reading!

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/fffiloni/AnimateDiff-Image-Init/download_bashscripts/4-MajicMix.sh b/spaces/fffiloni/AnimateDiff-Image-Init/download_bashscripts/4-MajicMix.sh deleted file mode 100644 index b287167c5ba8e594d6f183017aa9a231d4ae63b6..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/AnimateDiff-Image-Init/download_bashscripts/4-MajicMix.sh +++ /dev/null @@ -1,2 +0,0 @@ -#!/bin/bash -wget https://civitai.com/api/download/models/79068 -P models/DreamBooth_LoRA/ --content-disposition --no-check-certificate \ No newline at end of file diff --git a/spaces/fffiloni/Video-Matting-Anything/segment-anything/segment_anything/utils/transforms.py b/spaces/fffiloni/Video-Matting-Anything/segment-anything/segment_anything/utils/transforms.py deleted file mode 100644 index c08ba1e3db751f3a5483a003be38c69c2cf2df85..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Video-Matting-Anything/segment-anything/segment_anything/utils/transforms.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from torch.nn import functional as F -from torchvision.transforms.functional import resize, to_pil_image # type: ignore - -from copy import deepcopy -from typing import Tuple - - -class ResizeLongestSide: - """ - Resizes images to the longest side 'target_length', as well as provides - methods for resizing coordinates and boxes. Provides methods for - transforming both numpy array and batched torch tensors. - """ - - def __init__(self, target_length: int) -> None: - self.target_length = target_length - - def apply_image(self, image: np.ndarray) -> np.ndarray: - """ - Expects a numpy array with shape HxWxC in uint8 format. - """ - target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length) - return np.array(resize(to_pil_image(image), target_size)) - - def apply_coords(self, coords: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array of length 2 in the final dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).astype(float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes(self, boxes: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array shape Bx4. Requires the original image size - in (H, W) format. - """ - boxes = self.apply_coords(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - def apply_image_torch(self, image: torch.Tensor) -> torch.Tensor: - """ - Expects batched images with shape BxCxHxW and float format. This - transformation may not exactly match apply_image. apply_image is - the transformation expected by the model. - """ - # Expects an image in BCHW format. May not exactly match apply_image. - target_size = self.get_preprocess_shape(image.shape[2], image.shape[3], self.target_length) - return F.interpolate( - image, target_size, mode="bilinear", align_corners=False, antialias=True - ) - - def apply_coords_torch( - self, coords: torch.Tensor, original_size: Tuple[int, ...] 
- ) -> torch.Tensor: - """ - Expects a torch tensor with length 2 in the last dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).to(torch.float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes_torch( - self, boxes: torch.Tensor, original_size: Tuple[int, ...] - ) -> torch.Tensor: - """ - Expects a torch tensor with shape Bx4. Requires the original image - size in (H, W) format. - """ - boxes = self.apply_coords_torch(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - @staticmethod - def get_preprocess_shape(oldh: int, oldw: int, long_side_length: int) -> Tuple[int, int]: - """ - Compute the output size given input size and target long side length. - """ - scale = long_side_length * 1.0 / max(oldh, oldw) - newh, neww = oldh * scale, oldw * scale - neww = int(neww + 0.5) - newh = int(newh + 0.5) - return (newh, neww) diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/hifigan/models.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/hifigan/models.py deleted file mode 100644 index c4382cc39de0463f9b7c0f33f037dbc233e7cb36..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/hifigan/models.py +++ /dev/null @@ -1,174 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import Conv1d, ConvTranspose1d -from torch.nn.utils import weight_norm, remove_weight_norm - -LRELU_SLOPE = 0.1 - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -class ResBlock(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock, self).__init__() - self.h = h - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class Generator(torch.nn.Module): - def __init__(self, h): - 
super(Generator, self).__init__() - self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.conv_pre = weight_norm( - Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3) - ) - resblock = ResBlock - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - h.upsample_initial_channel // (2**i), - h.upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes) - ): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - # print("Removing weight norm...") - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/latent_diffusion/ddpm.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/latent_diffusion/ddpm.py deleted file mode 100644 index ffca031c27d413698adee5a58547b7d0ea4069c3..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/latent_diffusion/ddpm.py +++ /dev/null @@ -1,441 +0,0 @@ -""" -wild mixture of -https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -https://github.com/CompVis/taming-transformers --- merci -""" -import sys -import os - -import torch -import torch.nn as nn -import numpy as np -from contextlib import contextmanager -from functools import partial -from tqdm import tqdm - -from audioldm.utils import exists, default, count_params, instantiate_from_config -from audioldm.latent_diffusion.ema import LitEma -from audioldm.latent_diffusion.util import ( - make_beta_schedule, - extract_into_tensor, - noise_like, -) -import soundfile as sf -import os - - -__conditioning_keys__ = {"concat": "c_concat", "crossattn": "c_crossattn", "adm": "y"} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -def uniform_on_device(r1, r2, shape, device): - return (r1 - r2) * torch.rand(*shape, device=device) + r2 - - -class DiffusionWrapper(nn.Module): - def __init__(self, diff_model_config, conditioning_key): - super().__init__() - self.diffusion_model = instantiate_from_config(diff_model_config) - self.conditioning_key = conditioning_key - assert self.conditioning_key in [ - None, - "concat", - 
"crossattn", - "hybrid", - "adm", - "film", - ] - - def forward( - self, x, t, c_concat: list = None, c_crossattn: list = None, c_film: list = None - ): - x = x.contiguous() - t = t.contiguous() - - if self.conditioning_key is None: - out = self.diffusion_model(x, t) - elif self.conditioning_key == "concat": - xc = torch.cat([x] + c_concat, dim=1) - out = self.diffusion_model(xc, t) - elif self.conditioning_key == "crossattn": - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(x, t, context=cc) - elif self.conditioning_key == "hybrid": - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc) - elif ( - self.conditioning_key == "film" - ): # The condition is assumed to be a global token, which wil pass through a linear layer and added with the time embedding for the FILM - cc = c_film[0].squeeze(1) # only has one token - out = self.diffusion_model(x, t, y=cc) - elif self.conditioning_key == "adm": - cc = c_crossattn[0] - out = self.diffusion_model(x, t, y=cc) - else: - raise NotImplementedError() - - return out - - -class DDPM(nn.Module): - # classic DDPM with Gaussian diffusion, in image space - def __init__( - self, - unet_config, - timesteps=1000, - beta_schedule="linear", - loss_type="l2", - ckpt_path=None, - ignore_keys=[], - load_only_unet=False, - monitor="val/loss", - use_ema=True, - first_stage_key="image", - latent_t_size=256, - latent_f_size=16, - channels=3, - log_every_t=100, - clip_denoised=True, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - given_betas=None, - original_elbo_weight=0.0, - v_posterior=0.0, # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta - l_simple_weight=1.0, - conditioning_key=None, - parameterization="eps", # all assuming fixed variance schedules - scheduler_config=None, - use_positional_encodings=False, - learn_logvar=False, - logvar_init=0.0, - ): - super().__init__() - assert parameterization in [ - "eps", - "x0", - ], 'currently only supporting "eps" and "x0"' - self.parameterization = parameterization - self.state = None - # print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") - self.cond_stage_model = None - self.clip_denoised = clip_denoised - self.log_every_t = log_every_t - self.first_stage_key = first_stage_key - - self.latent_t_size = latent_t_size - self.latent_f_size = latent_f_size - - self.channels = channels - self.use_positional_encodings = use_positional_encodings - self.model = DiffusionWrapper(unet_config, conditioning_key) - count_params(self.model, verbose=True) - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self.model) - # print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - self.use_scheduler = scheduler_config is not None - if self.use_scheduler: - self.scheduler_config = scheduler_config - - self.v_posterior = v_posterior - self.original_elbo_weight = original_elbo_weight - self.l_simple_weight = l_simple_weight - - if monitor is not None: - self.monitor = monitor - - self.register_schedule( - given_betas=given_betas, - beta_schedule=beta_schedule, - timesteps=timesteps, - linear_start=linear_start, - linear_end=linear_end, - cosine_s=cosine_s, - ) - - self.loss_type = loss_type - - self.learn_logvar = learn_logvar - self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) - if self.learn_logvar: - self.logvar = nn.Parameter(self.logvar, requires_grad=True) - else: - self.logvar = nn.Parameter(self.logvar, 
requires_grad=False) - - self.logger_save_dir = None - self.logger_project = None - self.logger_version = None - self.label_indices_total = None - # To avoid the system cannot find metric value for checkpoint - self.metrics_buffer = { - "val/kullback_leibler_divergence_sigmoid": 15.0, - "val/kullback_leibler_divergence_softmax": 10.0, - "val/psnr": 0.0, - "val/ssim": 0.0, - "val/inception_score_mean": 1.0, - "val/inception_score_std": 0.0, - "val/kernel_inception_distance_mean": 0.0, - "val/kernel_inception_distance_std": 0.0, - "val/frechet_inception_distance": 133.0, - "val/frechet_audio_distance": 32.0, - } - self.initial_learning_rate = None - - def get_log_dir(self): - if ( - self.logger_save_dir is None - and self.logger_project is None - and self.logger_version is None - ): - return os.path.join( - self.logger.save_dir, self.logger._project, self.logger.version - ) - else: - return os.path.join( - self.logger_save_dir, self.logger_project, self.logger_version - ) - - def set_log_dir(self, save_dir, project, version): - self.logger_save_dir = save_dir - self.logger_project = project - self.logger_version = version - - def register_schedule( - self, - given_betas=None, - beta_schedule="linear", - timesteps=1000, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - ): - if exists(given_betas): - betas = given_betas - else: - betas = make_beta_schedule( - beta_schedule, - timesteps, - linear_start=linear_start, - linear_end=linear_end, - cosine_s=cosine_s, - ) - alphas = 1.0 - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1.0, alphas_cumprod[:-1]) - - (timesteps,) = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert ( - alphas_cumprod.shape[0] == self.num_timesteps - ), "alphas have to be defined for each timestep" - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer("betas", to_torch(betas)) - self.register_buffer("alphas_cumprod", to_torch(alphas_cumprod)) - self.register_buffer("alphas_cumprod_prev", to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer("sqrt_alphas_cumprod", to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer( - "sqrt_one_minus_alphas_cumprod", to_torch(np.sqrt(1.0 - alphas_cumprod)) - ) - self.register_buffer( - "log_one_minus_alphas_cumprod", to_torch(np.log(1.0 - alphas_cumprod)) - ) - self.register_buffer( - "sqrt_recip_alphas_cumprod", to_torch(np.sqrt(1.0 / alphas_cumprod)) - ) - self.register_buffer( - "sqrt_recipm1_alphas_cumprod", to_torch(np.sqrt(1.0 / alphas_cumprod - 1)) - ) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = (1 - self.v_posterior) * betas * ( - 1.0 - alphas_cumprod_prev - ) / (1.0 - alphas_cumprod) + self.v_posterior * betas - # above: equal to 1. / (1. / (1. 
- alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer("posterior_variance", to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer( - "posterior_log_variance_clipped", - to_torch(np.log(np.maximum(posterior_variance, 1e-20))), - ) - self.register_buffer( - "posterior_mean_coef1", - to_torch(betas * np.sqrt(alphas_cumprod_prev) / (1.0 - alphas_cumprod)), - ) - self.register_buffer( - "posterior_mean_coef2", - to_torch( - (1.0 - alphas_cumprod_prev) * np.sqrt(alphas) / (1.0 - alphas_cumprod) - ), - ) - - if self.parameterization == "eps": - lvlb_weights = self.betas**2 / ( - 2 - * self.posterior_variance - * to_torch(alphas) - * (1 - self.alphas_cumprod) - ) - elif self.parameterization == "x0": - lvlb_weights = ( - 0.5 - * np.sqrt(torch.Tensor(alphas_cumprod)) - / (2.0 * 1 - torch.Tensor(alphas_cumprod)) - ) - else: - raise NotImplementedError("mu not supported") - # TODO how to choose this term - lvlb_weights[0] = lvlb_weights[1] - self.register_buffer("lvlb_weights", lvlb_weights, persistent=False) - assert not torch.isnan(self.lvlb_weights).all() - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.model.parameters()) - self.model_ema.copy_to(self.model) - if context is not None: - # print(f"{context}: Switched to EMA weights") - pass - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.model.parameters()) - if context is not None: - # print(f"{context}: Restored training weights") - pass - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. 
- """ - mean = extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = extract_into_tensor( - self.log_one_minus_alphas_cumprod, t, x_start.shape - ) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - * noise - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start - + extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract_into_tensor( - self.posterior_log_variance_clipped, t, x_t.shape - ) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, clip_denoised: bool): - model_out = self.model(x, t) - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - if clip_denoised: - x_recon.clamp_(-1.0, 1.0) - - model_mean, posterior_variance, posterior_log_variance = self.q_posterior( - x_start=x_recon, x_t=x, t=t - ) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance( - x=x, t=t, clip_denoised=clip_denoised - ) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = ( - (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))).contiguous() - ) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_loop(self, shape, return_intermediates=False): - device = self.betas.device - b = shape[0] - img = torch.randn(shape, device=device) - intermediates = [img] - for i in tqdm( - reversed(range(0, self.num_timesteps)), - desc="Sampling t", - total=self.num_timesteps, - ): - img = self.p_sample( - img, - torch.full((b,), i, device=device, dtype=torch.long), - clip_denoised=self.clip_denoised, - ) - if i % self.log_every_t == 0 or i == self.num_timesteps - 1: - intermediates.append(img) - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, batch_size=16, return_intermediates=False): - shape = (batch_size, channels, self.latent_t_size, self.latent_f_size) - channels = self.channels - return self.p_sample_loop(shape, return_intermediates=return_intermediates) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - + extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) - * noise - ) - - def forward(self, x, *args, **kwargs): - t = torch.randint( - 0, self.num_timesteps, (x.shape[0],), device=self.device - ).long() - return self.p_losses(x, t, *args, **kwargs) - - def get_input(self, batch, k): - # fbank, log_magnitudes_stft, label_indices, fname, waveform, clip_label, text = batch - fbank, log_magnitudes_stft, label_indices, fname, waveform, text = batch - ret = {} - - ret["fbank"] = ( - 
fbank.unsqueeze(1).to(memory_format=torch.contiguous_format).float() - ) - ret["stft"] = log_magnitudes_stft.to( - memory_format=torch.contiguous_format - ).float() - # ret["clip_label"] = clip_label.to(memory_format=torch.contiguous_format).float() - ret["waveform"] = waveform.to(memory_format=torch.contiguous_format).float() - ret["text"] = list(text) - ret["fname"] = fname - - return ret[k] diff --git a/spaces/fightglory/YoloV4-Webcam/loss.py b/spaces/fightglory/YoloV4-Webcam/loss.py deleted file mode 100644 index 4675441242d67a211ae1048df865fb006d5ec235..0000000000000000000000000000000000000000 --- a/spaces/fightglory/YoloV4-Webcam/loss.py +++ /dev/null @@ -1,212 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -import numpy as np -import math -import tensorflow.keras.backend as K -import tensorflow as tf - - -def xywh_to_x1y1x2y2(boxes): - return tf.concat([boxes[..., :2] - boxes[..., 2:] * 0.5, boxes[..., :2] + boxes[..., 2:] * 0.5], axis=-1) - - -# x,y,w,h -def bbox_iou(boxes1, boxes2): - boxes1_area = boxes1[..., 2] * boxes1[..., 3] # w * h - boxes2_area = boxes2[..., 2] * boxes2[..., 3] - - # (x, y, w, h) -> (x0, y0, x1, y1) - boxes1 = xywh_to_x1y1x2y2(boxes1) - boxes2 = xywh_to_x1y1x2y2(boxes2) - - # coordinates of intersection - top_left = tf.maximum(boxes1[..., :2], boxes2[..., :2]) - bottom_right = tf.minimum(boxes1[..., 2:], boxes2[..., 2:]) - intersection_xy = tf.maximum(bottom_right - top_left, 0.0) - - intersection_area = intersection_xy[..., 0] * intersection_xy[..., 1] - union_area = boxes1_area + boxes2_area - intersection_area - - return 1.0 * intersection_area / (union_area + tf.keras.backend.epsilon()) - - -def bbox_giou(boxes1, boxes2): - boxes1_area = boxes1[..., 2] * boxes1[..., 3] # w*h - boxes2_area = boxes2[..., 2] * boxes2[..., 3] - - # (x, y, w, h) -> (x0, y0, x1, y1) - boxes1 = xywh_to_x1y1x2y2(boxes1) - boxes2 = xywh_to_x1y1x2y2(boxes2) - - top_left = tf.maximum(boxes1[..., :2], boxes2[..., :2]) - bottom_right = tf.minimum(boxes1[..., 2:], boxes2[..., 2:]) - - intersection_xy = tf.maximum(bottom_right - top_left, 0.0) - intersection_area = intersection_xy[..., 0] * intersection_xy[..., 1] - - union_area = boxes1_area + boxes2_area - intersection_area - - iou = 1.0 * intersection_area / (union_area + tf.keras.backend.epsilon()) - - enclose_top_left = tf.minimum(boxes1[..., :2], boxes2[..., :2]) - enclose_bottom_right = tf.maximum(boxes1[..., 2:], boxes2[..., 2:]) - - enclose_xy = enclose_bottom_right - enclose_top_left - enclose_area = enclose_xy[..., 0] * enclose_xy[..., 1] - - giou = iou - tf.math.divide_no_nan(enclose_area - union_area, enclose_area) - - return giou - - -def bbox_ciou(boxes1, boxes2): - ''' - ciou = iou - p2/c2 - av - :param boxes1: (8, 13, 13, 3, 4) pred_xywh - :param boxes2: (8, 13, 13, 3, 4) label_xywh - :return: - ''' - boxes1_x0y0x1y1 = tf.concat([boxes1[..., :2] - boxes1[..., 2:] * 0.5, - boxes1[..., :2] + boxes1[..., 2:] * 0.5], axis=-1) - boxes2_x0y0x1y1 = tf.concat([boxes2[..., :2] - boxes2[..., 2:] * 0.5, - boxes2[..., :2] + boxes2[..., 2:] * 0.5], axis=-1) - boxes1_x0y0x1y1 = tf.concat([tf.minimum(boxes1_x0y0x1y1[..., :2], boxes1_x0y0x1y1[..., 2:]), - tf.maximum(boxes1_x0y0x1y1[..., :2], boxes1_x0y0x1y1[..., 2:])], axis=-1) - boxes2_x0y0x1y1 = tf.concat([tf.minimum(boxes2_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., 2:]), - tf.maximum(boxes2_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., 2:])], axis=-1) - - # area - boxes1_area = (boxes1_x0y0x1y1[..., 2] - boxes1_x0y0x1y1[..., 0]) * ( - boxes1_x0y0x1y1[..., 3] - boxes1_x0y0x1y1[..., 1]) - 
boxes2_area = (boxes2_x0y0x1y1[..., 2] - boxes2_x0y0x1y1[..., 0]) * ( - boxes2_x0y0x1y1[..., 3] - boxes2_x0y0x1y1[..., 1]) - - # top-left and bottom-right coord, shape: (8, 13, 13, 3, 2) - left_up = tf.maximum(boxes1_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., :2]) - right_down = tf.minimum(boxes1_x0y0x1y1[..., 2:], boxes2_x0y0x1y1[..., 2:]) - - # intersection area and iou - inter_section = tf.maximum(right_down - left_up, 0.0) - inter_area = inter_section[..., 0] * inter_section[..., 1] - union_area = boxes1_area + boxes2_area - inter_area - iou = inter_area / (union_area + 1e-9) - - # top-left and bottom-right coord of the enclosing rectangle, shape: (8, 13, 13, 3, 2) - enclose_left_up = tf.minimum(boxes1_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., :2]) - enclose_right_down = tf.maximum(boxes1_x0y0x1y1[..., 2:], boxes2_x0y0x1y1[..., 2:]) - - # diagnal ** 2 - enclose_wh = enclose_right_down - enclose_left_up - enclose_c2 = K.pow(enclose_wh[..., 0], 2) + K.pow(enclose_wh[..., 1], 2) - - # center distances between two rectangles - p2 = K.pow(boxes1[..., 0] - boxes2[..., 0], 2) + K.pow(boxes1[..., 1] - boxes2[..., 1], 2) - - # add av - atan1 = tf.atan(boxes1[..., 2] / (boxes1[..., 3] + 1e-9)) - atan2 = tf.atan(boxes2[..., 2] / (boxes2[..., 3] + 1e-9)) - v = 4.0 * K.pow(atan1 - atan2, 2) / (math.pi ** 2) - a = v / (1 - iou + v) - - ciou = iou - 1.0 * p2 / enclose_c2 - 1.0 * a * v - return ciou - - -def yolo_loss(args, num_classes, iou_loss_thresh, anchors): - conv_lbbox = args[2] # (?, ?, ?, 3*(num_classes+5)) - conv_mbbox = args[1] # (?, ?, ?, 3*(num_classes+5)) - conv_sbbox = args[0] # (?, ?, ?, 3*(num_classes+5)) - label_sbbox = args[3] # (?, ?, ?, 3, num_classes+5) - label_mbbox = args[4] # (?, ?, ?, 3, num_classes+5) - label_lbbox = args[5] # (?, ?, ?, 3, num_classes+5) - true_bboxes = args[6] # (?, 50, 4) - pred_sbbox = decode(conv_sbbox, anchors[0], 8, num_classes) - pred_mbbox = decode(conv_mbbox, anchors[1], 16, num_classes) - pred_lbbox = decode(conv_lbbox, anchors[2], 32, num_classes) - sbbox_ciou_loss, sbbox_conf_loss, sbbox_prob_loss = loss_layer(conv_sbbox, pred_sbbox, label_sbbox, true_bboxes, 8, num_classes, iou_loss_thresh) - mbbox_ciou_loss, mbbox_conf_loss, mbbox_prob_loss = loss_layer(conv_mbbox, pred_mbbox, label_mbbox, true_bboxes, 16, num_classes, iou_loss_thresh) - lbbox_ciou_loss, lbbox_conf_loss, lbbox_prob_loss = loss_layer(conv_lbbox, pred_lbbox, label_lbbox, true_bboxes, 32, num_classes, iou_loss_thresh) - - ciou_loss = (lbbox_ciou_loss + sbbox_ciou_loss + mbbox_ciou_loss) * 3.54 - conf_loss = (lbbox_conf_loss + sbbox_conf_loss + mbbox_conf_loss) * 64.3 - prob_loss = (lbbox_prob_loss + sbbox_prob_loss + mbbox_prob_loss) * 1 - - return ciou_loss+conf_loss+prob_loss - - -def loss_layer(conv, pred, label, bboxes, stride, num_class, iou_loss_thresh): - conv_shape = tf.shape(conv) - batch_size = conv_shape[0] - output_size = conv_shape[1] - input_size = stride * output_size - conv = tf.reshape(conv, (batch_size, output_size, output_size, - 3, 5 + num_class)) - conv_raw_prob = conv[:, :, :, :, 5:] - conv_raw_conf = conv[:, :, :, :, 4:5] - - pred_xywh = pred[:, :, :, :, 0:4] - pred_conf = pred[:, :, :, :, 4:5] - - label_xywh = label[:, :, :, :, 0:4] - respond_bbox = label[:, :, :, :, 4:5] - label_prob = label[:, :, :, :, 5:] - - # Coordinate loss - ciou = tf.expand_dims(bbox_giou(pred_xywh, label_xywh), axis=-1) # (8, 13, 13, 3, 1) - # ciou = tf.expand_dims(bbox_ciou(pred_xywh, label_xywh), axis=-1) # (8, 13, 13, 3, 1) - input_size = tf.cast(input_size, tf.float32) - - # loss 
weight of the gt bbox: 2-(gt area/img area) - bbox_loss_scale = 2.0 - 1.0 * label_xywh[:, :, :, :, 2:3] * label_xywh[:, :, :, :, 3:4] / (input_size ** 2) - ciou_loss = respond_bbox * bbox_loss_scale * (1 - ciou) # iou loss for respond bbox - - # Classification loss for respond bbox - prob_loss = respond_bbox * tf.nn.sigmoid_cross_entropy_with_logits(labels=label_prob, logits=conv_raw_prob) - - expand_pred_xywh = pred_xywh[:, :, :, :, np.newaxis, :] # (?, grid_h, grid_w, 3, 1, 4) - expand_bboxes = bboxes[:, np.newaxis, np.newaxis, np.newaxis, :, :] # (?, 1, 1, 1, 70, 4) - iou = bbox_iou(expand_pred_xywh, expand_bboxes) # IoU between all pred bbox and all gt (?, grid_h, grid_w, 3, 70) - max_iou = tf.expand_dims(tf.reduce_max(iou, axis=-1), axis=-1) # max iou: (?, grid_h, grid_w, 3, 1) - - # ignore the bbox which is not respond bbox and max iou < threshold - respond_bgd = (1.0 - respond_bbox) * tf.cast(max_iou < iou_loss_thresh, tf.float32) - - # Confidence loss - conf_focal = tf.pow(respond_bbox - pred_conf, 2) - - conf_loss = conf_focal * ( - respond_bbox * tf.nn.sigmoid_cross_entropy_with_logits(labels=respond_bbox, logits=conv_raw_conf) - + - respond_bgd * tf.nn.sigmoid_cross_entropy_with_logits(labels=respond_bbox, logits=conv_raw_conf) - ) - - ciou_loss = tf.reduce_mean(tf.reduce_sum(ciou_loss, axis=[1, 2, 3, 4])) - conf_loss = tf.reduce_mean(tf.reduce_sum(conf_loss, axis=[1, 2, 3, 4])) - prob_loss = tf.reduce_mean(tf.reduce_sum(prob_loss, axis=[1, 2, 3, 4])) - - return ciou_loss, conf_loss, prob_loss - - -def decode(conv_output, anchors, stride, num_class): - conv_shape = tf.shape(conv_output) - batch_size = conv_shape[0] - output_size = conv_shape[1] - anchor_per_scale = len(anchors) - conv_output = tf.reshape(conv_output, (batch_size, output_size, output_size, anchor_per_scale, 5 + num_class)) - conv_raw_dxdy = conv_output[:, :, :, :, 0:2] - conv_raw_dwdh = conv_output[:, :, :, :, 2:4] - conv_raw_conf = conv_output[:, :, :, :, 4:5] - conv_raw_prob = conv_output[:, :, :, :, 5:] - y = tf.tile(tf.range(output_size, dtype=tf.int32)[:, tf.newaxis], [1, output_size]) - x = tf.tile(tf.range(output_size, dtype=tf.int32)[tf.newaxis, :], [output_size, 1]) - xy_grid = tf.concat([x[:, :, tf.newaxis], y[:, :, tf.newaxis]], axis=-1) - xy_grid = tf.tile(xy_grid[tf.newaxis, :, :, tf.newaxis, :], [batch_size, 1, 1, anchor_per_scale, 1]) - xy_grid = tf.cast(xy_grid, tf.float32) - pred_xy = (tf.sigmoid(conv_raw_dxdy) + xy_grid) * stride - pred_wh = (tf.exp(conv_raw_dwdh) * anchors) - pred_xywh = tf.concat([pred_xy, pred_wh], axis=-1) - pred_conf = tf.sigmoid(conv_raw_conf) - pred_prob = tf.sigmoid(conv_raw_prob) - return tf.concat([pred_xywh, pred_conf, pred_prob], axis=-1) - diff --git "a/spaces/frncscp/Patacotron/pages/Entorno de Ejecuci\303\263n.py" "b/spaces/frncscp/Patacotron/pages/Entorno de Ejecuci\303\263n.py" deleted file mode 100644 index 3a45c358b4d07ececf29b5b2f6707f18836b9305..0000000000000000000000000000000000000000 --- "a/spaces/frncscp/Patacotron/pages/Entorno de Ejecuci\303\263n.py" +++ /dev/null @@ -1,303 +0,0 @@ -import streamlit as st -import tensorflow as tf -from huggingface_hub import login -from tensorflow.keras.models import load_model -from transformers import AutoConfig, AutoModel, pipeline, Dinov2Model#, AutoProcessor, AutoModelForZeroShotImageClassification -from PIL import Image -import os -import torch -import cv2 -import numpy as np -import requests - -token = os.environ['token'] -#login(token = st.secrets["HF_TOKEN"], new_session = False) - -st.set_page_config( - 
page_title = 'Patacotrón', - layout = 'wide', - menu_items = { - "About" : 'Proyecto ideado para la investigación de "Clasificación de imágenes de una sola clase con algortimos de Inteligencia Artificial".', - "Report a Bug" : 'https://docs.google.com/forms/d/e/1FAIpQLScH0ZxAV8aSqs7TPYi86u0nkxvQG3iuHCStWNB-BoQnSW2V0g/viewform?usp=sf_link' - } -) - -st.sidebar.write("contact@patacotron.tech") -already_excuted = False -cnn, vit, zero_shot, classic_ml = st.tabs(["CNN", "ViT", "Zero-Shot", "Machine Learning Clásico"]) - -def predict(_model_list, _weights, _img): - y_gorrito = 0 - raw_img = cv2.cvtColor(_img, cv2.COLOR_BGR2RGB) - img = cv2.resize(_img, (IMAGE_WIDTH, IMAGE_HEIGHT)) - for model, weight in zip(_model_list, _weights): - y_gorrito += tf.cast(model(tf.expand_dims(img/255., 0)), dtype=tf.float32)*weight - return [y_gorrito / sum(_weights), raw_img] - -def preprocess(file_uploader, module = 'cv2'): #makes the uploaded image readable - img = np.frombuffer(uploaded_file.read(), np.uint8) - if module == 'cv2': - img = cv2.imdecode(img, cv2.IMREAD_COLOR) - elif module == 'pil': - img = Image.open(file_uploader) - return img - -def multiclass_prediction(classifier, important_class): #made for hf zero-shot pipeline results - score = (max([classifier[i]['score'] for i in range(len(classifier))])) - labels = [predict['label'] for predict in classifier if score == predict['score']] - for clase in classifier: - if clase['label'] == important_class: - class_score = clase['score'] - return (labels[0] if len(labels) == 1 else labels, score, class_score) - -API_URL = "https://api-inference.huggingface.co/models" -headers = {"Authorization": f"Bearer {st.secrets['token']}"} - -def query(data, models): #HF API - response = requests.post(API_URL + "/" + model_name, headers=headers, data=data) - if response.json()["error"] == "Internal Server Error": - return -1 - while "error" in response.json(): - response = requests.post(API_URL + "/" + model_name, headers=headers, data=data) - return response.json()[1]["score"] #.json - -@st.cache_resource -def load_clip(): - #processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14-336") - #model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-large-patch14-336") - classifier = pipeline("zero-shot-image-classification", model = 'openai/clip-vit-large-patch14-336') - return classifier - -with cnn: - - col_a, col_b, = st.columns(2) - ultra_flag = None - with col_a: - st.title("Redes neuronales convolucionales") - st.caption("Los modelos no están en orden de eficacia, sino en orden de creación.") - - current_dir = os.getcwd() - root_dir = os.path.dirname(current_dir) - - # Join the path to the models folder - DIR = os.path.join(current_dir, "models") - models = os.listdir(DIR) - common_root = r"/home/user/app/models/ptctrn_v" - common_end = ".h5" - - model_dict = dict() - for model in models: #preprocessing of strings so the name is readable in the multiselect bar - model_dir = os.path.join(DIR, model) - model_name = 'Patacotrón ' + model_dir.split(common_root)[-1].split(common_end)[0] - model_dict[model_name] = model_dir - #ultraversions = ['Patacotrón 1.5', 'Patacotrón 1.7', 'Patacotrón 1.8', 'Patacotrón 1.12', 'Patacotrón 1.12.2', 'Patacotrón 1.12.3'] - #ultraversions = ['Patacotrón 1.5', 'Patacotrón 1.6', 'Patacotrón 1.12.2', 'Patacotrón 1.8', 'Patacotrón 1.12']#, 'Patacotrón 1.13.20', 'Patacotrón 1.13.38'] - #['Patacotrón 1.5', 'Patacotrón 1.6', 'Patacotrón 1.7', 'Patacotrón 1.12'] # - #ultra_button = 
st.checkbox('Ultra-Patacotrón (en construcción, no es la mejor versión)') - #ultra_flag = False - weight_list = [] - - #if ultra_button: - # ultra_flag = True - #weight_list = [3, 1, 4.5, 1, .8, 1] [.5, 1.75, 4, .5, 2] - # weight_list = [2.5, 1.8, 1.5, 3.14, 2.2] #.2, 2] - #[1, 2, 3, 2.5] - # st.caption('Para Ultra-Patacotrón, este porcentaje no representa una a priori una probabilidad, sino la combinación ponderada de modelos con sesgos positivos y negativos, lo importante es que identifique correctamente el objeto.') - - # Create a dropdown menu to select the model - model_choice = st.multiselect("Seleccione uno o varios modelos de clasificación", model_dict.keys()) - - threshold = st.slider('¿Cuál va a ser el límite donde se considere patacón? (el valor recomendado es de 75%-80%)', 0, 100, 50, key = 'threshold_convnet') - - - - selected_models = [] - - # Set the image dimensions - IMAGE_WIDTH = IMAGE_HEIGHT = 224 - - executed = False - - with col_b: - - uploaded_file = st.file_uploader(key = 'conv_upload', label = 'Sube la imagen a clasificar',type= ['jpg','png', 'jpeg', 'jfif', 'webp', 'heic']) - - - if st.button(key = 'convnet_button', label ='¿Hay un patacón en la imagen?'): - if len(model_choice) < 1: - #if (len(model_choice) > 0 and ultra_flag) or (len(model_choice) == 0 and ultra_flag is None): - st.write('Debe elegir como mínimo un modelo.') - - elif uploaded_file is not None: - img = preprocess(uploaded_file) - #if ultra_flag: - # with st.spinner('Cargando ultra-predicción...'): - # if not executed: - # ultraptctrn = [load_model(model_dict[model]) for model in ultraversions] - # executed = True - # final_weights = weight_list if len(weight_list) >= 1 else [1 for i in range(len(ultraptctrn))] - # y_gorrito, raw_img = predict(ultraptctrn, final_weights, img) - - #else: - with st.spinner('Cargando predicción...'): - selected_models = [load_model(model_dict[model_name]) for model_name in model_choice if model_name not in selected_models] - final_weights = weight_list if len(weight_list) >= 1 else [1 for i in range(len(selected_models))] - y_gorrito, raw_img = predict(selected_models, final_weights, img) - - if round(float(y_gorrito*100)) >= threshold: - st.success("¡Patacón Detectado!") - - else: - st.error("No se considera que haya un patacón en la imagen") - - st.caption(f'La probabilidad de que la imagen tenga un patacón es del: {round(float(y_gorrito * 100), 2)}%') - st.caption('Si los resultados no fueron los esperados, por favor, [haz click aquí](https://docs.google.com/forms/d/e/1FAIpQLScH0ZxAV8aSqs7TPYi86u0nkxvQG3iuHCStWNB-BoQnSW2V0g/viewform?usp=sf_link)') - - st.image(raw_img) - - else: - st.write('Revisa haber seleccionado los modelos y la imagen correctamente.') - -with vit: - - col_a, col_b = st.columns(2) - - with col_a: - st.title('Visual Transformers') - st.caption('One class is all you need!') - - model_dict = { - 'google/vit-base-patch16-224-in21k' : 'frncscp/patacoptimus-prime', - 'facebook/dinov2-base' : 'frncscp/dinotron', - 'facebook/convnext-large-224' : 'frncscp/pataconxt', - 'microsoft/focalnet-small' : 'frncscp/focalnet-small-patacon', - 'microsoft/swin-tiny-patch4-window7-224' : 'frncscp/patacoswin' - } - - model_choice = st.multiselect("Seleccione un modelo de clasificación", model_dict.keys(), key = 'ViT_multiselect') - - uploaded_file = st.file_uploader(key = 'ViT_upload', label = 'Sube la imagen a clasificar',type= ['jpg','png', 'jpeg', 'jfif', 'webp', 'heic']) - flag = False - threshold = st.slider('¿Cuál va a ser el límite desde donde se considere 
patacón? (se recomienda por encima del 80%)', 0, 100, 80, key = 'threshold_vit') - - with col_b: - - if st.button(key = 'ViT_button', label ='¿Hay un patacón en la imagen?'): - if len(model_choice) < 1: - print('Recuerda seleccionar al menos un modelo de clasificación') - elif uploaded_file is not None: - with st.spinner('Cargando predicción...'): - - #y_gorritoo = query(uploaded_file.read(), model_dict[model_choice[0]]) - #st.write(y_gorritoo) - #if "facebook/dinov2-base" in model_choice: - # #classifiers = [Dinov2Model.from_pretrained("frncscp/dinotron")] - # classifiers = [pipeline("image-classification", model="frncscp/dinotron")] - #else: - classifiers = [pipeline("image-classification", model= model_dict[model_choice[i]], token = token) for i in range(len(model_choice))] - - #classifier = pipeline("image-classification", model= model_dict[model_choice[0]]) - img = preprocess(uploaded_file, module = 'pil') - - models = [model_dict[model] for model in model_choice] - #st.write(models) - def vit_ensemble(classifier_list, img): - y_gorrito = 0 - for classifier in classifier_list: - classifier = classifier(img) - for clase in classifier: - if clase['label'] == 'Patacon-True': - y_gorrito += clase["score"] - return y_gorrito / len(classifier_list) - - #models = [model_dict[i] for i in range(len(model_choice))] - #st.write(type(models), models) - - #st.write(model_choice) - - #y_gorrito = 0 - #y_gorritoo = query(uploaded_file.read(), model_choice[0])#[1]["score"] - #i = -1 - #st.write("loop iniciado") - #for model in models: - # i+=1 - # st.write("y gorrito a cargar") - # a = query(uploaded_file.read(), model) - # if a == -1: - # st.write("Los servidores se encuentrar caídos, intente más tarde") - # st.write("query terminado") - # y_gorritoo += a - # st.write("y gorrito cargado") - #y_gorritoo /= i - #st.write(y_gorritoo) - #st.write("loop terminado") - - #st.write("y gorrito calculado", len(model_choice)) - #classifier = classifier(img) - - #for clase in classifier: - # if clase['label'] == 'Patacon-True': - # y_gorrito = clase["score"] - - #y_gorrito = classifier[0]["score"] - - y_gorrito = vit_ensemble(classifiers, img) - # - if round(float(y_gorrito * 100)) >= threshold: - st.success("¡Patacón Detectado!") - else: - st.error("No se considera que haya un patacón en la imagen") - st.caption(f'La probabilidad de que la imagen tenga un patacón es del: {round(float(y_gorrito * 100), 2)}%') - st.image(img) - else: - st.write("Asegúrate de haber subido correctamente la imagen.") - - - - -with zero_shot: - - col_a, col_b = st.columns(2) - zsloaded = [] - - with col_a: - - st.title("Clasificación Zero-Shot") - st.caption("Usando Clip de OpenAI") - - labels_for_classification = ["A yellow deep fried smashed plantain", - "A yellow corn dough", - "A stuffed fried dough", - "Fried food", - "Fruit", - "Anything"] - - uploaded_file = st.file_uploader(key = 'ZS_upload', label = 'Sube la imagen a clasificar',type= ['jpg','png', 'jpeg', 'jfif', 'webp', 'heic']) - - with col_b: - - if st.button(key = 'ZS_button', label ='¿Hay un patacón en la imagen?'): - if uploaded_file is not None: - - with st.spinner('Cargando el modelo (puede demorar hasta un minuto, pero después predice rápido)'): - classifier = load_clip() - - with st.spinner('Cargando predicción...'): - img = preprocess(uploaded_file, module = 'pil') - zs_classifier = classifier(img, - candidate_labels = labels_for_classification) - - label, _, y_gorrito = multiclass_prediction(zs_classifier, labels_for_classification[0]) - - if label == "A yellow 
deep fried smashed plantain": - st.success("¡Patacón Detectado!") - else: - st.error("No se considera que haya un patacón en la imagen") - - st.caption(f'La probabilidad de que la imagen tenga un patacón es del: {round(float(y_gorrito * 100), 2)}%') - st.image(img) - else: - st.write("Asegúrate de haber subido correctamente la imagen.") - -with classic_ml: - st.write('Próximamente') diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/utils/sync_bn.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/utils/sync_bn.py deleted file mode 100644 index f78f39181d75bb85c53e8c7c8eaf45690e9f0bee..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/cnn/utils/sync_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch - -import annotator.uniformer.mmcv as mmcv - - -class _BatchNormXd(torch.nn.modules.batchnorm._BatchNorm): - """A general BatchNorm layer without input dimension check. - - Reproduced from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc - is `_check_input_dim` that is designed for tensor sanity checks. - The check has been bypassed in this class for the convenience of converting - SyncBatchNorm. - """ - - def _check_input_dim(self, input): - return - - -def revert_sync_batchnorm(module): - """Helper function to convert all `SyncBatchNorm` (SyncBN) and - `mmcv.ops.sync_bn.SyncBatchNorm`(MMSyncBN) layers in the model to - `BatchNormXd` layers. - - Adapted from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - - Args: - module (nn.Module): The module containing `SyncBatchNorm` layers. - - Returns: - module_output: The converted module with `BatchNormXd` layers. 
- """ - module_output = module - module_checklist = [torch.nn.modules.batchnorm.SyncBatchNorm] - if hasattr(mmcv, 'ops'): - module_checklist.append(mmcv.ops.SyncBatchNorm) - if isinstance(module, tuple(module_checklist)): - module_output = _BatchNormXd(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - # no_grad() may not be needed here but - # just to be consistent with `convert_sync_batchnorm()` - with torch.no_grad(): - module_output.weight = module.weight - module_output.bias = module.bias - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - module_output.training = module.training - # qconfig exists in quantized models - if hasattr(module, 'qconfig'): - module_output.qconfig = module.qconfig - for name, child in module.named_children(): - module_output.add_module(name, revert_sync_batchnorm(child)) - del module - return module_output diff --git a/spaces/gligen/demo/app.py b/spaces/gligen/demo/app.py deleted file mode 100644 index 8e1926a0b5001c8cd6cbce7b79fefca88088f8f0..0000000000000000000000000000000000000000 --- a/spaces/gligen/demo/app.py +++ /dev/null @@ -1,774 +0,0 @@ -import gradio as gr -import torch -from omegaconf import OmegaConf -from gligen.task_grounded_generation import grounded_generation_box, load_ckpt, load_common_ckpt - -import json -import numpy as np -from PIL import Image, ImageDraw, ImageFont -from functools import partial -from collections import Counter -import math -import gc - -from gradio import processing_utils -from typing import Optional - -import warnings - -from datetime import datetime - -from huggingface_hub import hf_hub_download -hf_hub_download = partial(hf_hub_download, library_name="gligen_demo") - -import sys -sys.tracebacklimit = 0 - - -def load_from_hf(repo_id, filename='diffusion_pytorch_model.bin', subfolder=None): - cache_file = hf_hub_download(repo_id=repo_id, filename=filename, subfolder=subfolder) - return torch.load(cache_file, map_location='cpu') - -def load_ckpt_config_from_hf(modality): - ckpt = load_from_hf('gligen/demo_ckpts_legacy', filename=f'{modality}.pth', subfolder='model') - config = load_from_hf('gligen/demo_ckpts_legacy', filename=f'{modality}.pth', subfolder='config') - return ckpt, config - - -def ckpt_load_helper(modality, is_inpaint, is_style, common_instances=None): - pretrained_ckpt_gligen, config = load_ckpt_config_from_hf(modality) - config = OmegaConf.create( config["_content"] ) # config used in training - config.alpha_scale = 1.0 - config.model['params']['is_inpaint'] = is_inpaint - config.model['params']['is_style'] = is_style - - if common_instances is None: - common_ckpt = load_from_hf('gligen/demo_ckpts_legacy', filename=f'common.pth', subfolder='model') - common_instances = load_common_ckpt(config, common_ckpt) - - loaded_model_list = load_ckpt(config, pretrained_ckpt_gligen, common_instances) - - return loaded_model_list, common_instances - - -class Instance: - def __init__(self, capacity = 2): - self.model_type = 'base' - self.loaded_model_list = {} - self.counter = Counter() - self.global_counter = Counter() - self.loaded_model_list['base'], self.common_instances = ckpt_load_helper( - 'gligen-generation-text-box', - is_inpaint=False, is_style=False, common_instances=None - ) - self.capacity = capacity - - def _log(self, model_type, batch_size, instruction, phrase_list): - self.counter[model_type] += 1 - 
self.global_counter[model_type] += 1 - current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S") - print('[{}] Current: {}, All: {}. Samples: {}, prompt: {}, phrases: {}'.format( - current_time, dict(self.counter), dict(self.global_counter), batch_size, instruction, phrase_list - )) - - def get_model(self, model_type, batch_size, instruction, phrase_list): - if model_type in self.loaded_model_list: - self._log(model_type, batch_size, instruction, phrase_list) - return self.loaded_model_list[model_type] - - if self.capacity == len(self.loaded_model_list): - least_used_type = self.counter.most_common()[-1][0] - del self.loaded_model_list[least_used_type] - del self.counter[least_used_type] - gc.collect() - torch.cuda.empty_cache() - - self.loaded_model_list[model_type] = self._get_model(model_type) - self._log(model_type, batch_size, instruction, phrase_list) - return self.loaded_model_list[model_type] - - def _get_model(self, model_type): - if model_type == 'base': - return ckpt_load_helper( - 'gligen-generation-text-box', - is_inpaint=False, is_style=False, common_instances=self.common_instances - )[0] - elif model_type == 'inpaint': - return ckpt_load_helper( - 'gligen-inpainting-text-box', - is_inpaint=True, is_style=False, common_instances=self.common_instances - )[0] - elif model_type == 'style': - return ckpt_load_helper( - 'gligen-generation-text-image-box', - is_inpaint=False, is_style=True, common_instances=self.common_instances - )[0] - - assert False - -instance = Instance() - - -def load_clip_model(): - from transformers import CLIPProcessor, CLIPModel - version = "openai/clip-vit-large-patch14" - model = CLIPModel.from_pretrained(version).cuda() - processor = CLIPProcessor.from_pretrained(version) - - return { - 'version': version, - 'model': model, - 'processor': processor, - } - -clip_model = load_clip_model() - - -class ImageMask(gr.components.Image): - """ - Sets: source="canvas", tool="sketch" - """ - - is_template = True - - def __init__(self, **kwargs): - super().__init__(source="upload", tool="sketch", interactive=True, **kwargs) - - def preprocess(self, x): - if x is None: - return x - if self.tool == "sketch" and self.source in ["upload", "webcam"] and type(x) != dict: - decode_image = processing_utils.decode_base64_to_image(x) - width, height = decode_image.size - mask = np.zeros((height, width, 4), dtype=np.uint8) - mask[..., -1] = 255 - mask = self.postprocess(mask) - x = {'image': x, 'mask': mask} - return super().preprocess(x) - - -class Blocks(gr.Blocks): - - def __init__( - self, - theme: str = "default", - analytics_enabled: Optional[bool] = None, - mode: str = "blocks", - title: str = "Gradio", - css: Optional[str] = None, - **kwargs, - ): - - self.extra_configs = { - 'thumbnail': kwargs.pop('thumbnail', ''), - 'url': kwargs.pop('url', 'https://gradio.app/'), - 'creator': kwargs.pop('creator', '@teamGradio'), - } - - super(Blocks, self).__init__(theme, analytics_enabled, mode, title, css, **kwargs) - warnings.filterwarnings("ignore") - - def get_config_file(self): - config = super(Blocks, self).get_config_file() - - for k, v in self.extra_configs.items(): - config[k] = v - - return config - -''' -inference model -''' - -@torch.no_grad() -def inference(task, language_instruction, grounding_instruction, inpainting_boxes_nodrop, image, - alpha_sample, guidance_scale, batch_size, - fix_seed, rand_seed, actual_mask, style_image, - *args, **kwargs): - grounding_instruction = json.loads(grounding_instruction) - phrase_list, location_list = [], [] - for k, v in 
grounding_instruction.items(): - phrase_list.append(k) - location_list.append(v) - - placeholder_image = Image.open('images/teddy.jpg').convert("RGB") - image_list = [placeholder_image] * len(phrase_list) # placeholder input for visual prompt, which is disabled - - batch_size = int(batch_size) - if not 1 <= batch_size <= 4: - batch_size = 2 - - if style_image == None: - has_text_mask = 1 - has_image_mask = 0 # then we hack above 'image_list' - else: - valid_phrase_len = len(phrase_list) - - phrase_list += ['placeholder'] - has_text_mask = [1]*valid_phrase_len + [0] - - image_list = [placeholder_image]*valid_phrase_len + [style_image] - has_image_mask = [0]*valid_phrase_len + [1] - - location_list += [ [0.0, 0.0, 1, 0.01] ] # style image grounding location - - if task == 'Grounded Inpainting': - alpha_sample = 1.0 - - instruction = dict( - prompt = language_instruction, - phrases = phrase_list, - images = image_list, - locations = location_list, - alpha_type = [alpha_sample, 0, 1.0 - alpha_sample], - has_text_mask = has_text_mask, - has_image_mask = has_image_mask, - save_folder_name = language_instruction, - guidance_scale = guidance_scale, - batch_size = batch_size, - fix_seed = bool(fix_seed), - rand_seed = int(rand_seed), - actual_mask = actual_mask, - inpainting_boxes_nodrop = inpainting_boxes_nodrop, - ) - - get_model = partial(instance.get_model, - batch_size=batch_size, - instruction=language_instruction, - phrase_list=phrase_list) - - with torch.autocast(device_type='cuda', dtype=torch.float16): - if task == 'Grounded Generation': - if style_image == None: - return grounded_generation_box(get_model('base'), instruction, *args, **kwargs) - else: - return grounded_generation_box(get_model('style'), instruction, *args, **kwargs) - elif task == 'Grounded Inpainting': - assert image is not None - instruction['input_image'] = image.convert("RGB") - return grounded_generation_box(get_model('inpaint'), instruction, *args, **kwargs) - - -def draw_box(boxes=[], texts=[], img=None): - if len(boxes) == 0 and img is None: - return None - - if img is None: - img = Image.new('RGB', (512, 512), (255, 255, 255)) - colors = ["red", "olive", "blue", "green", "orange", "brown", "cyan", "purple"] - draw = ImageDraw.Draw(img) - font = ImageFont.truetype("DejaVuSansMono.ttf", size=18) - for bid, box in enumerate(boxes): - draw.rectangle([box[0], box[1], box[2], box[3]], outline=colors[bid % len(colors)], width=4) - anno_text = texts[bid] - draw.rectangle([box[0], box[3] - int(font.size * 1.2), box[0] + int((len(anno_text) + 0.8) * font.size * 0.6), box[3]], outline=colors[bid % len(colors)], fill=colors[bid % len(colors)], width=4) - draw.text([box[0] + int(font.size * 0.2), box[3] - int(font.size*1.2)], anno_text, font=font, fill=(255,255,255)) - return img - -def get_concat(ims): - if len(ims) == 1: - n_col = 1 - else: - n_col = 2 - n_row = math.ceil(len(ims) / 2) - dst = Image.new('RGB', (ims[0].width * n_col, ims[0].height * n_row), color="white") - for i, im in enumerate(ims): - row_id = i // n_col - col_id = i % n_col - dst.paste(im, (im.width * col_id, im.height * row_id)) - return dst - - -def auto_append_grounding(language_instruction, grounding_texts): - for grounding_text in grounding_texts: - if grounding_text not in language_instruction and grounding_text != 'auto': - language_instruction += "; " + grounding_text - return language_instruction - - - - -def generate(task, language_instruction, grounding_texts, sketch_pad, - alpha_sample, guidance_scale, batch_size, - fix_seed, rand_seed, 
use_actual_mask, append_grounding, style_cond_image, - state): - if 'boxes' not in state: - state['boxes'] = [] - - boxes = state['boxes'] - grounding_texts = [x.strip() for x in grounding_texts.split(';')] - # assert len(boxes) == len(grounding_texts) - if len(boxes) != len(grounding_texts): - if len(boxes) < len(grounding_texts): - raise ValueError("""The number of boxes should be equal to the number of grounding objects. -Number of boxes drawn: {}, number of grounding tokens: {}. -Please draw boxes accordingly on the sketch pad.""".format(len(boxes), len(grounding_texts))) - grounding_texts = grounding_texts + [""] * (len(boxes) - len(grounding_texts)) - - boxes = (np.asarray(boxes) / 512).tolist() - grounding_instruction = json.dumps({obj: box for obj,box in zip(grounding_texts, boxes)}) - - image = None - actual_mask = None - if task == 'Grounded Inpainting': - image = state.get('original_image', sketch_pad['image']).copy() - image = center_crop(image) - image = Image.fromarray(image) - - if use_actual_mask: - actual_mask = sketch_pad['mask'].copy() - if actual_mask.ndim == 3: - actual_mask = actual_mask[..., 0] - actual_mask = center_crop(actual_mask, tgt_size=(64, 64)) - actual_mask = torch.from_numpy(actual_mask == 0).float() - - if state.get('inpaint_hw', None): - boxes = np.asarray(boxes) * 0.9 + 0.05 - boxes = boxes.tolist() - grounding_instruction = json.dumps({obj: box for obj,box in zip(grounding_texts, boxes) if obj != 'auto'}) - - if append_grounding: - language_instruction = auto_append_grounding(language_instruction, grounding_texts) - - gen_images, gen_overlays = inference( - task, language_instruction, grounding_instruction, boxes, image, - alpha_sample, guidance_scale, batch_size, - fix_seed, rand_seed, actual_mask, style_cond_image, clip_model=clip_model, - ) - - for idx, gen_image in enumerate(gen_images): - - if task == 'Grounded Inpainting' and state.get('inpaint_hw', None): - hw = min(*state['original_image'].shape[:2]) - gen_image = sized_center_fill(state['original_image'].copy(), np.array(gen_image.resize((hw, hw))), hw, hw) - gen_image = Image.fromarray(gen_image) - - gen_images[idx] = gen_image - - blank_samples = batch_size % 2 if batch_size > 1 else 0 - gen_images = [gr.Image.update(value=x, visible=True) for i,x in enumerate(gen_images)] \ - + [gr.Image.update(value=None, visible=True) for _ in range(blank_samples)] \ - + [gr.Image.update(value=None, visible=False) for _ in range(4 - batch_size - blank_samples)] - - return gen_images + [state] - - -def binarize(x): - return (x != 0).astype('uint8') * 255 - -def sized_center_crop(img, cropx, cropy): - y, x = img.shape[:2] - startx = x // 2 - (cropx // 2) - starty = y // 2 - (cropy // 2) - return img[starty:starty+cropy, startx:startx+cropx] - -def sized_center_fill(img, fill, cropx, cropy): - y, x = img.shape[:2] - startx = x // 2 - (cropx // 2) - starty = y // 2 - (cropy // 2) - img[starty:starty+cropy, startx:startx+cropx] = fill - return img - -def sized_center_mask(img, cropx, cropy): - y, x = img.shape[:2] - startx = x // 2 - (cropx // 2) - starty = y // 2 - (cropy // 2) - center_region = img[starty:starty+cropy, startx:startx+cropx].copy() - img = (img * 0.2).astype('uint8') - img[starty:starty+cropy, startx:startx+cropx] = center_region - return img - -def center_crop(img, HW=None, tgt_size=(512, 512)): - if HW is None: - H, W = img.shape[:2] - HW = min(H, W) - img = sized_center_crop(img, HW, HW) - img = Image.fromarray(img) - img = img.resize(tgt_size) - return np.array(img) - -def draw(task, 
input, grounding_texts, new_image_trigger, state): - if type(input) == dict: - image = input['image'] - mask = input['mask'] - else: - mask = input - - if mask.ndim == 3: - mask = mask[..., 0] - - image_scale = 1.0 - - # resize trigger - if task == "Grounded Inpainting": - mask_cond = mask.sum() == 0 - # size_cond = mask.shape != (512, 512) - if mask_cond and 'original_image' not in state: - image = Image.fromarray(image) - width, height = image.size - scale = 600 / min(width, height) - image = image.resize((int(width * scale), int(height * scale))) - state['original_image'] = np.array(image).copy() - image_scale = float(height / width) - return [None, new_image_trigger + 1, image_scale, state] - else: - original_image = state['original_image'] - H, W = original_image.shape[:2] - image_scale = float(H / W) - - mask = binarize(mask) - if mask.shape != (512, 512): - # assert False, "should not receive any non- 512x512 masks." - if 'original_image' in state and state['original_image'].shape[:2] == mask.shape: - mask = center_crop(mask, state['inpaint_hw']) - image = center_crop(state['original_image'], state['inpaint_hw']) - else: - mask = np.zeros((512, 512), dtype=np.uint8) - # mask = center_crop(mask) - mask = binarize(mask) - - if type(mask) != np.ndarray: - mask = np.array(mask) - - if mask.sum() == 0 and task != "Grounded Inpainting": - state = {} - - if task != 'Grounded Inpainting': - image = None - else: - image = Image.fromarray(image) - - if 'boxes' not in state: - state['boxes'] = [] - - if 'masks' not in state or len(state['masks']) == 0: - state['masks'] = [] - last_mask = np.zeros_like(mask) - else: - last_mask = state['masks'][-1] - - if type(mask) == np.ndarray and mask.size > 1: - diff_mask = mask - last_mask - else: - diff_mask = np.zeros([]) - - if diff_mask.sum() > 0: - x1x2 = np.where(diff_mask.max(0) != 0)[0] - y1y2 = np.where(diff_mask.max(1) != 0)[0] - y1, y2 = y1y2.min(), y1y2.max() - x1, x2 = x1x2.min(), x1x2.max() - - if (x2 - x1 > 5) and (y2 - y1 > 5): - state['masks'].append(mask.copy()) - state['boxes'].append((x1, y1, x2, y2)) - - grounding_texts = [x.strip() for x in grounding_texts.split(';')] - grounding_texts = [x for x in grounding_texts if len(x) > 0] - if len(grounding_texts) < len(state['boxes']): - grounding_texts += [f'Obj. 
{bid+1}' for bid in range(len(grounding_texts), len(state['boxes']))] - - box_image = draw_box(state['boxes'], grounding_texts, image) - - if box_image is not None and state.get('inpaint_hw', None): - inpaint_hw = state['inpaint_hw'] - box_image_resize = np.array(box_image.resize((inpaint_hw, inpaint_hw))) - original_image = state['original_image'].copy() - box_image = sized_center_fill(original_image, box_image_resize, inpaint_hw, inpaint_hw) - - return [box_image, new_image_trigger, image_scale, state] - -def clear(task, sketch_pad_trigger, batch_size, state, switch_task=False): - if task != 'Grounded Inpainting': - sketch_pad_trigger = sketch_pad_trigger + 1 - blank_samples = batch_size % 2 if batch_size > 1 else 0 - out_images = [gr.Image.update(value=None, visible=True) for i in range(batch_size)] \ - + [gr.Image.update(value=None, visible=True) for _ in range(blank_samples)] \ - + [gr.Image.update(value=None, visible=False) for _ in range(4 - batch_size - blank_samples)] - state = {} - return [None, sketch_pad_trigger, None, 1.0] + out_images + [state] - -css = """ -#img2img_image, #img2img_image > .fixed-height, #img2img_image > .fixed-height > div, #img2img_image > .fixed-height > div > img -{ - height: var(--height) !important; - max-height: var(--height) !important; - min-height: var(--height) !important; -} -#paper-info a { - color:#008AD7; - text-decoration: none; -} -#paper-info a:hover { - cursor: pointer; - text-decoration: none; -} -""" - -rescale_js = """ -function(x) { - const root = document.querySelector('gradio-app').shadowRoot || document.querySelector('gradio-app'); - let image_scale = parseFloat(root.querySelector('#image_scale input').value) || 1.0; - const image_width = root.querySelector('#img2img_image').clientWidth; - const target_height = parseInt(image_width * image_scale); - document.body.style.setProperty('--height', `${target_height}px`); - root.querySelectorAll('button.justify-center.rounded')[0].style.display='none'; - root.querySelectorAll('button.justify-center.rounded')[1].style.display='none'; - return x; -} -""" - -with Blocks( - css=css, - analytics_enabled=False, - title="GLIGen demo", -) as main: - description = """

- GLIGen: Open-Set Grounded Text-to-Image Generation -
- - [Project Page] - [Paper] - [GitHub] - -

-

- To ground concepts of interest with the desired spatial layout, please (1) ⌨️ enter the concept names in Grounding Instruction, and (2) 🖱️ draw their corresponding bounding boxes one by one on the Sketch Pad -- the parsed boxes will be displayed automatically. -
- For faster inference without waiting in the queue, you may duplicate this Space and upgrade to a GPU in Settings. Duplicate Space -

- """ - gr.HTML(description) - - with gr.Row(): - with gr.Column(scale=4): - sketch_pad_trigger = gr.Number(value=0, visible=False) - sketch_pad_resize_trigger = gr.Number(value=0, visible=False) - init_white_trigger = gr.Number(value=0, visible=False) - image_scale = gr.Number(value=0, elem_id="image_scale", visible=False) - new_image_trigger = gr.Number(value=0, visible=False) - - task = gr.Radio( - choices=["Grounded Generation", 'Grounded Inpainting'], - type="value", - value="Grounded Generation", - label="Task", - ) - language_instruction = gr.Textbox( - label="Language instruction", - ) - grounding_instruction = gr.Textbox( - label="Grounding instruction (Separated by semicolon)", - ) - with gr.Row(): - sketch_pad = ImageMask(label="Sketch Pad", elem_id="img2img_image") - out_imagebox = gr.Image(type="pil", label="Parsed Sketch Pad") - with gr.Row(): - clear_btn = gr.Button(value='Clear') - gen_btn = gr.Button(value='Generate') - with gr.Accordion("Advanced Options", open=False): - with gr.Column(): - alpha_sample = gr.Slider(minimum=0, maximum=1.0, step=0.1, value=0.3, label="Scheduled Sampling (τ)") - guidance_scale = gr.Slider(minimum=0, maximum=50, step=0.5, value=7.5, label="Guidance Scale") - batch_size = gr.Slider(minimum=1, maximum=4, step=1, value=2, label="Number of Samples") - append_grounding = gr.Checkbox(value=True, label="Append grounding instructions to the caption") - use_actual_mask = gr.Checkbox(value=False, label="Use actual mask for inpainting", visible=False) - with gr.Row(): - fix_seed = gr.Checkbox(value=True, label="Fixed seed") - rand_seed = gr.Slider(minimum=0, maximum=1000, step=1, value=0, label="Seed") - with gr.Row(): - use_style_cond = gr.Checkbox(value=False, label="Enable Style Condition") - style_cond_image = gr.Image(type="pil", label="Style Condition", visible=False, interactive=True) - with gr.Column(scale=4): - gr.HTML('Generated Images') - with gr.Row(): - out_gen_1 = gr.Image(type="pil", visible=True, show_label=False) - out_gen_2 = gr.Image(type="pil", visible=True, show_label=False) - with gr.Row(): - out_gen_3 = gr.Image(type="pil", visible=False, show_label=False) - out_gen_4 = gr.Image(type="pil", visible=False, show_label=False) - - state = gr.State({}) - - class Controller: - def __init__(self): - self.calls = 0 - self.tracks = 0 - self.resizes = 0 - self.scales = 0 - - def init_white(self, init_white_trigger): - self.calls += 1 - return np.ones((512, 512), dtype='uint8') * 255, 1.0, init_white_trigger+1 - - def change_n_samples(self, n_samples): - blank_samples = n_samples % 2 if n_samples > 1 else 0 - return [gr.Image.update(visible=True) for _ in range(n_samples + blank_samples)] \ - + [gr.Image.update(visible=False) for _ in range(4 - n_samples - blank_samples)] - - def resize_centercrop(self, state): - self.resizes += 1 - image = state['original_image'].copy() - inpaint_hw = int(0.9 * min(*image.shape[:2])) - state['inpaint_hw'] = inpaint_hw - image_cc = center_crop(image, inpaint_hw) - # print(f'resize triggered {self.resizes}', image.shape, '->', image_cc.shape) - return image_cc, state - - def resize_masked(self, state): - self.resizes += 1 - image = state['original_image'].copy() - inpaint_hw = int(0.9 * min(*image.shape[:2])) - state['inpaint_hw'] = inpaint_hw - image_mask = sized_center_mask(image, inpaint_hw, inpaint_hw) - state['masked_image'] = image_mask.copy() - # print(f'mask triggered {self.resizes}') - return image_mask, state - - def switch_task_hide_cond(self, task): - cond = False - if task == "Grounded 
Generation": - cond = True - - return gr.Checkbox.update(visible=cond, value=False), gr.Image.update(value=None, visible=False), gr.Slider.update(visible=cond), gr.Checkbox.update(visible=(not cond), value=False) - - controller = Controller() - main.load( - lambda x:x+1, - inputs=sketch_pad_trigger, - outputs=sketch_pad_trigger, - queue=False) - sketch_pad.edit( - draw, - inputs=[task, sketch_pad, grounding_instruction, sketch_pad_resize_trigger, state], - outputs=[out_imagebox, sketch_pad_resize_trigger, image_scale, state], - queue=False, - ) - grounding_instruction.change( - draw, - inputs=[task, sketch_pad, grounding_instruction, sketch_pad_resize_trigger, state], - outputs=[out_imagebox, sketch_pad_resize_trigger, image_scale, state], - queue=False, - ) - clear_btn.click( - clear, - inputs=[task, sketch_pad_trigger, batch_size, state], - outputs=[sketch_pad, sketch_pad_trigger, out_imagebox, image_scale, out_gen_1, out_gen_2, out_gen_3, out_gen_4, state], - queue=False) - task.change( - partial(clear, switch_task=True), - inputs=[task, sketch_pad_trigger, batch_size, state], - outputs=[sketch_pad, sketch_pad_trigger, out_imagebox, image_scale, out_gen_1, out_gen_2, out_gen_3, out_gen_4, state], - queue=False) - sketch_pad_trigger.change( - controller.init_white, - inputs=[init_white_trigger], - outputs=[sketch_pad, image_scale, init_white_trigger], - queue=False) - sketch_pad_resize_trigger.change( - controller.resize_masked, - inputs=[state], - outputs=[sketch_pad, state], - queue=False) - batch_size.change( - controller.change_n_samples, - inputs=[batch_size], - outputs=[out_gen_1, out_gen_2, out_gen_3, out_gen_4], - queue=False) - gen_btn.click( - generate, - inputs=[ - task, language_instruction, grounding_instruction, sketch_pad, - alpha_sample, guidance_scale, batch_size, - fix_seed, rand_seed, - use_actual_mask, - append_grounding, style_cond_image, - state, - ], - outputs=[out_gen_1, out_gen_2, out_gen_3, out_gen_4, state], - queue=True - ) - sketch_pad_resize_trigger.change( - None, - None, - sketch_pad_resize_trigger, - _js=rescale_js, - queue=False) - init_white_trigger.change( - None, - None, - init_white_trigger, - _js=rescale_js, - queue=False) - use_style_cond.change( - lambda cond: gr.Image.update(visible=cond), - use_style_cond, - style_cond_image, - queue=False) - task.change( - controller.switch_task_hide_cond, - inputs=task, - outputs=[use_style_cond, style_cond_image, alpha_sample, use_actual_mask], - queue=False) - - with gr.Column(): - gr.Examples( - examples=[ - [ - "images/blank.png", - "Grounded Generation", - "a dog and an apple", - "a dog;an apple", - ], - [ - "images/blank.png", - "Grounded Generation", - "John Lennon is using a pc", - "John Lennon;a pc", - [ - "images/blank.png", - "Grounded Generation", - "a painting of a fox sitting in a field at sunrise in the style of Claude Mone", - "fox;sunrise", - ], - ], - [ - "images/blank.png", - "Grounded Generation", - "a beautiful painting of hot dog by studio ghibli, octane render, brilliantly coloured", - "hot dog", - ], - [ - "images/blank.png", - "Grounded Generation", - "a sport car, unreal engine, global illumination, ray tracing", - "a sport car", - ], - [ - "images/flower_beach.jpg", - "Grounded Inpainting", - "a squirrel and the space needle", - "a squirrel;the space needle", - ], - [ - "images/arg_corgis.jpeg", - "Grounded Inpainting", - "a dog and a birthday cake", - "a dog; a birthday cake", - ], - [ - "images/teddy.jpg", - "Grounded Inpainting", - "a teddy bear wearing a santa claus red shirt; 
holding a Christmas gift box on hand", - "a santa claus shirt; a Christmas gift box", - ], - ], - inputs=[sketch_pad, task, language_instruction, grounding_instruction], - outputs=None, - fn=None, - cache_examples=False, - ) - -main.queue(concurrency_count=1, api_open=False) -main.launch(share=False, show_api=False, show_error=True) - - diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Kapita Selekta Kedokteran Ebook 20 UPDATED.md b/spaces/gotiQspiryo/whisper-ui/examples/Kapita Selekta Kedokteran Ebook 20 UPDATED.md deleted file mode 100644 index e941d0e42a4c445c8ea383cf4e1390f2119fc63c..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Kapita Selekta Kedokteran Ebook 20 UPDATED.md +++ /dev/null @@ -1,6 +0,0 @@ -

kapita selekta kedokteran ebook 20


Download ••• https://urlgoal.com/2uyMgj



- -MURDER IS THE MEAN OF JOKER, A NOVEL BOOK download pdf free. Download kapita selekta kedokteran ebook for android from apkery.com for free. 0. Kapita Selekta Kedokteran Ebook by H. Ismail Scimori, Lener Sercan, Mahmut Berker, Mahmut Togan Ustun, Petru Bratteanu, Tugba Soyke, Volkan Turhan PDF, AZW3, ePUB, TXT, DOC Download ePub book from Kedokteran Selektasi Kerdokteran for free. Kapita Selekta Kedokteran (Ebook) Mahmut Berker. 0. Posted On April 01, 2011 by hame günaydin with No comments. Kapita Selekta Kedokteran (Ebook) Mahmut Berker. 0. Kapita Selekta Kedokteran (Ebook) Mahmut Berker. It was made with the express intention of provoking that kind of reaction from observers. Mahmut Berker has been appointed the head of the Radio-Television Supreme Council, according to a. Kapita Selekta Kedokteran (Ebook) by Mahmut Berker. Follow Sizin İçin Twitterİndir. If you use any of these. 4. It contains a variety of cross-cultural situations, which can be easily applied to our day-to-day life as well. 0. Kapita Selekta Kedokteran (Ebook) by Mahmut Berker. Video lengths: Video Collection. Download here.'In this novel, the author describes and analyses inter-racial, inter-religious, and inter-class conflicts that frequently appear in the literature written by the members of the Turkish intelligentsia. 0. Published by Valyaz Posta ePub, htm, TXT, Kindle Edizioni Kedokteran KAPITA SELÜĞTÜĞ SAVASI. Kapita Selekta Kedokteran. Download. 4. Published by Valyaz Posta ePub, htm, TXT, Kindle Edizioni Kedokteran KAPITA SELÜĞTÜĞ SAVASI 4fefd39f24
-
-
-

diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Marineford War Full Download Watch the Epic Battle Between Whitebeard and the Marines.md b/spaces/gotiQspiryo/whisper-ui/examples/Marineford War Full Download Watch the Epic Battle Between Whitebeard and the Marines.md deleted file mode 100644 index 2cc9d0f70e9b09bef809190255ef70d1051767cd..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Marineford War Full Download Watch the Epic Battle Between Whitebeard and the Marines.md +++ /dev/null @@ -1,6 +0,0 @@ -

Marineford War Full Download


Download Zip ——— https://urlgoal.com/2uyNmV



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Microeconomics With Calculus Binger Hoffman Solutions Rar !!EXCLUSIVE!!.md b/spaces/gotiQspiryo/whisper-ui/examples/Microeconomics With Calculus Binger Hoffman Solutions Rar !!EXCLUSIVE!!.md deleted file mode 100644 index 9937c0920a231283ccee381ae54d5ec953963417..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Microeconomics With Calculus Binger Hoffman Solutions Rar !!EXCLUSIVE!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

microeconomics with calculus binger hoffman solutions rar


DOWNLOADhttps://urlgoal.com/2uyN52



-
- 3cee63e6c2
-
-
-

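The file removed next, fairseq/modules/quantization/quantization_options.py, defines parse_config_yaml, which converts a dict loaded from YAML into the quantization options (centroid counts, block sizes, and regexes selecting which layers to quantize). A minimal usage sketch follows; the input values are hypothetical, only the function and its expected key/value layout come from the file itself:

    # sketch assuming the module below is importable from an installed fairseq tree
    from fairseq.modules.quantization.quantization_options import parse_config_yaml

    # dict as it would look after yaml.safe_load() on a user config (hypothetical values)
    yaml_data = {
        "n_centroids": {
            "Linear": {"key": "in_features", "value": {"*": 256}},
        },
        "block_sizes": {
            "Linear": {"key": "fuzzy_name", "value": {"fc": 8, "attn": 4}},
        },
        "layers_to_quantize": [r"decoder\.layers\.\d+\.fc[12]"],
    }

    options = parse_config_yaml(yaml_data)
    # options["n_centroids"]["Linear"] == ("in_features", {"*": 256})
    # sections missing from the YAML keep the defaults hard-coded in parse_config_yaml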
diff --git a/spaces/gradio/HuBERT/fairseq/modules/quantization/quantization_options.py b/spaces/gradio/HuBERT/fairseq/modules/quantization/quantization_options.py deleted file mode 100644 index b46d682c0edaeaaf2a230e51d50da2a32d4bda98..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/quantization/quantization_options.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -def parse_config_yaml(yaml_data): - # Initialize to default options. - quantization_options = { - "n_centroids": { - "Linear": ["in_features", {"*": 256}], - "Embedding": ["embedding_dim", {"*": 256}], - }, - "block_sizes": { - "Linear": ["fuzzy_name", {"fc": 8, "attn": 4, "emb": 4}], - "Embedding": ["fuzzy_name", {"emb": 8}], - }, - "layers_to_quantize": [ - "decoder\\.layers\\.\\d+\\.fc[12]", - "decoder\\.embed_tokens\\.embeddings\\.[012]\\.[01]", - "decoder\\.layers\\.\\d+\\.self_attn\\.(k_proj|v_proj|q_proj|out_proj)", - ], - } - - if "n_centroids" in yaml_data: - quantization_options["n_centroids"] = { - layer: convert_yaml_to_tuple(layer_data) - for layer, layer_data in yaml_data["n_centroids"].items() - } - if "block_sizes" in yaml_data: - quantization_options["block_sizes"] = { - layer: convert_yaml_to_tuple(layer_data) - for layer, layer_data in yaml_data["block_sizes"].items() - } - if "layers_to_quantize" in yaml_data: - quantization_options["layers_to_quantize"] = yaml_data["layers_to_quantize"] - - return quantization_options - - -def convert_yaml_to_tuple(yaml_dictionary): - """Converts a yaml dictionary with two keys: `key` and `value` into a two - argument tuple of those values.""" - return (yaml_dictionary["key"], yaml_dictionary["value"]) diff --git a/spaces/gradio/HuBERT/tests/test_train.py b/spaces/gradio/HuBERT/tests/test_train.py deleted file mode 100644 index 65f4683bc67ca80c81bf1d2c27be621b57f7df94..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/tests/test_train.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
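-# The tests below exercise checkpoint_utils.load_checkpoint with mocked trainers
-# and epoch iterators: resuming from partial, full, and missing checkpoints, plus
-# the --finetune-from-model code paths.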
- -import contextlib -import logging -import unittest -from io import StringIO -from unittest.mock import MagicMock, patch - -import torch -from fairseq import checkpoint_utils, data -from omegaconf import OmegaConf - - -def mock_trainer(epoch, num_updates, iterations_in_epoch): - trainer = MagicMock() - trainer.load_checkpoint.return_value = { - "train_iterator": { - "epoch": epoch, - "iterations_in_epoch": iterations_in_epoch, - "shuffle": False, - }, - } - trainer.get_num_updates.return_value = num_updates - return trainer - - -def mock_dict(): - d = MagicMock() - d.pad.return_value = 1 - d.eos.return_value = 2 - d.unk.return_value = 3 - return d - - -def get_trainer_and_epoch_itr(epoch, epoch_size, num_updates, iterations_in_epoch): - tokens = torch.LongTensor(list(range(epoch_size))).view(1, -1) - tokens_ds = data.TokenBlockDataset( - tokens, - sizes=[tokens.size(-1)], - block_size=1, - pad=0, - eos=1, - include_targets=False, - ) - trainer = mock_trainer(epoch, num_updates, iterations_in_epoch) - dataset = data.LanguagePairDataset( - tokens_ds, tokens_ds.sizes, mock_dict(), shuffle=False - ) - epoch_itr = data.EpochBatchIterator( - dataset=dataset, - collate_fn=dataset.collater, - batch_sampler=[[i] for i in range(epoch_size)], - ) - return trainer, epoch_itr - - -def get_mock_cfg(finetune_from_model): - cfg_mock = OmegaConf.create( - { - "checkpoint": { - "optimizer_overrides": "{}", - "reset_dataloader": False, - "reset_meters": False, - "reset_optimizer": False, - "reset_lr_scheduler": False, - "finetune_from_model": finetune_from_model, - "model_parallel_size": 1, - "restore_file": "checkpoint_last.pt", - }, - "common": { - "model_parallel_size": 1, - }, - } - ) - return cfg_mock - - -class TestLoadCheckpoint(unittest.TestCase): - def setUp(self): - self.cfg_mock = get_mock_cfg(None) - self.patches = { - "os.makedirs": MagicMock(), - "os.path.join": MagicMock(), - "os.path.isfile": MagicMock(return_value=True), - "os.path.isabs": MagicMock(return_value=False), - "fairseq.file_io.PathManager.exists": MagicMock(return_value=False), - } - self.applied_patches = [patch(p, d) for p, d in self.patches.items()] - [p.start() for p in self.applied_patches] - logging.disable(logging.CRITICAL) - - def tearDown(self): - patch.stopall() - logging.disable(logging.NOTSET) - - def test_load_partial_checkpoint(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(2, 150, 200, 50) - trainer.get_train_iterator = MagicMock(return_value=epoch_itr) - - _, epoch_itr = checkpoint_utils.load_checkpoint( - self.cfg_mock.checkpoint, trainer - ) - - self.assertEqual(epoch_itr.epoch, 2) - self.assertEqual(epoch_itr.iterations_in_epoch, 50) - - itr = epoch_itr.next_epoch_itr(shuffle=False) - self.assertEqual(epoch_itr.epoch, 2) - self.assertEqual(epoch_itr.iterations_in_epoch, 50) - - self.assertEqual(next(itr)["net_input"]["src_tokens"][0].item(), 50) - self.assertEqual(epoch_itr.iterations_in_epoch, 51) - - for _ in range(150 - 52): - next(itr) - self.assertEqual(epoch_itr.iterations_in_epoch, 149) - self.assertTrue(itr.has_next()) - next(itr) - self.assertFalse(itr.has_next()) - - itr = epoch_itr.next_epoch_itr(shuffle=False) - self.assertTrue(itr.has_next()) - self.assertEqual(epoch_itr.epoch, 3) - self.assertEqual(epoch_itr.iterations_in_epoch, 0) - - def test_load_full_checkpoint(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(2, 150, 300, 150) - trainer.get_train_iterator = 
MagicMock(return_value=epoch_itr) - - _, epoch_itr = checkpoint_utils.load_checkpoint( - self.cfg_mock.checkpoint, trainer - ) - itr = epoch_itr.next_epoch_itr(shuffle=False) - - self.assertEqual(epoch_itr.epoch, 3) - self.assertEqual(epoch_itr.iterations_in_epoch, 0) - self.assertEqual(next(itr)["net_input"]["src_tokens"][0].item(), 0) - - def test_load_no_checkpoint(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(1, 150, 0, 0) - trainer.get_train_iterator = MagicMock(return_value=epoch_itr) - self.patches["os.path.isfile"].return_value = False - - _, epoch_itr = checkpoint_utils.load_checkpoint( - self.cfg_mock.checkpoint, trainer - ) - itr = epoch_itr.next_epoch_itr(shuffle=False) - - self.assertEqual(epoch_itr.epoch, 1) - self.assertEqual(epoch_itr.iterations_in_epoch, 0) - self.assertEqual(next(itr)["net_input"]["src_tokens"][0].item(), 0) - - def test_finetune_from_model_args_conflict(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(1, 150, 0, 0) - trainer.get_train_iterator = MagicMock(return_value=epoch_itr) - - for arg in [ - "reset_optimizer", - "reset_lr_scheduler", - "reset_meters", - "reset_dataloader", - ]: - with self.subTest(arg=arg): - cfg_mock = get_mock_cfg("/temp/checkpoint_pretrained.pt") - cfg_mock["checkpoint"][arg] = True - with self.assertRaises(Exception) as context: - _, _ = checkpoint_utils.load_checkpoint( - cfg_mock.checkpoint, trainer - ) - - self.assertTrue( - "--finetune-from-model can not be set together with either --reset-optimizer" - " or reset_lr_scheduler or reset_meters or reset_dataloader" - in str(context.exception) - ) - - def test_finetune_from_model(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(1, 150, 0, 0) - trainer.get_train_iterator = MagicMock(return_value=epoch_itr) - from_model_path = "/temp/checkpoint_pretrained.pt" - - def mock_finetune_exist(path): - if path == from_model_path: - return True - else: - return False - - self.patches[ - "fairseq.file_io.PathManager.exists" - ].side_effect = mock_finetune_exist - cfg_mock = get_mock_cfg(from_model_path) - cfg_mock.checkpoint.restore_file = "checkpoint_last.pt" - _, _ = checkpoint_utils.load_checkpoint(cfg_mock.checkpoint, trainer) - ( - checkpoint_path, - reset_optimizer, - reset_lr_scheduler, - optimizer_overrides, - ) = trainer.load_checkpoint.call_args[0] - reset_meters = trainer.load_checkpoint.call_args[1]["reset_meters"] - self.assertTrue(reset_optimizer) - self.assertTrue(reset_lr_scheduler) - self.assertTrue(reset_meters) - - def test_finetune_from_model_resume(self): - with contextlib.redirect_stdout(StringIO()): - trainer, epoch_itr = get_trainer_and_epoch_itr(1, 150, 0, 0) - trainer.get_train_iterator = MagicMock(return_value=epoch_itr) - from_model_path = "/temp/checkpoint_pretrained.pt" - - # launch second time - # both restore_file=checkpoint_last.pt and finetune_from_model are set - def mock_finetune_exist(path): - if path == from_model_path or path.endsWith("checkpoint_last.pt"): - return True - else: - return False - - self.patches[ - "fairseq.file_io.PathManager.exists" - ].side_effect = mock_finetune_exist - cfg_mock = get_mock_cfg(from_model_path) - cfg_mock.checkpoint.restore_file = "checkpoint_last.pt" - _, _ = checkpoint_utils.load_checkpoint(cfg_mock.checkpoint, trainer) - ( - checkpoint_path, - reset_optimizer, - reset_lr_scheduler, - optimizer_overrides, - ) = trainer.load_checkpoint.call_args[0] - 
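-            # finetuning from a pretrained model is expected to force a fresh
-            # optimizer, lr scheduler and meters, hence the three assertions below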
reset_meters = trainer.load_checkpoint.call_args[1]["reset_meters"] - self.assertFalse(reset_optimizer) - self.assertFalse(reset_lr_scheduler) - self.assertFalse(reset_meters) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/gradio/gpt-neo/run_experiment.py b/spaces/gradio/gpt-neo/run_experiment.py deleted file mode 100644 index ed6c243a149411b2c896537f87deeb2c9c59558d..0000000000000000000000000000000000000000 --- a/spaces/gradio/gpt-neo/run_experiment.py +++ /dev/null @@ -1,265 +0,0 @@ -import atexit -import sacred -import argparse -import time -import math -import subprocess -import shutil -import os -import json -import threading -import requests -import glob -from configs import fetch_model_params -import socket -import subprocess -import queue -import sys -import signal - - -parser = argparse.ArgumentParser() -parser.add_argument('--tpu', type=str, required=True) # Name of TPU to train on, if any -parser.add_argument('--model', type=str, required=True) # JSON file that contains model parameters -parser.add_argument('--experiment_name', type=str, required=True) # name of experiment (will show up in omniboard) -parser.add_argument('--steps_per_checkpoint', type=int, default=5000) -parser.add_argument('--autostack', action="store_false") -parser.add_argument('--auto_layout', action="store_true") -parser.add_argument('--auto_layout_and_mesh_shape', action="store_true") -parser.add_argument('--new', action='store_true') -parser.add_argument('--test', action='store_true') -parser.add_argument('--eval', action='store_true') -parser.add_argument('--predict', action='store_true') -parser.add_argument('--no_delete_tpu', action='store_true') -parser.add_argument('--initial_heartbeat_timeout', type=int, default=7200) -parser.add_argument('--heartbeat_timeout', type=int, default=1800) # kill and restart if nothing logged to tensorboard in this many seconds -args = parser.parse_args() - -params = fetch_model_params(args.model) - -ex = sacred.Experiment(args.experiment_name) -ex.observers.append(sacred.observers.QueuedMongoObserver(url='127.0.0.1:27017', db_name='db', username='user', password='password')) - - -def get_open_port(lo=8000, hi=8100): - for i in range(lo, hi): - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: - if s.connect_ex(('localhost', i)) != 0: - return i - - -def train_thread(args, tpu, id, q): - print('starting training on', tpu) - - # pass binary flags through - opts = '' - for flag in ['auto_layout', 'auto_layout_and_mesh_shape', 'new', 'test', 'predict', 'eval', ]: - if args.__getattribute__(flag): - opts += ' --' + flag - - for flag in ['autostack', ]: - if not args.__getattribute__(flag): - opts += ' --' + flag - - cmd = "python3 main.py --tpu {tpu} --model run_configs/config_{id}.json --steps_per_checkpoint {steps_per_checkpoint} {opts} --sacred_id {run_id}".format(tpu=tpu, id=id, steps_per_checkpoint=args.steps_per_checkpoint, opts=opts, run_id=id) - print('Running:', cmd) - proc = subprocess.Popen(cmd, shell=True) - - # poll until it's exited - while proc.poll() is None: - time.sleep(60) - try: - nq, *nargs = q.get_nowait() - if nq == 'kill': - print('train thread recieved kill signal from logging thread') - # first send SIGTERM - proc.terminate() - - time.sleep(60) - - # if it still hasn't exited, we send SIGKILL - if proc.poll() is None: - print('SIGTERM not successful, sending SIGKILL') - proc.kill() - - except queue.Empty: - pass - - print('exited training!') - if proc.returncode == 0: - print('exited gracefully') - 
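-        # a clean exit means training finished normally; interrupt our own process
-        # (SIGINT) so the sacred run and the logging loop in main() stop as well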
os.kill(os.getpid(), signal.SIGINT) - return - - if args.no_delete_tpu: - print('recreate done, exiting train_thread - not killing tpu!') - return - print("Recreating {} in 60sec...".format(tpu)) - time.sleep(60) - os.system("pu recreate {} --yes --retry 3600 --retry-randomness 1.5".format(tpu)) - print('recreate done, exiting train_thread') - - # clear out queue - while True: - try: - q.get_nowait() - print('dropped request in queue after pu recreate') - except queue.Empty: - break - - -def get_json(uri, params=None, timeout=15): - resp = requests.get(uri, params=params, timeout=timeout) - resp.raise_for_status() - return resp.json() - - -def get_tag_sets(base_uri): - j = get_json(f'{base_uri}/data/plugin/scalars/tags', {'experiment': ''}) - assert isinstance(j, dict) - return { - run: j[run].keys() - for run in j.keys() - } - - -def get_scalar_data(base_uri, run, tag): - j = get_json(f'{base_uri}/data/plugin/scalars/scalars', {'experiment': '', 'run': run, 'tag': tag}) - assert isinstance(j, list) - return j - - -def get_run_data(port): - base_uri = f'http://localhost:{port}/' - r = {} - try: - tag_sets = get_tag_sets(base_uri) - runs = tag_sets.keys() - if '.' in runs: - if 'loss' in tag_sets['.']: - r['loss'] = get_scalar_data(base_uri, '.', 'loss') - if 'eval' in runs: - if 'loss' in tag_sets['eval']: - r['val_loss'] = get_scalar_data(base_uri, 'eval', 'loss') - if 'eval_lambada' in runs: - if 'lambada_acc' in tag_sets['eval_lambada']: - r['lambada_acc'] = get_scalar_data(base_uri, 'eval_lambada', 'lambada_acc') - if 'lambada_log_ppl' in tag_sets['eval_lambada']: - r['lambada_ppl'] = [ - [t, s, math.exp(lp)] - for [t, s, lp] in get_scalar_data(base_uri, 'eval_lambada', 'lambada_log_ppl') - ] - except: - import traceback - traceback.print_exc() - return r - - -@ex.main -def main(_run): - print('Starting run', _run._id) - print('experiment main invoked with argv:', " ".join(sys.argv)) - print('WARNING: please remember to remove old metric log files from the model directory.') - - os.makedirs('run_configs', exist_ok=True) - shutil.copy(args.model if args.model.endswith('.json') else 'configs/{}.json'.format(args.model), 'run_configs/config_{}.json'.format(_run._id)) - - tensorboard_port = get_open_port() - print('Tensorboard at port:', tensorboard_port) - print('Tensorboard url: ', 'http://eleutherai.bmk.sh:'+ str(tensorboard_port)) - os.system("screen -S tensorboard_{} -d -m bash -c 'tensorboard --logdir {} --port {} --bind_all --reload_multifile=true || tensorboard --logdir {} --port {} --reload_multifile=true'".format(_run._id, params["model_path"], tensorboard_port,params["model_path"], tensorboard_port,)) - atexit.register(goodbye, _run._id) - - curr_step = {} - seen_predictions = set() - - heartbeat_timeout = args.initial_heartbeat_timeout * 2 - while True: - last_tb_log_time = time.time() - start_time = time.time() - q = queue.Queue() - trainthd = threading.Thread(target=train_thread, args=(args, args.tpu, _run._id, q)) - trainthd.start() - - while trainthd.is_alive(): - time.sleep(60) - - if start_time + args.initial_heartbeat_timeout < time.time(): - # after initial args.initial_heartbeat_timeout grace period, now we want to set the timeout threshold much lower - heartbeat_timeout = args.heartbeat_timeout - - print('Polling tensorboard for metrics...') - data = get_run_data(tensorboard_port) - for k in data.keys(): - for ts, step, val in data[k]: - if step <= curr_step.get(k, -1): - continue - _run.log_scalar(k, val, step) - if k == 'loss': - _run.log_scalar('tb_ts', ts, step) - 
print('Logged to sacred: step={},loss={},tb_ts={}'.format(step, val, ts)) - - # found something new, so logging! - last_tb_log_time = time.time() - - curr_step[k] = step - - for f in glob.glob('predictions_{}_*'.format(_run._id)): - if f in seen_predictions: - continue - print('collecting prediction file', f) - ex.add_artifact(f) - - seen_predictions.add(f) - - # collect eval metrics from jsonl - if os.path.exists(f'eval_{_run._id}.jsonl'): - with open(f'eval_{_run._id}.jsonl') as fh: - for line in fh: - ob = json.loads(line) - val_step = ob['global_step'] - val_task = ob['task'] - for metr in ob.keys(): - k = 'fs.' + val_task + '.' + metr - if metr in ['task', 'global_step']: continue - if val_step <= curr_step.get(k, -1): continue - _run.log_scalar(k, ob[metr], val_step) - curr_step[k] = val_step - - if time.time() - last_tb_log_time > heartbeat_timeout: - # the run hasn't logged in a while, so we restart it - q.put(('kill',)) - - # give training thread some time to do its thing and recreate tpu - while trainthd.is_alive(): - print('logging thread waiting for killing stalled run and for tpu recreate to finish') - time.sleep(60) - - # reset heartbeat timeout to initial - heartbeat_timeout = args.initial_heartbeat_timeout - last_tb_log_time = time.time() - - - if args.no_delete_tpu: - break - - -def goodbye(id): - print("You are now leaving the Python sector.") - print("Sie verlassen den pythonischen Sektor.") - - os.system("screen -S tensorboard_{} -X quit".format(id)) - - -if __name__ == '__main__': - for file in glob.glob("**/*", recursive=True): - if file.split('.')[-1] in ['py']: - print('Adding', file, 'to sacred') - ex.add_source_file(file) - - ex.add_config({ - 'tpu_name': args.tpu, - **params - }) - - ex.run() diff --git a/spaces/gradio/longformer/README.md b/spaces/gradio/longformer/README.md deleted file mode 100644 index bd15613aa442ec6cf7ee4e54e0a16fd0077ca84f..0000000000000000000000000000000000000000 --- a/spaces/gradio/longformer/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: LongformerQA -emoji: 🏢 -colorFrom: pink -colorTo: pink -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/prettier.config.js b/spaces/gsaivinay/Llama-2-13B-GGML-UI/prettier.config.js deleted file mode 100644 index daf4139177fd80181d50b1542647a69cd76fcac4..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/prettier.config.js +++ /dev/null @@ -1,25 +0,0 @@ -module.exports = { - trailingComma: 'all', - singleQuote: true, - plugins: [ - 'prettier-plugin-tailwindcss', - '@trivago/prettier-plugin-sort-imports', - ], - importOrder: [ - 'react', // React - '^react-.*$', // React-related imports - '^next', // Next-related imports - '^next-.*$', // Next-related imports - '^next/.*$', // Next-related imports - '^.*/hooks/.*$', // Hooks - '^.*/services/.*$', // Services - '^.*/utils/.*$', // Utils - '^.*/types/.*$', // Types - '^.*/pages/.*$', // Components - '^.*/components/.*$', // Components - '^[./]', // Other imports - '.*', // Any uncaught imports - ], - importOrderSeparation: true, - importOrderSortSpecifiers: true, -}; diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/op_edit/__init__.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/op_edit/__init__.py deleted file mode 100644 index d2a7efe79d871852affd9de7b46f726a7942f218..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/op_edit/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -from .fused_act import FusedLeakyReLU, fused_leaky_relu -from .upfirdn2d import upfirdn2d diff --git a/spaces/hamacojr/CAT-Seg/demo/demo.py b/spaces/hamacojr/CAT-Seg/demo/demo.py deleted file mode 100644 index 2105b06ba1aa7fcafac51965035df5905ec974d7..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/demo/demo.py +++ /dev/null @@ -1,194 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from: https://github.com/facebookresearch/detectron2/blob/master/demo/demo.py -import argparse -import glob -import multiprocessing as mp -import os - -# fmt: off -import sys -sys.path.insert(1, os.path.join(sys.path[0], '..')) -# fmt: on - -import tempfile -import time -import warnings - -import cv2 -import numpy as np -import tqdm - -from detectron2.config import get_cfg -from detectron2.data.detection_utils import read_image -from detectron2.projects.deeplab import add_deeplab_config -from detectron2.utils.logger import setup_logger - -from mask_former import add_mask_former_config -from predictor import VisualizationDemo - - -# constants -WINDOW_NAME = "MaskFormer demo" - - -def setup_cfg(args): - # load config from file and command-line arguments - cfg = get_cfg() - add_deeplab_config(cfg) - add_mask_former_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - return cfg - - -def get_parser(): - parser = argparse.ArgumentParser(description="Detectron2 demo for builtin configs") - parser.add_argument( - "--config-file", - default="configs/ade20k-150/maskformer_R50_bs16_160k.yaml", - metavar="FILE", - help="path to config file", - ) - parser.add_argument("--webcam", action="store_true", help="Take inputs from webcam.") - parser.add_argument("--video-input", help="Path to video file.") - parser.add_argument( - "--input", - nargs="+", - help="A list of space separated input images; " - "or a single glob pattern such as 'directory/*.jpg'", - ) - parser.add_argument( - "--output", - help="A file or directory to save output visualizations. 
" - "If not given, will show output in an OpenCV window.", - ) - - parser.add_argument( - "--confidence-threshold", - type=float, - default=0.5, - help="Minimum score for instance predictions to be shown", - ) - parser.add_argument( - "--opts", - help="Modify config options using the command-line 'KEY VALUE' pairs", - default=[], - nargs=argparse.REMAINDER, - ) - return parser - - -def test_opencv_video_format(codec, file_ext): - with tempfile.TemporaryDirectory(prefix="video_format_test") as dir: - filename = os.path.join(dir, "test_file" + file_ext) - writer = cv2.VideoWriter( - filename=filename, - fourcc=cv2.VideoWriter_fourcc(*codec), - fps=float(30), - frameSize=(10, 10), - isColor=True, - ) - [writer.write(np.zeros((10, 10, 3), np.uint8)) for _ in range(30)] - writer.release() - if os.path.isfile(filename): - return True - return False - - -if __name__ == "__main__": - mp.set_start_method("spawn", force=True) - args = get_parser().parse_args() - setup_logger(name="fvcore") - logger = setup_logger() - logger.info("Arguments: " + str(args)) - - cfg = setup_cfg(args) - - demo = VisualizationDemo(cfg) - - if args.input: - if len(args.input) == 1: - args.input = glob.glob(os.path.expanduser(args.input[0])) - assert args.input, "The input path(s) was not found" - for path in tqdm.tqdm(args.input, disable=not args.output): - # use PIL, to be consistent with evaluation - img = read_image(path, format="BGR") - start_time = time.time() - predictions, visualized_output = demo.run_on_image(img) - logger.info( - "{}: {} in {:.2f}s".format( - path, - "detected {} instances".format(len(predictions["instances"])) - if "instances" in predictions - else "finished", - time.time() - start_time, - ) - ) - - if args.output: - if os.path.isdir(args.output): - assert os.path.isdir(args.output), args.output - out_filename = os.path.join(args.output, os.path.basename(path)) - else: - assert len(args.input) == 1, "Please specify a directory with args.output" - out_filename = args.output - visualized_output.save(out_filename) - else: - cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL) - cv2.imshow(WINDOW_NAME, visualized_output.get_image()[:, :, ::-1]) - if cv2.waitKey(0) == 27: - break # esc to quit - elif args.webcam: - assert args.input is None, "Cannot have both --input and --webcam!" - assert args.output is None, "output not yet supported with --webcam!" 
- cam = cv2.VideoCapture(0) - for vis in tqdm.tqdm(demo.run_on_video(cam)): - cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL) - cv2.imshow(WINDOW_NAME, vis) - if cv2.waitKey(1) == 27: - break # esc to quit - cam.release() - cv2.destroyAllWindows() - elif args.video_input: - video = cv2.VideoCapture(args.video_input) - width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH)) - height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)) - frames_per_second = video.get(cv2.CAP_PROP_FPS) - num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT)) - basename = os.path.basename(args.video_input) - codec, file_ext = ( - ("x264", ".mkv") if test_opencv_video_format("x264", ".mkv") else ("mp4v", ".mp4") - ) - if codec == ".mp4v": - warnings.warn("x264 codec not available, switching to mp4v") - if args.output: - if os.path.isdir(args.output): - output_fname = os.path.join(args.output, basename) - output_fname = os.path.splitext(output_fname)[0] + file_ext - else: - output_fname = args.output - assert not os.path.isfile(output_fname), output_fname - output_file = cv2.VideoWriter( - filename=output_fname, - # some installation of opencv may not support x264 (due to its license), - # you can try other format (e.g. MPEG) - fourcc=cv2.VideoWriter_fourcc(*codec), - fps=float(frames_per_second), - frameSize=(width, height), - isColor=True, - ) - assert os.path.isfile(args.video_input) - for vis_frame in tqdm.tqdm(demo.run_on_video(video), total=num_frames): - if args.output: - output_file.write(vis_frame) - else: - cv2.namedWindow(basename, cv2.WINDOW_NORMAL) - cv2.imshow(basename, vis_frame) - if cv2.waitKey(1) == 27: - break # esc to quit - video.release() - if args.output: - output_file.release() - else: - cv2.destroyAllWindows() diff --git a/spaces/hank1996/yolopv2/lib/utils/split_dataset.py b/spaces/hank1996/yolopv2/lib/utils/split_dataset.py deleted file mode 100644 index 7b263477bbd5a9abd300e4e52303cfcc4026590e..0000000000000000000000000000000000000000 --- a/spaces/hank1996/yolopv2/lib/utils/split_dataset.py +++ /dev/null @@ -1,29 +0,0 @@ - -import random -import shutil -import os - -def split(path, mask_path, lane_path): - os.mkdir(path + 'train') - os.mkdir(path + 'val') - os.mkdir(mask_path + 'train') - os.mkdir(mask_path + 'val') - os.mkdir(lane_path + 'train') - os.mkdir(lane_path + 'val') - val_index = random.sample(range(660), 200) - for i in range(660): - if i in val_index: - shutil.move(path+'{}.png'.format(i), path + 'val') - shutil.move(mask_path+'{}.png'.format(i), mask_path + 'val') - shutil.move(lane_path+'{}.png'.format(i), lane_path + 'val') - else: - shutil.move(path+'{}.png'.format(i), path + 'train') - shutil.move(mask_path+'{}.png'.format(i), mask_path + 'train') - shutil.move(lane_path+'{}.png'.format(i), lane_path + 'train') - - -if __name__ == '__main__': - path = "/home/wqm/bdd/data_hust/" - mask_path = "/home/wqm/bdd/hust_area/" - lane_path = "/home/wqm/bdd/hust_lane/" - split(path, mask_path, lane_path) diff --git a/spaces/hank1996/yolopv2/utils/plots.py b/spaces/hank1996/yolopv2/utils/plots.py deleted file mode 100644 index 94c7c886e1be14fcea23373f06fbc4f3b0d3ce95..0000000000000000000000000000000000000000 --- a/spaces/hank1996/yolopv2/utils/plots.py +++ /dev/null @@ -1,434 +0,0 @@ - - -import glob -import math -import os -import random -from copy import copy -from pathlib import Path - -import cv2 -import matplotlib -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -import seaborn as sns -import torch -import yaml -from PIL import Image, ImageDraw, ImageFont -from 
scipy.signal import butter, filtfilt - -from utils.general import xywh2xyxy, xyxy2xywh -from utils.metrics import fitness - -# Settings -matplotlib.rc('font', **{'size': 11}) -matplotlib.use('Agg') # for writing to files only - - -def color_list(): - # Return first 10 plt colors as (r,g,b) https://stackoverflow.com/questions/51350872/python-from-color-name-to-rgb - def hex2rgb(h): - return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4)) - - return [hex2rgb(h) for h in matplotlib.colors.TABLEAU_COLORS.values()] # or BASE_ (8), CSS4_ (148), XKCD_ (949) - - -def hist2d(x, y, n=100): - # 2d histogram used in labels.png and evolve.png - xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n) - hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges)) - xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1) - yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1) - return np.log(hist[xidx, yidx]) - - -def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): - # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy - def butter_lowpass(cutoff, fs, order): - nyq = 0.5 * fs - normal_cutoff = cutoff / nyq - return butter(order, normal_cutoff, btype='low', analog=False) - - b, a = butter_lowpass(cutoff, fs, order=order) - return filtfilt(b, a, data) # forward-backward filter - - -def plot_one_box(x, img, color=None, label=None, line_thickness=3): - # Plots one bounding box on image img - tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness - color = color or [random.randint(0, 255) for _ in range(3)] - c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3])) - cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA) - if label: - tf = max(tl - 1, 1) # font thickness - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3 - cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA) # filled - cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) - - -def plot_one_box_PIL(box, img, color=None, label=None, line_thickness=None): - img = Image.fromarray(img) - draw = ImageDraw.Draw(img) - line_thickness = line_thickness or max(int(min(img.size) / 200), 2) - draw.rectangle(box, width=line_thickness, outline=tuple(color)) # plot - if label: - fontsize = max(round(max(img.size) / 40), 12) - font = ImageFont.truetype("Arial.ttf", fontsize) - txt_width, txt_height = font.getsize(label) - draw.rectangle([box[0], box[1] - txt_height + 4, box[0] + txt_width, box[1]], fill=tuple(color)) - draw.text((box[0], box[1] - txt_height + 1), label, fill=(255, 255, 255), font=font) - return np.asarray(img) - - -def plot_wh_methods(): # from utils.plots import *; plot_wh_methods() - # Compares the two methods for width-height anchor multiplication - # https://github.com/ultralytics/yolov3/issues/168 - x = np.arange(-4.0, 4.0, .1) - ya = np.exp(x) - yb = torch.sigmoid(torch.from_numpy(x)).numpy() * 2 - - fig = plt.figure(figsize=(6, 3), tight_layout=True) - plt.plot(x, ya, '.-', label='YOLOv3') - plt.plot(x, yb ** 2, '.-', label='YOLOR ^2') - plt.plot(x, yb ** 1.6, '.-', label='YOLOR ^1.6') - plt.xlim(left=-4, right=4) - plt.ylim(bottom=0, top=6) - plt.xlabel('input') - plt.ylabel('output') - plt.grid() - plt.legend() - fig.savefig('comparison.png', dpi=200) - - -def output_to_target(output): - # Convert model output to target format [batch_id, class_id, x, y, w, h, 
conf] - targets = [] - for i, o in enumerate(output): - for *box, conf, cls in o.cpu().numpy(): - targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf]) - return np.array(targets) - - -def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16): - # Plot image grid with labels - - if isinstance(images, torch.Tensor): - images = images.cpu().float().numpy() - if isinstance(targets, torch.Tensor): - targets = targets.cpu().numpy() - - # un-normalise - if np.max(images[0]) <= 1: - images *= 255 - - tl = 3 # line thickness - tf = max(tl - 1, 1) # font thickness - bs, _, h, w = images.shape # batch size, _, height, width - bs = min(bs, max_subplots) # limit plot images - ns = np.ceil(bs ** 0.5) # number of subplots (square) - - # Check if we should resize - scale_factor = max_size / max(h, w) - if scale_factor < 1: - h = math.ceil(scale_factor * h) - w = math.ceil(scale_factor * w) - - colors = color_list() # list of colors - mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init - for i, img in enumerate(images): - if i == max_subplots: # if last batch has fewer images than we expect - break - - block_x = int(w * (i // ns)) - block_y = int(h * (i % ns)) - - img = img.transpose(1, 2, 0) - if scale_factor < 1: - img = cv2.resize(img, (w, h)) - - mosaic[block_y:block_y + h, block_x:block_x + w, :] = img - if len(targets) > 0: - image_targets = targets[targets[:, 0] == i] - boxes = xywh2xyxy(image_targets[:, 2:6]).T - classes = image_targets[:, 1].astype('int') - labels = image_targets.shape[1] == 6 # labels if no conf column - conf = None if labels else image_targets[:, 6] # check for confidence presence (label vs pred) - - if boxes.shape[1]: - if boxes.max() <= 1.01: # if normalized with tolerance 0.01 - boxes[[0, 2]] *= w # scale to pixels - boxes[[1, 3]] *= h - elif scale_factor < 1: # absolute coords need scale if image scales - boxes *= scale_factor - boxes[[0, 2]] += block_x - boxes[[1, 3]] += block_y - for j, box in enumerate(boxes.T): - cls = int(classes[j]) - color = colors[cls % len(colors)] - cls = names[cls] if names else cls - if labels or conf[j] > 0.25: # 0.25 conf thresh - label = '%s' % cls if labels else '%s %.1f' % (cls, conf[j]) - plot_one_box(box, mosaic, label=label, color=color, line_thickness=tl) - - # Draw image filename labels - if paths: - label = Path(paths[i]).name[:40] # trim to 40 char - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - cv2.putText(mosaic, label, (block_x + 5, block_y + t_size[1] + 5), 0, tl / 3, [220, 220, 220], thickness=tf, - lineType=cv2.LINE_AA) - - # Image border - cv2.rectangle(mosaic, (block_x, block_y), (block_x + w, block_y + h), (255, 255, 255), thickness=3) - - if fname: - r = min(1280. 
/ max(h, w) / ns, 1.0) # ratio to limit image size - mosaic = cv2.resize(mosaic, (int(ns * w * r), int(ns * h * r)), interpolation=cv2.INTER_AREA) - # cv2.imwrite(fname, cv2.cvtColor(mosaic, cv2.COLOR_BGR2RGB)) # cv2 save - Image.fromarray(mosaic).save(fname) # PIL save - return mosaic - - -def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''): - # Plot LR simulating training for full epochs - optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals - y = [] - for _ in range(epochs): - scheduler.step() - y.append(optimizer.param_groups[0]['lr']) - plt.plot(y, '.-', label='LR') - plt.xlabel('epoch') - plt.ylabel('LR') - plt.grid() - plt.xlim(0, epochs) - plt.ylim(0) - plt.savefig(Path(save_dir) / 'LR.png', dpi=200) - plt.close() - - -def plot_test_txt(): # from utils.plots import *; plot_test() - # Plot test.txt histograms - x = np.loadtxt('test.txt', dtype=np.float32) - box = xyxy2xywh(x[:, :4]) - cx, cy = box[:, 0], box[:, 1] - - fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True) - ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0) - ax.set_aspect('equal') - plt.savefig('hist2d.png', dpi=300) - - fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True) - ax[0].hist(cx, bins=600) - ax[1].hist(cy, bins=600) - plt.savefig('hist1d.png', dpi=200) - - -def plot_targets_txt(): # from utils.plots import *; plot_targets_txt() - # Plot targets.txt histograms - x = np.loadtxt('targets.txt', dtype=np.float32).T - s = ['x targets', 'y targets', 'width targets', 'height targets'] - fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) - ax = ax.ravel() - for i in range(4): - ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std())) - ax[i].legend() - ax[i].set_title(s[i]) - plt.savefig('targets.jpg', dpi=200) - - -def plot_study_txt(path='', x=None): # from utils.plots import *; plot_study_txt() - # Plot study.txt generated by test.py - fig, ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True) - # ax = ax.ravel() - - fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) - # for f in [Path(path) / f'study_coco_{x}.txt' for x in ['yolor-p6', 'yolor-w6', 'yolor-e6', 'yolor-d6']]: - for f in sorted(Path(path).glob('study*.txt')): - y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T - x = np.arange(y.shape[1]) if x is None else np.array(x) - s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_inference (ms/img)', 't_NMS (ms/img)', 't_total (ms/img)'] - # for i in range(7): - # ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8) - # ax[i].set_title(s[i]) - - j = y[3].argmax() + 1 - ax2.plot(y[6, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8, - label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO')) - - ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], - 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet') - - ax2.grid(alpha=0.2) - ax2.set_yticks(np.arange(20, 60, 5)) - ax2.set_xlim(0, 57) - ax2.set_ylim(30, 55) - ax2.set_xlabel('GPU Speed (ms/img)') - ax2.set_ylabel('COCO AP val') - ax2.legend(loc='lower right') - plt.savefig(str(Path(path).name) + '.png', dpi=300) - - -def plot_labels(labels, names=(), save_dir=Path(''), loggers=None): - # plot dataset labels - print('Plotting labels... 
') - c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes - nc = int(c.max() + 1) # number of classes - colors = color_list() - x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height']) - - # seaborn correlogram - sns.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9)) - plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200) - plt.close() - - # matplotlib labels - matplotlib.use('svg') # faster - ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() - ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - ax[0].set_ylabel('instances') - if 0 < len(names) < 30: - ax[0].set_xticks(range(len(names))) - ax[0].set_xticklabels(names, rotation=90, fontsize=10) - else: - ax[0].set_xlabel('classes') - sns.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) - sns.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9) - - # rectangles - labels[:, 1:3] = 0.5 # center - labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000 - img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255) - for cls, *box in labels[:1000]: - ImageDraw.Draw(img).rectangle(box, width=1, outline=colors[int(cls) % 10]) # plot - ax[1].imshow(img) - ax[1].axis('off') - - for a in [0, 1, 2, 3]: - for s in ['top', 'right', 'left', 'bottom']: - ax[a].spines[s].set_visible(False) - - plt.savefig(save_dir / 'labels.jpg', dpi=200) - matplotlib.use('Agg') - plt.close() - - # loggers - for k, v in loggers.items() or {}: - if k == 'wandb' and v: - v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]}, commit=False) - - -def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.plots import *; plot_evolution() - # Plot hyperparameter evolution results in evolve.txt - with open(yaml_file) as f: - hyp = yaml.load(f, Loader=yaml.SafeLoader) - x = np.loadtxt('evolve.txt', ndmin=2) - f = fitness(x) - # weights = (f - f.min()) ** 2 # for weighted results - plt.figure(figsize=(10, 12), tight_layout=True) - matplotlib.rc('font', **{'size': 8}) - for i, (k, v) in enumerate(hyp.items()): - y = x[:, i + 7] - # mu = (y * weights).sum() / weights.sum() # best weighted result - mu = y[f.argmax()] # best single result - plt.subplot(6, 5, i + 1) - plt.scatter(y, f, c=hist2d(y, f, 20), cmap='viridis', alpha=.8, edgecolors='none') - plt.plot(mu, f.max(), 'k+', markersize=15) - plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters - if i % 5 != 0: - plt.yticks([]) - print('%15s: %.3g' % (k, mu)) - plt.savefig('evolve.png', dpi=200) - print('\nPlot saved as evolve.png') - - -def profile_idetection(start=0, stop=0, labels=(), save_dir=''): - # Plot iDetection '*.txt' per-image logs. 
from utils.plots import *; profile_idetection() - ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel() - s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS'] - files = list(Path(save_dir).glob('frames*.txt')) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows - n = results.shape[1] # number of rows - x = np.arange(start, min(stop, n) if stop else n) - results = results[:, x] - t = (results[0] - results[0].min()) # set t0=0s - results[0] = x - for i, a in enumerate(ax): - if i < len(results): - label = labels[fi] if len(labels) else f.stem.replace('frames_', '') - a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5) - a.set_title(s[i]) - a.set_xlabel('time (s)') - # if fi == len(files) - 1: - # a.set_ylim(bottom=0) - for side in ['top', 'right']: - a.spines[side].set_visible(False) - else: - a.remove() - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200) - - -def plot_results_overlay(start=0, stop=0): # from utils.plots import *; plot_results_overlay() - # Plot training 'results*.txt', overlaying train and val losses - s = ['train', 'train', 'train', 'Precision', 'mAP@0.5', 'val', 'val', 'val', 'Recall', 'mAP@0.5:0.95'] # legends - t = ['Box', 'Objectness', 'Classification', 'P-R', 'mAP-F1'] # titles - for f in sorted(glob.glob('results*.txt') + glob.glob('../../Downloads/results*.txt')): - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - fig, ax = plt.subplots(1, 5, figsize=(14, 3.5), tight_layout=True) - ax = ax.ravel() - for i in range(5): - for j in [i, i + 5]: - y = results[j, x] - ax[i].plot(x, y, marker='.', label=s[j]) - # y_smooth = butter_lowpass_filtfilt(y) - # ax[i].plot(x, np.gradient(y_smooth), marker='.', label=s[j]) - - ax[i].set_title(t[i]) - ax[i].legend() - ax[i].set_ylabel(f) if i == 0 else None # add filename - fig.savefig(f.replace('.txt', '.png'), dpi=200) - - -def plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir=''): - # Plot training 'results*.txt'. from utils.plots import *; plot_results(save_dir='runs/train/exp') - fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True) - ax = ax.ravel() - s = ['Box', 'Objectness', 'Classification', 'Precision', 'Recall', - 'val Box', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95'] - if bucket: - # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id] - files = ['results%g.txt' % x for x in id] - c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id) - os.system(c) - else: - files = list(Path(save_dir).glob('results*.txt')) - assert len(files), 'No results.txt files found in %s, nothing to plot.' 
% os.path.abspath(save_dir) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - for i in range(10): - y = results[i, x] - if i in [0, 1, 2, 5, 6, 7]: - y[y == 0] = np.nan # don't show zero loss values - # y /= y[0] # normalize - label = labels[fi] if len(labels) else f.stem - ax[i].plot(x, y, marker='.', label=label, linewidth=2, markersize=8) - ax[i].set_title(s[i]) - # if i in [5, 6, 7]: # share train and val loss y axes - # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - fig.savefig(Path(save_dir) / 'results.png', dpi=200) - diff --git a/spaces/harshvardhansb/ObjectDetection/src/index.js b/spaces/harshvardhansb/ObjectDetection/src/index.js deleted file mode 100644 index c15f402930fea45e21700f946755964e3a10a554..0000000000000000000000000000000000000000 --- a/spaces/harshvardhansb/ObjectDetection/src/index.js +++ /dev/null @@ -1,11 +0,0 @@ -import React from 'react'; -import ReactDOM from 'react-dom'; -import './index.css'; -import App from './App'; - -ReactDOM.render( - - - , - document.getElementById('root') -); \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/backbone/build.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/backbone/build.py deleted file mode 100644 index 3d2ecae783257418708b572e298a23e167dabb26..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/backbone/build.py +++ /dev/null @@ -1,33 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from detectron2.layers import ShapeSpec -from detectron2.utils.registry import Registry - -from .backbone import Backbone - -BACKBONE_REGISTRY = Registry("BACKBONE") -BACKBONE_REGISTRY.__doc__ = """ -Registry for backbones, which extract feature maps from images - -The registered object must be a callable that accepts two arguments: - -1. A :class:`detectron2.config.CfgNode` -2. A :class:`detectron2.layers.ShapeSpec`, which contains the input shape specification. - -It must returns an instance of :class:`Backbone`. -""" - - -def build_backbone(cfg, input_shape=None): - """ - Build a backbone from `cfg.MODEL.BACKBONE.NAME`. 
- - Returns: - an instance of :class:`Backbone` - """ - if input_shape is None: - input_shape = ShapeSpec(channels=len(cfg.MODEL.PIXEL_MEAN)) - - backbone_name = cfg.MODEL.BACKBONE.NAME - backbone = BACKBONE_REGISTRY.get(backbone_name)(cfg, input_shape) - assert isinstance(backbone, Backbone) - return backbone diff --git a/spaces/hero-intelligent/MT3/installation.py b/spaces/hero-intelligent/MT3/installation.py deleted file mode 100644 index 3c356e506ca5809847a96c1c1511489e965a960e..0000000000000000000000000000000000000000 --- a/spaces/hero-intelligent/MT3/installation.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -os.system("pip install gradio") - -import gradio as gr -from pathlib import Path -os.system("pip install gsutil") - - -os.system("git clone --branch=main https://github.com/google-research/t5x") -os.system("mv t5x t5x_tmp; mv t5x_tmp/* .; rm -r t5x_tmp") -os.system("sed -i 's:jax\[tpu\]:jax:' setup.py") -os.system("python3 -m pip install -e .") -os.system("python3 -m pip install --upgrade pip") - - - -# install mt3 -os.system("git clone --branch=main https://github.com/magenta/mt3") -os.system("mv mt3 mt3_tmp; mv mt3_tmp/* .; rm -r mt3_tmp") -os.system("python3 -m pip install -e .") -os.system("pip install tensorflow_cpu") -# copy checkpoints -os.system("gsutil -q -m cp -r gs://mt3/checkpoints .") - -# copy soundfont (originally from https://sites.google.com/site/soundfonts4u) -os.system("gsutil -q -m cp gs://magentadata/soundfonts/SGM-v2.01-Sal-Guit-Bass-V1.3.sf2 .") diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/nd_softmax.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/nd_softmax.py deleted file mode 100644 index 98f3161a1af71dc364b74a56db0d930371206ccb..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/nd_softmax.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import torch -from torch import nn -import torch.nn.functional as F - - -softmax_helper = lambda x: F.softmax(x, 1) - diff --git a/spaces/huy-ha/semabs-relevancy/CLIP/tests/test_consistency.py b/spaces/huy-ha/semabs-relevancy/CLIP/tests/test_consistency.py deleted file mode 100644 index 27d49eaae8721b7ad82d4949f2ab2606c8875d9f..0000000000000000000000000000000000000000 --- a/spaces/huy-ha/semabs-relevancy/CLIP/tests/test_consistency.py +++ /dev/null @@ -1,25 +0,0 @@ -import numpy as np -import pytest -import torch -from PIL import Image - -import clip - - -@pytest.mark.parametrize("model_name", clip.available_models()) -def test_consistency(model_name): - device = "cpu" - jit_model, transform = clip.load(model_name, device=device, jit=True) - py_model, _ = clip.load(model_name, device=device, jit=False) - - image = transform(Image.open("CLIP.png")).unsqueeze(0).to(device) - text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device) - - with torch.no_grad(): - logits_per_image, _ = jit_model(image, text) - jit_probs = logits_per_image.softmax(dim=-1).cpu().numpy() - - logits_per_image, _ = py_model(image, text) - py_probs = logits_per_image.softmax(dim=-1).cpu().numpy() - - assert np.allclose(jit_probs, py_probs, atol=0.01, rtol=0.1) diff --git a/spaces/hzwluoye/gpt4/client/css/message-input.css b/spaces/hzwluoye/gpt4/client/css/message-input.css deleted file mode 100644 index de5f58388133bd3b2b2333dd99cecf0110002367..0000000000000000000000000000000000000000 --- a/spaces/hzwluoye/gpt4/client/css/message-input.css +++ /dev/null @@ -1,27 +0,0 @@ -#message-input { - margin-right: 30px; - height: 64px; -} - -#message-input::-webkit-scrollbar { - width: 5px; -} - -#message-input::-webkit-scrollbar-track { - background: #f1f1f1; -} - -#message-input::-webkit-scrollbar-thumb { - background: #c7a2ff; -} - -#message-input::-webkit-scrollbar-thumb:hover { - background: #8b3dff; -} - -@media screen and (max-width: 360px) { - #message-input { - margin: 0; - } -} - diff --git a/spaces/iamtahiralvi/stabilityai-stable-diffusion-2-1/app.py b/spaces/iamtahiralvi/stabilityai-stable-diffusion-2-1/app.py deleted file mode 100644 index 0160420876923d89f2ab5fccb9f4d13725e29972..0000000000000000000000000000000000000000 --- a/spaces/iamtahiralvi/stabilityai-stable-diffusion-2-1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-2-1").launch() \ No newline at end of file diff --git a/spaces/inamXcontru/PoeticTTS/?z?kb?????k ?ny?r ?rg?kz?o ??.md b/spaces/inamXcontru/PoeticTTS/?z?kb?????k ?ny?r ?rg?kz?o ??.md deleted file mode 100644 index 201dd4555c4a2d9c33dbaadf4070fe7a5a4baf23..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/?z?kb?????k ?ny?r ?rg?kz?o ??.md +++ /dev/null @@ -1,6 +0,0 @@ -

?z?kb?????k ?ny?r ?rg?kz?o ??


Download ••• https://gohhs.com/2uz3Mx



- - aaccfb2cb3
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/AdjProg PX660.rar.md b/spaces/inplisQlawa/anything-midjourney-v4-1/AdjProg PX660.rar.md deleted file mode 100644 index ea0b44bedb93dd60c59c4ecfa602c08db78990e0..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/AdjProg PX660.rar.md +++ /dev/null @@ -1,35 +0,0 @@ - -

# How to Reset Epson PX660 Printer with AdjProg PX660.rar

-

If you have an Epson PX660 printer that is showing errors such as "Service Required" or "Ink Pad Full", you may need to reset it using a software tool called AdjProg PX660.rar. This tool can help you clear the waste ink counter and restore your printer to normal operation. In this article, we will show you how to download and use AdjProg PX660.rar to reset your Epson PX660 printer.

-

AdjProg PX660.rar


Downloadhttps://urlin.us/2uExQc



-

## What is AdjProg PX660.rar?

-

AdjProg PX660.rar is a compressed file that contains an adjustment program for Epson PX660 printer. The adjustment program is a software utility that can modify some settings of the printer, such as resetting the waste ink counter, cleaning the print head, checking the nozzle pattern, etc. The adjustment program can also help you troubleshoot some common problems with your printer, such as paper jam, ink cartridge error, etc.

-

## Where to Download AdjProg PX660.rar?

-

You can download AdjProg PX660.rar from various online sources, such as software websites, file sharing platforms, or blogs. However, you should be careful when downloading files from unknown sources, as they may contain viruses or malware that can harm your computer or printer. Therefore, we recommend you to download AdjProg PX660.rar from a trusted and reliable source, such as [^1^] or [^4^]. These websites offer free and safe downloads of AdjProg PX660.rar and other Epson printer drivers and utilities.

-

## How to Use AdjProg PX660.rar to Reset Epson PX660 Printer?

-

Before using AdjProg PX660.rar to reset your Epson PX660 printer, you should make sure that your printer is connected to your computer via USB cable and turned on. You should also disable any antivirus or firewall software that may interfere with the adjustment program. Then, follow these steps:

-

-
    -
1. Extract the AdjProg PX660.rar file to a folder on your computer. You may need a tool such as WinRAR or 7-Zip to extract compressed files.
2. Open the folder and double-click on the file named AdjProg.exe. This will launch the adjustment program.
3. Select your printer model (Epson PX660) and port (USB) from the drop-down menus and click OK.
4. Click on Particular Adjustment Mode on the main menu.
5. Select Waste Ink Pad Counter from the list of options and click OK.
6. Check the boxes next to Main Pad Counter and Platen Pad Counter and click Check. This will show you the current values of the waste ink counters.
7. Click Initialization to reset the waste ink counters to zero. This will clear the error messages on your printer.
8. Click Finish and close the adjustment program.
9. Turn off your printer and wait for a few seconds. Then turn it back on and check if it works normally.
-

Congratulations! You have successfully reset your Epson PX660 printer with AdjProg PX660.rar. You can now enjoy printing without any errors or interruptions.

- -

## How to Prevent Waste Ink Pad Overflow?

-

Resetting your Epson PX660 printer with AdjProg PX660.rar can help you clear the error messages and resume printing. However, this does not solve the underlying problem of waste ink pad overflow. The waste ink pad is a sponge-like component inside your printer that absorbs the excess ink during printing and cleaning cycles. Over time, the waste ink pad becomes saturated and can no longer absorb any more ink. This can cause ink leakage and damage to your printer.

-

Therefore, it is important to prevent waste ink pad overflow by taking some preventive measures, such as:

-
    -
  • Reducing the frequency of print head cleaning. Print head cleaning consumes a lot of ink and fills up the waste ink pad quickly. You should only perform print head cleaning when necessary, such as when you notice poor print quality or missing colors.
  • -
  • Using genuine Epson ink cartridges. Genuine Epson ink cartridges are designed to work optimally with your printer and produce high-quality prints. They also have a smart chip that monitors the ink level and alerts you when the ink is low or empty. This can help you avoid overfilling or underfilling the ink cartridges, which can cause ink wastage and overflow.
  • -
  • Replacing or modifying the waste ink pad. If your waste ink pad is already full or near full, you should replace it with a new one or modify it with an external waste ink tank. Replacing the waste ink pad requires opening the printer case and removing some parts, which can be risky and void your warranty. Modifying the waste ink pad involves attaching a hose and a bottle to the waste ink outlet, which can be easier and cheaper. However, you should be careful not to damage or spill any ink during the process. You can find some tutorials on how to replace or modify the waste ink pad online, such as .
  • -
-

By following these tips, you can prevent waste ink pad overflow and extend the life of your Epson PX660 printer.

-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC CPY.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC CPY.md deleted file mode 100644 index 72871527c300ac8baaad4050ca28624bbd3b05dd..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC CPY.md +++ /dev/null @@ -1,18 +0,0 @@ -

Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC CPY


DOWNLOADhttps://urlin.us/2uEyAe



- -Models, Sounds, DLC, Cars. Get access to the full game with all of its content. All the cars, tracks, and more are included. - -Description. Download Free Game. Car Mechanic Simulator 2015 Gold Edition V1.1.6.0 Incl ALL DLC Crack. All of the vehicles are included in the game, but you can also get new ones. The game is available in many countries around the world. You will be able to work in the workshop to build cars and trucks. All of the game mechanics are included in the game. If you need to build a car that has no specific engine, you can build a car with a specific engine to make it work. - -You can build a truck or a car with a specific engine. Some cars are also available for purchase. However, you can also use parts from the workshop. The game is played in the city and you can repair vehicles using the workshop. If you need to get a specific car from the museum or the scrapyard, you can do so. You can also run various missions on the go and see what is happening in the world. You can get money for your repairs, and you can use the money to buy things in the city. - -Car Mechanic Simulator - -The game is available in many languages. You will be able to choose between English, Spanish, French, Italian, and German. If you have an account in Steam, you can get the game. You can get the game in a variety of ways. You can download it through the official website. You can also get the game through the Humble Bundle, a huge game collection. You can also get the game from a local store. - -The game allows you to play as a beginner or as a pro mechanic. If you want to play as a pro mechanic, you will need to complete the tutorial. You will also get a full tutorial. The tutorial will teach you how to build cars. You will also get a tutorial that teaches you how to repair vehicles. You will be able to work in the workshop and learn about the entire game. You can also work in the workshop to build the cars, collect scrap to use in the workshop, and repair vehicles. - -The game has various vehicles available in the game. You can use the truck, the van, the taxi, the bus, and the car. You will be able to use the tools to repair the vehicle, and you can then use the vehicle. You can use tools to 4fefd39f24
-
-
-

diff --git a/spaces/inreVtussa/clothingai/Examples/Arunachalamai Vilangidum Lingam Song Downloadl Fix.md b/spaces/inreVtussa/clothingai/Examples/Arunachalamai Vilangidum Lingam Song Downloadl Fix.md deleted file mode 100644 index 91f62a78a4b8f3b3645d395281d9b63a23b3e1dc..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Arunachalamai Vilangidum Lingam Song Downloadl Fix.md +++ /dev/null @@ -1,6 +0,0 @@ -

Arunachalamai Vilangidum Lingam Song Downloadl


DOWNLOAD ✶✶✶ https://tiurll.com/2uCkGd



- -ஓமஓம நமசிவாய 108 Lingam Song 108 லிஙலிஙகமகம பாடலபாடல Pradhosha Song பிரதோஷபிரதோஷபிரதோஷதிலகேட கேடகேடக & nbsp; # ## February 24, 2019 - Arunachalamai Vilangidum Lingam Song 121 House B Brahma Murariyar Potridum Lingaastakam Devotional texts.. May 11, 2018 . —— . May 12, 2018 - Sriman Ramanujacharya, Sri Sivananda Paramahamsa, Sri Svavivarta Swami, Sri Maheshwara Swami and Sri Brahmananda Swami at Tirupati temple. . May 11, 2018 - Sri Brahmananda Swami, Sri Sivananda Paramahamsa 8a78ff9644
-
-
-

diff --git a/spaces/inreVtussa/clothingai/Examples/Ben 10 Alien Swarm Full Movie In Hindi UPD Download.md b/spaces/inreVtussa/clothingai/Examples/Ben 10 Alien Swarm Full Movie In Hindi UPD Download.md deleted file mode 100644 index 1d296fc48389867664280a931d20d68c44cd1b11..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Ben 10 Alien Swarm Full Movie In Hindi UPD Download.md +++ /dev/null @@ -1,32 +0,0 @@ -

ben 10 alien swarm full movie in hindi download


Download Zip https://tiurll.com/2uCm8j



-
-Ben 10: Alien Swarm is an action-adventure game developed by Warthog and published by Vivendi Universal Games for the Xbox 360 and PlayStation 3. It is the fourth entry in the Ben 10 video game series and is set in the Ben 10 Alien Force universe. - -The game features a storyline that takes place after the events of Ben 10 Alien Force and takes Ben 10 and the rest of the Alien Force team into the alien-controlled ocean to find the missing lab pieces necessary to free aliens from stasis. The game features elements of third-person shooter gameplay and includes sub-aquatic level designs. Ben 10: Alien Swarm is Ben 10s first game to be released on the PlayStation 3 as well as being the first Ben 10 game to be released in Asia. - -You can play with Ben 10: Alien Swarm in the Hindi language. It's supported by the Indian version of PlayStation Store. This version is not available in the United States, United Kingdom, or any other country. In the United States, the game will be available for Xbox Live Arcade. - -You can watch and download the video Ben 10: Alien Swarm in the Hindi language. To play the video Ben 10: Alien Swarm in Hindi, you need an Xbox Live Gold membership. If you do not have an Xbox Live Gold membership, you can buy one. - -The video Ben 10: Alien Swarm has been rated by the Motion Picture Association of America. The game has an M rating for "mild violence" as well as "blood and gore", which is present throughout the game. The rating also includes some language and mild drug use. - -1.You can watch Ben 10: Alien Swarm video in full screen mode by clicking the button above. - -2.You can watch Ben 10: Alien Swarm video in full screen mode by clicking the button above. - -3.You can watch Ben 10: Alien Swarm video in full screen mode by clicking the button above. - -4.You can watch Ben 10: Alien Swarm video in full screen mode by clicking the button above. - -5.You can watch Ben 10: Alien Swarm video in full screen mode by clicking the button above. - -6.You can watch Ben 10: Alien Swarm video in full screen mode by clicking the button above. - -7.You can watch Ben 10: Alien Swarm video in full screen mode by clicking the button above. - -8.You can watch Ben 10: Alien Swarm video in full screen mode by clicking the button above. - -9.You can 4fefd39f24
-
-
-

diff --git a/spaces/isaiah08/dalle-mini-test/html2canvas.js b/spaces/isaiah08/dalle-mini-test/html2canvas.js deleted file mode 100644 index 96e2dc5707b1a584ff7b3b583aea7c6c18d4ea76..0000000000000000000000000000000000000000 --- a/spaces/isaiah08/dalle-mini-test/html2canvas.js +++ /dev/null @@ -1,7756 +0,0 @@ -/*! - * html2canvas 1.4.1 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ -(function (global, factory) { - typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() : - typeof define === 'function' && define.amd ? define(factory) : - (global = typeof globalThis !== 'undefined' ? globalThis : global || self, global.html2canvas = factory()); -}(this, (function () { 'use strict'; - - /*! ***************************************************************************** - Copyright (c) Microsoft Corporation. - - Permission to use, copy, modify, and/or distribute this software for any - purpose with or without fee is hereby granted. - - THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH - REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY - AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, - INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM - LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR - OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR - PERFORMANCE OF THIS SOFTWARE. - ***************************************************************************** */ - /* global Reflect, Promise */ - - var extendStatics = function(d, b) { - extendStatics = Object.setPrototypeOf || - ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) || - function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; }; - return extendStatics(d, b); - }; - - function __extends(d, b) { - if (typeof b !== "function" && b !== null) - throw new TypeError("Class extends value " + String(b) + " is not a constructor or null"); - extendStatics(d, b); - function __() { this.constructor = d; } - d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __()); - } - - var __assign = function() { - __assign = Object.assign || function __assign(t) { - for (var s, i = 1, n = arguments.length; i < n; i++) { - s = arguments[i]; - for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p]; - } - return t; - }; - return __assign.apply(this, arguments); - }; - - function __awaiter(thisArg, _arguments, P, generator) { - function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); } - return new (P || (P = Promise))(function (resolve, reject) { - function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } - function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } - function step(result) { result.done ? 
resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } - step((generator = generator.apply(thisArg, _arguments || [])).next()); - }); - } - - function __generator(thisArg, body) { - var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g; - return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g; - function verb(n) { return function (v) { return step([n, v]); }; } - function step(op) { - if (f) throw new TypeError("Generator is already executing."); - while (_) try { - if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t; - if (y = 0, t) op = [op[0] & 2, t.value]; - switch (op[0]) { - case 0: case 1: t = op; break; - case 4: _.label++; return { value: op[1], done: false }; - case 5: _.label++; y = op[1]; op = [0]; continue; - case 7: op = _.ops.pop(); _.trys.pop(); continue; - default: - if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; } - if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; } - if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; } - if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; } - if (t[2]) _.ops.pop(); - _.trys.pop(); continue; - } - op = body.call(thisArg, _); - } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; } - if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true }; - } - } - - function __spreadArray(to, from, pack) { - if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) { - if (ar || !(i in from)) { - if (!ar) ar = Array.prototype.slice.call(from, 0, i); - ar[i] = from[i]; - } - } - return to.concat(ar || from); - } - - var Bounds = /** @class */ (function () { - function Bounds(left, top, width, height) { - this.left = left; - this.top = top; - this.width = width; - this.height = height; - } - Bounds.prototype.add = function (x, y, w, h) { - return new Bounds(this.left + x, this.top + y, this.width + w, this.height + h); - }; - Bounds.fromClientRect = function (context, clientRect) { - return new Bounds(clientRect.left + context.windowBounds.left, clientRect.top + context.windowBounds.top, clientRect.width, clientRect.height); - }; - Bounds.fromDOMRectList = function (context, domRectList) { - var domRect = Array.from(domRectList).find(function (rect) { return rect.width !== 0; }); - return domRect - ? 
new Bounds(domRect.left + context.windowBounds.left, domRect.top + context.windowBounds.top, domRect.width, domRect.height) - : Bounds.EMPTY; - }; - Bounds.EMPTY = new Bounds(0, 0, 0, 0); - return Bounds; - }()); - var parseBounds = function (context, node) { - return Bounds.fromClientRect(context, node.getBoundingClientRect()); - }; - var parseDocumentSize = function (document) { - var body = document.body; - var documentElement = document.documentElement; - if (!body || !documentElement) { - throw new Error("Unable to get document size"); - } - var width = Math.max(Math.max(body.scrollWidth, documentElement.scrollWidth), Math.max(body.offsetWidth, documentElement.offsetWidth), Math.max(body.clientWidth, documentElement.clientWidth)); - var height = Math.max(Math.max(body.scrollHeight, documentElement.scrollHeight), Math.max(body.offsetHeight, documentElement.offsetHeight), Math.max(body.clientHeight, documentElement.clientHeight)); - return new Bounds(0, 0, width, height); - }; - - /* - * css-line-break 2.1.0 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var toCodePoints$1 = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint$1 = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var chars$2 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$2 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$2 = 0; i$2 < chars$2.length; i$2++) { - lookup$2[chars$2.charCodeAt(i$2)] = i$2; - } - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1$1 = 0; i$1$1 < chars$1$1.length; i$1$1++) { - lookup$1$1[chars$1$1.charCodeAt(i$1$1)] = i$1$1; - } - var decode$1 = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? 
new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1$1[base64.charCodeAt(i)]; - encoded2 = lookup$1$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2$1 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1$1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. - */ - var UTRIE2_INDEX_SHIFT$1 = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2$1 = UTRIE2_SHIFT_1$1 - UTRIE2_SHIFT_2$1; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET$1 = 0x10000 >> UTRIE2_SHIFT_2$1; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_2$1; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK$1 = UTRIE2_DATA_BLOCK_LENGTH$1 - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH$1 = 0x400 >> UTRIE2_SHIFT_2$1; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH$1 = UTRIE2_LSCP_INDEX_2_OFFSET$1 + UTRIE2_LSCP_INDEX_2_LENGTH$1; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 = UTRIE2_INDEX_2_BMP_LENGTH$1; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH$1 = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET$1 = UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 + UTRIE2_UTF8_2B_INDEX_2_LENGTH$1; - /** - * Number of index-1 entries for the BMP. 
32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 = 0x10000 >> UTRIE2_SHIFT_1$1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_1_2$1; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK$1 = UTRIE2_INDEX_2_BLOCK_LENGTH$1 - 1; - var slice16$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64$1 = function (base64, _byteLength) { - var buffer = decode$1(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array$1(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array$1(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16$1(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? slice16$1(view16, (headerLength + view32[4]) / 2) - : slice32$1(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie$1(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie$1 = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2$1]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET$1 + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2$1)]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET$1 - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 + (codePoint >> UTRIE2_SHIFT_1$1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2$1) & UTRIE2_INDEX_2_MASK$1; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. 
- return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$3 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$3 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$3 = 0; i$3 < chars$3.length; i$3++) { - lookup$3[chars$3.charCodeAt(i$3)] = i$3; - } - - var base64$1 = 'KwAAAAAAAAAACA4AUD0AADAgAAACAAAAAAAIABAAGABAAEgAUABYAGAAaABgAGgAYgBqAF8AZwBgAGgAcQB5AHUAfQCFAI0AlQCdAKIAqgCyALoAYABoAGAAaABgAGgAwgDKAGAAaADGAM4A0wDbAOEA6QDxAPkAAQEJAQ8BFwF1AH0AHAEkASwBNAE6AUIBQQFJAVEBWQFhAWgBcAF4ATAAgAGGAY4BlQGXAZ8BpwGvAbUBvQHFAc0B0wHbAeMB6wHxAfkBAQIJAvEBEQIZAiECKQIxAjgCQAJGAk4CVgJeAmQCbAJ0AnwCgQKJApECmQKgAqgCsAK4ArwCxAIwAMwC0wLbAjAA4wLrAvMC+AIAAwcDDwMwABcDHQMlAy0DNQN1AD0DQQNJA0kDSQNRA1EDVwNZA1kDdQB1AGEDdQBpA20DdQN1AHsDdQCBA4kDkQN1AHUAmQOhA3UAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AKYDrgN1AHUAtgO+A8YDzgPWAxcD3gPjA+sD8wN1AHUA+wMDBAkEdQANBBUEHQQlBCoEFwMyBDgEYABABBcDSARQBFgEYARoBDAAcAQzAXgEgASIBJAEdQCXBHUAnwSnBK4EtgS6BMIEyAR1AHUAdQB1AHUAdQCVANAEYABgAGAAYABgAGAAYABgANgEYADcBOQEYADsBPQE/AQEBQwFFAUcBSQFLAU0BWQEPAVEBUsFUwVbBWAAYgVgAGoFcgV6BYIFigWRBWAAmQWfBaYFYABgAGAAYABgAKoFYACxBbAFuQW6BcEFwQXHBcEFwQXPBdMF2wXjBeoF8gX6BQIGCgYSBhoGIgYqBjIGOgZgAD4GRgZMBmAAUwZaBmAAYABgAGAAYABgAGAAYABgAGAAYABgAGIGYABpBnAGYABgAGAAYABgAGAAYABgAGAAYAB4Bn8GhQZgAGAAYAB1AHcDFQSLBmAAYABgAJMGdQA9A3UAmwajBqsGqwaVALMGuwbDBjAAywbSBtIG1QbSBtIG0gbSBtIG0gbdBuMG6wbzBvsGAwcLBxMHAwcbByMHJwcsBywHMQcsB9IGOAdAB0gHTgfSBkgHVgfSBtIG0gbSBtIG0gbSBtIG0gbSBiwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdgAGAALAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsByw
HLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdbB2MHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB2kH0gZwB64EdQB1AHUAdQB1AHUAdQB1AHUHfQdgAIUHjQd1AHUAlQedB2AAYAClB6sHYACzB7YHvgfGB3UAzgfWBzMB3gfmB1EB7gf1B/0HlQENAQUIDQh1ABUIHQglCBcDLQg1CD0IRQhNCEEDUwh1AHUAdQBbCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIcAh3CHoIMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIgggwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAALAcsBywHLAcsBywHLAcsBywHLAcsB4oILAcsB44I0gaWCJ4Ipgh1AHUAqgiyCHUAdQB1AHUAdQB1AHUAdQB1AHUAtwh8AXUAvwh1AMUIyQjRCNkI4AjoCHUAdQB1AO4I9gj+CAYJDgkTCS0HGwkjCYIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiAAIAAAAFAAYABgAGIAXwBgAHEAdQBFAJUAogCyAKAAYABgAEIA4ABGANMA4QDxAMEBDwE1AFwBLAE6AQEBUQF4QkhCmEKoQrhCgAHIQsAB0MLAAcABwAHAAeDC6ABoAHDCwMMAAcABwAHAAdDDGMMAAcAB6MM4wwjDWMNow3jDaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAEjDqABWw6bDqABpg6gAaABoAHcDvwOPA+gAaABfA/8DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DpcPAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAA
cABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcAB9cPKwkyCToJMAB1AHUAdQBCCUoJTQl1AFUJXAljCWcJawkwADAAMAAwAHMJdQB2CX4JdQCECYoJjgmWCXUAngkwAGAAYABxAHUApgn3A64JtAl1ALkJdQDACTAAMAAwADAAdQB1AHUAdQB1AHUAdQB1AHUAowYNBMUIMAAwADAAMADICcsJ0wnZCRUE4QkwAOkJ8An4CTAAMAB1AAAKvwh1AAgKDwoXCh8KdQAwACcKLgp1ADYKqAmICT4KRgowADAAdQB1AE4KMAB1AFYKdQBeCnUAZQowADAAMAAwADAAMAAwADAAMAAVBHUAbQowADAAdQC5CXUKMAAwAHwBxAijBogEMgF9CoQKiASMCpQKmgqIBKIKqgquCogEDQG2Cr4KxgrLCjAAMADTCtsKCgHjCusK8Qr5CgELMAAwADAAMAB1AIsECQsRC3UANAEZCzAAMAAwADAAMAB1ACELKQswAHUANAExCzkLdQBBC0kLMABRC1kLMAAwADAAMAAwADAAdQBhCzAAMAAwAGAAYABpC3ELdwt/CzAAMACHC4sLkwubC58Lpwt1AK4Ltgt1APsDMAAwADAAMAAwADAAMAAwAL4LwwvLC9IL1wvdCzAAMADlC+kL8Qv5C/8LSQswADAAMAAwADAAMAAwADAAMAAHDDAAMAAwADAAMAAODBYMHgx1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1ACYMMAAwADAAdQB1AHUALgx1AHUAdQB1AHUAdQA2DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AD4MdQBGDHUAdQB1AHUAdQB1AEkMdQB1AHUAdQB1AFAMMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQBYDHUAdQB1AF8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUA+wMVBGcMMAAwAHwBbwx1AHcMfwyHDI8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAYABgAJcMMAAwADAAdQB1AJ8MlQClDDAAMACtDCwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB7UMLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AA0EMAC9DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAsBywHLAcsBywHLAcsBywHLQcwAMEMyAwsBywHLAcsBywHLAcsBywHLAcsBywHzAwwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1ANQM2QzhDDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMABgAGAAYABgAGAAYABgAOkMYADxDGAA+AwADQYNYABhCWAAYAAODTAAMAAwADAAFg1gAGAAHg37AzAAMAAwADAAYABgACYNYAAsDTQNPA1gAEMNPg1LDWAAYABgAGAAYABgAGAAYABgAGAAUg1aDYsGVglhDV0NcQBnDW0NdQ15DWAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAlQCBDZUAiA2PDZcNMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAnw2nDTAAMAAwADAAMAAwAHUArw23DTAAMAAwADAAMAAwADAAMAAwADAAMAB1AL8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQDHDTAAYABgAM8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA1w11ANwNMAAwAD0B5A0wADAAMAAwADAAMADsDfQN/A0EDgwOFA4wABsOMAAwADAAMAAwADAAMAAwANIG0gbSBtIG0gbSBtIG0gYjDigOwQUuDsEFMw7SBjoO0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGQg5KDlIOVg7SBtIGXg5lDm0OdQ7SBtIGfQ6EDooOjQ6UDtIGmg6hDtIG0gaoDqwO0ga0DrwO0gZgAGAAYADEDmAAYAAkBtIGzA5gANIOYADaDokO0gbSBt8O5w7SBu8O0gb1DvwO0gZgAGAAxA7SBtIG0gbSBtIGYABgAGAAYAAED2AAsAUMD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHJA8sBywHLAcsBywHLAccDywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB
ywPLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAc0D9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHPA/SBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gYUD0QPlQCVAJUAMAAwADAAMACVAJUAlQCVAJUAlQCVAEwPMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA//8EAAQABAAEAAQABAAEAAQABAANAAMAAQABAAIABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQACgATABcAHgAbABoAHgAXABYAEgAeABsAGAAPABgAHABLAEsASwBLAEsASwBLAEsASwBLABgAGAAeAB4AHgATAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABYAGwASAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWAA0AEQAeAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAFAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJABYAGgAbABsAGwAeAB0AHQAeAE8AFwAeAA0AHgAeABoAGwBPAE8ADgBQAB0AHQAdAE8ATwAXAE8ATwBPABYAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAFAATwBAAE8ATwBPAEAATwBQAFAATwBQAB4AHgAeAB4AHgAeAB0AHQAdAB0AHgAdAB4ADgBQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgBQAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAkACQAJAAkACQAJAAkABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAFAAHgAeAB4AKwArAFAAUABQAFAAGABQACsAKwArACsAHgAeAFAAHgBQAFAAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUAAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAYAA0AKwArAB4AHgAbACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAB4ABAAEAB4ABAAEABMABAArACsAKwArACsAKwArACsAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAKwArACsAKwBWAFYAVgBWAB4A
HgArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AGgAaABoAGAAYAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQAEwAEACsAEwATAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABLAEsASwBLAEsASwBLAEsASwBLABoAGQAZAB4AUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABMAUAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABABQAFAABAAEAB4ABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUAAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAFAABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQAUABQAB4AHgAYABMAUAArACsABAAbABsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAFAABAAEAAQABAAEAFAABAAEAAQAUAAEAAQABAAEAAQAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArACsAHgArAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAUAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEAA0ADQBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUAArACsAKwBQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABABQACsAKwArACsAKwArACsAKwAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUAAaABoAUABQAFAAUABQAEwAHgAbAFAAHgAEACsAKwAEAAQABAArAFAAUABQAFAAUABQACsAKwArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQACsAUABQACsAKwAEACsABAAEAAQABAAEACsAKwArACsABAAEACsAKwAEAAQABAArACsAKwAEACsAKwArACsAKwArACsAUABQAFAAUAArAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLAAQABABQAFAAUAAEAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAArACsAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AGwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAKwArACsAKwArAAQABAAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAAQAUAArAFAAUABQAFAAUABQACsAKwArAFAAUABQACsAUABQAFAAUAArACsAKwBQAFAAKwBQACsAUABQACsAKwArAFAAUAArACsAKwBQAFAAUAArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArAAQABAAEAAQABAArACsAKwAEAAQABAArAAQABAAEAAQAKwArAFAAKwArACsAKwArACsABAArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAHgAeAB4AHgAeAB4AGwAeACsAKwArACsAKwAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUAB
QAFAAUABQAFAAKwArACsAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAUABQAFAAKwArACsAKwArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwAOAFAAUABQAFAAUABQAFAAHgBQAAQABAAEAA4AUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAKwArAAQAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAKwArACsAKwArACsAUAArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAFAABAAEAAQABAAEAAQABAArAAQABAAEACsABAAEAAQABABQAB4AKwArACsAKwBQAFAAUAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQABoAUABQAFAAUABQAFAAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQACsAUAArACsAUABQAFAAUABQAFAAUAArACsAKwAEACsAKwArACsABAAEAAQABAAEAAQAKwAEACsABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArAAQABAAeACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAXAAqACoAKgAqACoAKgAqACsAKwArACsAGwBcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAeAEsASwBLAEsASwBLAEsASwBLAEsADQANACsAKwArACsAKwBcAFwAKwBcACsAXABcAFwAXABcACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAXAArAFwAXABcAFwAXABcAFwAXABcAFwAKgBcAFwAKgAqACoAKgAqACoAKgAqACoAXAArACsAXABcAFwAXABcACsAXAArACoAKgAqACoAKgAqACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwBcAFwAXABcAFAADgAOAA4ADgAeAA4ADgAJAA4ADgANAAkAEwATABMAEwATAAkAHgATAB4AHgAeAAQABAAeAB4AHgAeAB4AHgBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQAFAADQAEAB4ABAAeAAQAFgARABYAEQAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAAQABAAEAAQADQAEAAQAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAA0ADQAeAB4AHgAeAB4AHgAEAB4AHgAeAB4AHgAeACsAHgAeAA4ADgANAA4AHgAeAB4AHgAeAAkACQArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgBcAEsASwBLAEsASwBLAEsASwBLAEsADQANAB4AHgAeAB4AXABcAFwAXABcAFwAKgAqACoAKgBcAFwAXABcACoAKgAqAFwAKgAqACoAXABcACoAKgAqACoAKgAqACoAXABcAFwAKgAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqAFwAKgBLAEsASwBLAEsASwBLAEsASwBLACoAKgAqACoAKgAqAFAAUABQAFAAUABQACsAUAArACsAKwArACsAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAKwBQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsABAAEAAQAHgANAB4AHgAeAB4AHgAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArAC
sAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUAArACsADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWABEAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQANAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAANAA0AKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUAArAAQABAArACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqAA0ADQAVAFwADQAeAA0AGwBcACoAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwAeAB4AEwATAA0ADQAOAB4AEwATAB4ABAAEAAQACQArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAHgArACsAKwATABMASwBLAEsASwBLAEsASwBLAEsASwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAXABcAFwAXABcACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAXAArACsAKwAqACoAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsAHgAeAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKwAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKwArAAQASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACoAKgAqACoAKgAqACoAXAAqACoAKgAqACoAKgArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABABQAFAAUABQAFAAUABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwANAA0AHgANAA0ADQANAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwAeAB4AHgAeAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArAA0ADQANAA0ADQBLAEsASwBLAEsASwBLAEsASwBLACsAKwArAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUAAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAAQAUABQAFAAUABQAFAABABQAFAABAAEAAQAUAArACsAKwArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQACsAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AH
gAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAFAAUABQACsAHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQACsAKwAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQACsAHgAeAB4AHgAeAB4AHgAOAB4AKwANAA0ADQANAA0ADQANAAkADQANAA0ACAAEAAsABAAEAA0ACQANAA0ADAAdAB0AHgAXABcAFgAXABcAFwAWABcAHQAdAB4AHgAUABQAFAANAAEAAQAEAAQABAAEAAQACQAaABoAGgAaABoAGgAaABoAHgAXABcAHQAVABUAHgAeAB4AHgAeAB4AGAAWABEAFQAVABUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ADQAeAA0ADQANAA0AHgANAA0ADQAHAB4AHgAeAB4AKwAEAAQABAAEAAQABAAEAAQABAAEAFAAUAArACsATwBQAFAAUABQAFAAHgAeAB4AFgARAE8AUABPAE8ATwBPAFAAUABQAFAAUAAeAB4AHgAWABEAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArABsAGwAbABsAGwAbABsAGgAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGgAbABsAGwAbABoAGwAbABoAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAHgAeAFAAGgAeAB0AHgBQAB4AGgAeAB4AHgAeAB4AHgAeAB4AHgBPAB4AUAAbAB4AHgBQAFAAUABQAFAAHgAeAB4AHQAdAB4AUAAeAFAAHgBQAB4AUABPAFAAUAAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgBQAFAAUABQAE8ATwBQAFAAUABQAFAATwBQAFAATwBQAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAUABQAFAATwBPAE8ATwBPAE8ATwBPAE8ATwBQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABPAB4AHgArACsAKwArAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHQAdAB4AHgAeAB0AHQAeAB4AHQAeAB4AHgAdAB4AHQAbABsAHgAdAB4AHgAeAB4AHQAeAB4AHQAdAB0AHQAeAB4AHQAeAB0AHgAdAB0AHQAdAB0AHQAeAB0AHgAeAB4AHgAeAB0AHQAdAB0AHgAeAB4AHgAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHgAeAB0AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAeAB0AHQAdAB0AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAdAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAWABEAHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAWABEAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AHQAdAB0AHgAeAB0AHgAeAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlAB4AHQAdAB4AHgAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AJQAlAB0AHQAlAB4AJQAlACUAIAAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAdAB0AHQAeAB0AJQAdAB0AHgAdAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAdAB0AHQAdACUAHgAlACUAJQAdACUAJQAdAB0AHQAlACUAHQAdACUAHQAdACUAJQAlAB4AHQAe
AB4AHgAeAB0AHQAlAB0AHQAdAB0AHQAdACUAJQAlACUAJQAdACUAJQAgACUAHQAdACUAJQAlACUAJQAlACUAJQAeAB4AHgAlACUAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AFwAXABcAFwAXABcAHgATABMAJQAeAB4AHgAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARABYAEQAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAEAAQABAAeAB4AKwArACsAKwArABMADQANAA0AUAATAA0AUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUAANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAA0ADQANAA0ADQANAA0ADQAeAA0AFgANAB4AHgAXABcAHgAeABcAFwAWABEAFgARABYAEQAWABEADQANAA0ADQATAFAADQANAB4ADQANAB4AHgAeAB4AHgAMAAwADQANAA0AHgANAA0AFgANAA0ADQANAA0ADQANAA0AHgANAB4ADQANAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArAA0AEQARACUAJQBHAFcAVwAWABEAFgARABYAEQAWABEAFgARACUAJQAWABEAFgARABYAEQAWABEAFQAWABEAEQAlAFcAVwBXAFcAVwBXAFcAVwBXAAQABAAEAAQABAAEACUAVwBXAFcAVwA2ACUAJQBXAFcAVwBHAEcAJQAlACUAKwBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBRAFcAUQBXAFEAVwBXAFcAVwBXAFcAUQBXAFcAVwBXAFcAVwBRAFEAKwArAAQABAAVABUARwBHAFcAFQBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBRAFcAVwBXAFcAVwBXAFEAUQBXAFcAVwBXABUAUQBHAEcAVwArACsAKwArACsAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwAlACUAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACsAKwArACsAKwArACsAKwArACsAKwArAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBPAE8ATwBPAE8ATwBPAE8AJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADQATAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABLAEsASwBLAEsASwBLAEsASwBLAFAAUAArACs
AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAABAAEAAQABAAeAAQABAAEAAQABAAEAAQABAAEAAQAHgBQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAeAA0ADQANAA0ADQArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAAQAUABQAFAABABQAFAAUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAeAB4AHgAeAAQAKwArACsAUABQAFAAUABQAFAAHgAeABoAHgArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADgAOABMAEwArACsAKwArACsAKwArACsABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwANAA0ASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUAAeAB4AHgBQAA4AUABQAAQAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArAB4AWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYACsAKwArAAQAHgAeAB4AHgAeAB4ADQANAA0AHgAeAB4AHgArAFAASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArAB4AHgBcAFwAXABcAFwAKgBcAFwAXABcAFwAXABcAFwAXABcAEsASwBLAEsASwBLAEsASwBLAEsAXABcAFwAXABcACsAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAFAAUABQAAQAUABQAFAAUABQAFAAUABQAAQABAArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAHgANAA0ADQBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAXAAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAKgAqACoAXABcACoAKgBcAFwAXABcAFwAKgAqAFwAKgBcACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcACoAKgBQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAA0ADQBQAFAAUAAEAAQAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQADQAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAVABVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBUAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVACsAKwArACsAKwArACsAKwArACsAKwArAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAKwArACsAKwBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAKwArACsAKwAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAKwArACsAKwArAFYABABWAFYAVgBWAFYAVgBWAFYAVgBWAB4AVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgArAFYAVgBWAFYAVgArAFYAKwBWAFYAKwBWAFYAKwBWAFYAVgBWAFYAVgBWAFYAVgBWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAEQAWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUA
BQAFAAUABQAFAAUABQAFAAUAAaAB4AKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAGAARABEAGAAYABMAEwAWABEAFAArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACUAJQAlACUAJQAWABEAFgARABYAEQAWABEAFgARABYAEQAlACUAFgARACUAJQAlACUAJQAlACUAEQAlABEAKwAVABUAEwATACUAFgARABYAEQAWABEAJQAlACUAJQAlACUAJQAlACsAJQAbABoAJQArACsAKwArAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAcAKwATACUAJQAbABoAJQAlABYAEQAlACUAEQAlABEAJQBXAFcAVwBXAFcAVwBXAFcAVwBXABUAFQAlACUAJQATACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXABYAJQARACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAWACUAEQAlABYAEQARABYAEQARABUAVwBRAFEAUQBRAFEAUQBRAFEAUQBRAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcARwArACsAVwBXAFcAVwBXAFcAKwArAFcAVwBXAFcAVwBXACsAKwBXAFcAVwBXAFcAVwArACsAVwBXAFcAKwArACsAGgAbACUAJQAlABsAGwArAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAAQAB0AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsADQANAA0AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAA0AUABQAFAAUAArACsAKwArAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwArAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwBQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAUABQAFAAUABQAAQABAAEACsABAAEACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAKwBQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArA
A0ADQANAA0ADQANAA0ADQAeACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAArACsAKwArAFAAUABQAFAAUAANAA0ADQANAA0ADQAUACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsADQANAA0ADQANAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArAAQABAANACsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAB4AHgAeAB4AHgArACsAKwArACsAKwAEAAQABAAEAAQABAAEAA0ADQAeAB4AHgAeAB4AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsASwBLAEsASwBLAEsASwBLAEsASwANAA0ADQANAFAABAAEAFAAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAeAA4AUAArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAADQANAB4ADQAEAAQABAAEAB4ABAAEAEsASwBLAEsASwBLAEsASwBLAEsAUAAOAFAADQANAA0AKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAANAA0AHgANAA0AHgAEACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAA0AKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsABAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsABAAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAUAArACsAKwArACsAKwAEACsAKwArACsAKwBQAFAAUABQAFAABAAEACsAKwAEAAQABAAEAAQABAAEACsAKwArAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAAQABABQAFAAUABQAA0ADQANAA0AHgBLAEsASwBLAEsASwBLAEsASwBLAA0ADQArAB4ABABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUAAeAFAAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABAAEAAQADgANAA0AEwATAB4AHgAeAA0ADQANAA0ADQANAA0ADQANAA0ADQANAA0ADQANAFAAUABQAFAABAAEACsAKwAEAA0ADQAeAFAAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKwArACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBcAFwADQANAA0AKgBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeACsA
KwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAKwArAFAAKwArAFAAUABQAFAAUABQAFAAUAArAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQAKwAEAAQAKwArAAQABAAEAAQAUAAEAFAABAAEAA0ADQANACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABABQAA4AUAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAFAABAAEAAQABAAOAB4ADQANAA0ADQAOAB4ABAArACsAKwArACsAKwArACsAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAA0ADQANAFAADgAOAA4ADQANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAAQABAAEAFAADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAOABMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAArACsAKwAEACsABAAEACsABAAEAAQABAAEAAQABABQAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAaABoAGgAaAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABIAEgAQwBDAEMAUABQAFAAUABDAFAAUABQAEgAQwBIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABDAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAJAAkACQAJAAkACQAJABYAEQArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwANAA0AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAANACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAA0ADQANAB4AHgAeAB4AHgAeAFAAUABQAFAADQAeACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAA0AHgAeACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAARwBHABUARwAJACsAKwArACsAKwArACsAKwArACsAKwAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFcAVwB
XAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUQBRAFEAKwArACsAKwArACsAKwArACsAKwArACsAKwBRAFEAUQBRACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAHgAEAAQADQAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQABAAEAAQABAAeAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQAHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAKwArAFAAKwArAFAAUAArACsAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUAArAFAAUABQAFAAUABQAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAHgAeAFAAUABQAFAAUAArAFAAKwArACsAUABQAFAAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeACsAKwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4ABAAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAHgAeAA0ADQANAA0AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArAAQABAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwBQAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArABsAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAB4AHgAeAB4ABAAEAAQABAAEAAQABABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArABYAFgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAGgBQAFAAUAAaAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUAArACsAKwArACsAKwBQACsAKwArACsAUAArAFAAKwBQACsAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUAArAF
AAKwBQACsAUAArAFAAUAArAFAAKwArAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAKwBQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8AJQAlACUAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB4AHgAeACUAJQAlAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAlACUAJQAlACUAHgAlACUAJQAlACUAIAAgACAAJQAlACAAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACEAIQAhACEAIQAlACUAIAAgACUAJQAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAIAAlACUAJQAlACAAIAAgACUAIAAgACAAJQAlACUAJQAlACUAJQAgACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAlAB4AJQAeACUAJQAlACUAJQAgACUAJQAlACUAHgAlAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACAAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABcAFwAXABUAFQAVAB4AHgAeAB4AJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAgACUAJQAgACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAIAAgACUAJQAgACAAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACAAIAAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACAAIAAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAK
wArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAA=='; - - var LETTER_NUMBER_MODIFIER = 50; - // Non-tailorable Line Breaking Classes - var BK = 1; // Cause a line break (after) - var CR$1 = 2; // Cause a line break (after), except between CR and LF - var LF$1 = 3; // Cause a line break (after) - var CM = 4; // Prohibit a line break between the character and the preceding character - var NL = 5; // Cause a line break (after) - var WJ = 7; // Prohibit line breaks before and after - var ZW = 8; // Provide a break opportunity - var GL = 9; // Prohibit line breaks before and after - var SP = 10; // Enable indirect line breaks - var ZWJ$1 = 11; // Prohibit line breaks within joiner sequences - // Break Opportunities - var B2 = 12; // Provide a line break opportunity before and after the character - var BA = 13; // Generally provide a line break opportunity after the character - var BB = 14; // Generally provide a line break opportunity before the character - var HY = 15; // Provide a line break opportunity after the character, except in numeric context - var CB = 16; // Provide a line break opportunity contingent on additional information - // Characters Prohibiting Certain Breaks - var CL = 17; // Prohibit line breaks before - var CP = 18; // Prohibit line breaks before - var EX = 19; // Prohibit line breaks before - var IN = 20; // Allow only indirect line breaks between pairs - var NS = 21; // Allow only indirect line breaks before - var OP = 22; // Prohibit line breaks after - var QU = 23; // Act like they are both opening and closing - // Numeric Context - var IS = 24; // Prevent breaks after any and before numeric - var NU = 25; // Form numeric expressions for line breaking purposes - var PO = 26; // Do not break following a numeric expression - var PR = 27; // Do not break in front of a numeric expression - var SY = 28; // Prevent a break before; and allow a break after - // Other Characters - var AI = 29; // Act like AL when the resolvedEAW is N; otherwise; act as ID - var AL = 30; // Are alphabetic characters or symbols that are used with alphabetic characters - var CJ = 31; // Treat as NS or ID for strict or normal breaking. - var EB = 32; // Do not break from following Emoji Modifier - var EM = 33; // Do not break from preceding Emoji Base - var H2 = 34; // Form Korean syllable blocks - var H3 = 35; // Form Korean syllable blocks - var HL = 36; // Do not break around a following hyphen; otherwise act as Alphabetic - var ID = 37; // Break before or after; except in some numeric context - var JL = 38; // Form Korean syllable blocks - var JV = 39; // Form Korean syllable blocks - var JT = 40; // Form Korean syllable blocks - var RI$1 = 41; // Keep pairs together. 
For pairs; break before and after other classes - var SA = 42; // Provide a line break opportunity contingent on additional, language-specific context analysis - var XX = 43; // Have as yet unknown line breaking behavior or unassigned code positions - var ea_OP = [0x2329, 0xff08]; - var BREAK_MANDATORY = '!'; - var BREAK_NOT_ALLOWED$1 = '×'; - var BREAK_ALLOWED$1 = '÷'; - var UnicodeTrie$1 = createTrieFromBase64$1(base64$1); - var ALPHABETICS = [AL, HL]; - var HARD_LINE_BREAKS = [BK, CR$1, LF$1, NL]; - var SPACE$1 = [SP, ZW]; - var PREFIX_POSTFIX = [PR, PO]; - var LINE_BREAKS = HARD_LINE_BREAKS.concat(SPACE$1); - var KOREAN_SYLLABLE_BLOCK = [JL, JV, JT, H2, H3]; - var HYPHEN = [HY, BA]; - var codePointsToCharacterClasses = function (codePoints, lineBreak) { - if (lineBreak === void 0) { lineBreak = 'strict'; } - var types = []; - var indices = []; - var categories = []; - codePoints.forEach(function (codePoint, index) { - var classType = UnicodeTrie$1.get(codePoint); - if (classType > LETTER_NUMBER_MODIFIER) { - categories.push(true); - classType -= LETTER_NUMBER_MODIFIER; - } - else { - categories.push(false); - } - if (['normal', 'auto', 'loose'].indexOf(lineBreak) !== -1) { - // U+2010, – U+2013, 〜 U+301C, ゠ U+30A0 - if ([0x2010, 0x2013, 0x301c, 0x30a0].indexOf(codePoint) !== -1) { - indices.push(index); - return types.push(CB); - } - } - if (classType === CM || classType === ZWJ$1) { - // LB10 Treat any remaining combining mark or ZWJ as AL. - if (index === 0) { - indices.push(index); - return types.push(AL); - } - // LB9 Do not break a combining character sequence; treat it as if it has the line breaking class of - // the base character in all of the following rules. Treat ZWJ as if it were CM. - var prev = types[index - 1]; - if (LINE_BREAKS.indexOf(prev) === -1) { - indices.push(indices[index - 1]); - return types.push(prev); - } - indices.push(index); - return types.push(AL); - } - indices.push(index); - if (classType === CJ) { - return types.push(lineBreak === 'strict' ? NS : ID); - } - if (classType === SA) { - return types.push(AL); - } - if (classType === AI) { - return types.push(AL); - } - // For supplementary characters, a useful default is to treat characters in the range 10000..1FFFD as AL - // and characters in the ranges 20000..2FFFD and 30000..3FFFD as ID, until the implementation can be revised - // to take into account the actual line breaking properties for these characters. - if (classType === XX) { - if ((codePoint >= 0x20000 && codePoint <= 0x2fffd) || (codePoint >= 0x30000 && codePoint <= 0x3fffd)) { - return types.push(ID); - } - else { - return types.push(AL); - } - } - types.push(classType); - }); - return [indices, types, categories]; - }; - var isAdjacentWithSpaceIgnored = function (a, b, currentIndex, classTypes) { - var current = classTypes[currentIndex]; - if (Array.isArray(a) ? a.indexOf(current) !== -1 : a === current) { - var i = currentIndex; - while (i <= classTypes.length) { - i++; - var next = classTypes[i]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (current === SP) { - var i = currentIndex; - while (i > 0) { - i--; - var prev = classTypes[i]; - if (Array.isArray(a) ? 
a.indexOf(prev) !== -1 : a === prev) { - var n = currentIndex; - while (n <= classTypes.length) { - n++; - var next = classTypes[n]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (prev !== SP) { - break; - } - } - } - return false; - }; - var previousNonSpaceClassType = function (currentIndex, classTypes) { - var i = currentIndex; - while (i >= 0) { - var type = classTypes[i]; - if (type === SP) { - i--; - } - else { - return type; - } - } - return 0; - }; - var _lineBreakAtIndex = function (codePoints, classTypes, indicies, index, forbiddenBreaks) { - if (indicies[index] === 0) { - return BREAK_NOT_ALLOWED$1; - } - var currentIndex = index - 1; - if (Array.isArray(forbiddenBreaks) && forbiddenBreaks[currentIndex] === true) { - return BREAK_NOT_ALLOWED$1; - } - var beforeIndex = currentIndex - 1; - var afterIndex = currentIndex + 1; - var current = classTypes[currentIndex]; - // LB4 Always break after hard line breaks. - // LB5 Treat CR followed by LF, as well as CR, LF, and NL as hard line breaks. - var before = beforeIndex >= 0 ? classTypes[beforeIndex] : 0; - var next = classTypes[afterIndex]; - if (current === CR$1 && next === LF$1) { - return BREAK_NOT_ALLOWED$1; - } - if (HARD_LINE_BREAKS.indexOf(current) !== -1) { - return BREAK_MANDATORY; - } - // LB6 Do not break before hard line breaks. - if (HARD_LINE_BREAKS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB7 Do not break before spaces or zero width space. - if (SPACE$1.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB8 Break before any character following a zero-width space, even if one or more spaces intervene. - if (previousNonSpaceClassType(currentIndex, classTypes) === ZW) { - return BREAK_ALLOWED$1; - } - // LB8a Do not break after a zero width joiner. - if (UnicodeTrie$1.get(codePoints[currentIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // zwj emojis - if ((current === EB || current === EM) && UnicodeTrie$1.get(codePoints[afterIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // LB11 Do not break before or after Word joiner and related characters. - if (current === WJ || next === WJ) { - return BREAK_NOT_ALLOWED$1; - } - // LB12 Do not break after NBSP and related characters. - if (current === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB12a Do not break before NBSP and related characters, except after spaces and hyphens. - if ([SP, BA, HY].indexOf(current) === -1 && next === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB13 Do not break before ‘]’ or ‘!’ or ‘;’ or ‘/’, even after spaces. - if ([CL, CP, EX, IS, SY].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB14 Do not break after ‘[’, even after spaces. - if (previousNonSpaceClassType(currentIndex, classTypes) === OP) { - return BREAK_NOT_ALLOWED$1; - } - // LB15 Do not break within ‘”[’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(QU, OP, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB16 Do not break between closing punctuation and a nonstarter (lb=NS), even with intervening spaces. - if (isAdjacentWithSpaceIgnored([CL, CP], NS, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB17 Do not break within ‘——’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(B2, B2, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB18 Break after spaces. - if (current === SP) { - return BREAK_ALLOWED$1; - } - // LB19 Do not break before or after quotation marks, such as ‘ ” ’. 
- if (current === QU || next === QU) { - return BREAK_NOT_ALLOWED$1; - } - // LB20 Break before and after unresolved CB. - if (next === CB || current === CB) { - return BREAK_ALLOWED$1; - } - // LB21 Do not break before hyphen-minus, other hyphens, fixed-width spaces, small kana, and other non-starters, or after acute accents. - if ([BA, HY, NS].indexOf(next) !== -1 || current === BB) { - return BREAK_NOT_ALLOWED$1; - } - // LB21a Don't break after Hebrew + Hyphen. - if (before === HL && HYPHEN.indexOf(current) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB21b Don’t break between Solidus and Hebrew letters. - if (current === SY && next === HL) { - return BREAK_NOT_ALLOWED$1; - } - // LB22 Do not break before ellipsis. - if (next === IN) { - return BREAK_NOT_ALLOWED$1; - } - // LB23 Do not break between digits and letters. - if ((ALPHABETICS.indexOf(next) !== -1 && current === NU) || (ALPHABETICS.indexOf(current) !== -1 && next === NU)) { - return BREAK_NOT_ALLOWED$1; - } - // LB23a Do not break between numeric prefixes and ideographs, or between ideographs and numeric postfixes. - if ((current === PR && [ID, EB, EM].indexOf(next) !== -1) || - ([ID, EB, EM].indexOf(current) !== -1 && next === PO)) { - return BREAK_NOT_ALLOWED$1; - } - // LB24 Do not break between numeric prefix/postfix and letters, or between letters and prefix/postfix. - if ((ALPHABETICS.indexOf(current) !== -1 && PREFIX_POSTFIX.indexOf(next) !== -1) || - (PREFIX_POSTFIX.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // LB25 Do not break between the following pairs of classes relevant to numbers: - if ( - // (PR | PO) × ( OP | HY )? NU - ([PR, PO].indexOf(current) !== -1 && - (next === NU || ([OP, HY].indexOf(next) !== -1 && classTypes[afterIndex + 1] === NU))) || - // ( OP | HY ) × NU - ([OP, HY].indexOf(current) !== -1 && next === NU) || - // NU × (NU | SY | IS) - (current === NU && [NU, SY, IS].indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // NU (NU | SY | IS)* × (NU | SY | IS | CL | CP) - if ([NU, SY, IS, CL, CP].indexOf(next) !== -1) { - var prevIndex = currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // NU (NU | SY | IS)* (CL | CP)? × (PO | PR)) - if ([PR, PO].indexOf(next) !== -1) { - var prevIndex = [CL, CP].indexOf(current) !== -1 ? beforeIndex : currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // LB26 Do not break a Korean syllable. - if ((JL === current && [JL, JV, H2, H3].indexOf(next) !== -1) || - ([JV, H2].indexOf(current) !== -1 && [JV, JT].indexOf(next) !== -1) || - ([JT, H3].indexOf(current) !== -1 && next === JT)) { - return BREAK_NOT_ALLOWED$1; - } - // LB27 Treat a Korean Syllable Block the same as ID. - if ((KOREAN_SYLLABLE_BLOCK.indexOf(current) !== -1 && [IN, PO].indexOf(next) !== -1) || - (KOREAN_SYLLABLE_BLOCK.indexOf(next) !== -1 && current === PR)) { - return BREAK_NOT_ALLOWED$1; - } - // LB28 Do not break between alphabetics (“at”). - if (ALPHABETICS.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB29 Do not break between numeric punctuation and alphabetics (“e.g.”). 
- if (current === IS && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB30 Do not break between letters, numbers, or ordinary symbols and opening or closing parentheses. - if ((ALPHABETICS.concat(NU).indexOf(current) !== -1 && - next === OP && - ea_OP.indexOf(codePoints[afterIndex]) === -1) || - (ALPHABETICS.concat(NU).indexOf(next) !== -1 && current === CP)) { - return BREAK_NOT_ALLOWED$1; - } - // LB30a Break between two regional indicator symbols if and only if there are an even number of regional - // indicators preceding the position of the break. - if (current === RI$1 && next === RI$1) { - var i = indicies[currentIndex]; - var count = 1; - while (i > 0) { - i--; - if (classTypes[i] === RI$1) { - count++; - } - else { - break; - } - } - if (count % 2 !== 0) { - return BREAK_NOT_ALLOWED$1; - } - } - // LB30b Do not break between an emoji base and an emoji modifier. - if (current === EB && next === EM) { - return BREAK_NOT_ALLOWED$1; - } - return BREAK_ALLOWED$1; - }; - var cssFormattedClasses = function (codePoints, options) { - if (!options) { - options = { lineBreak: 'normal', wordBreak: 'normal' }; - } - var _a = codePointsToCharacterClasses(codePoints, options.lineBreak), indicies = _a[0], classTypes = _a[1], isLetterNumber = _a[2]; - if (options.wordBreak === 'break-all' || options.wordBreak === 'break-word') { - classTypes = classTypes.map(function (type) { return ([NU, AL, SA].indexOf(type) !== -1 ? ID : type); }); - } - var forbiddenBreakpoints = options.wordBreak === 'keep-all' - ? isLetterNumber.map(function (letterNumber, i) { - return letterNumber && codePoints[i] >= 0x4e00 && codePoints[i] <= 0x9fff; - }) - : undefined; - return [indicies, classTypes, forbiddenBreakpoints]; - }; - var Break = /** @class */ (function () { - function Break(codePoints, lineBreak, start, end) { - this.codePoints = codePoints; - this.required = lineBreak === BREAK_MANDATORY; - this.start = start; - this.end = end; - } - Break.prototype.slice = function () { - return fromCodePoint$1.apply(void 0, this.codePoints.slice(this.start, this.end)); - }; - return Break; - }()); - var LineBreaker = function (str, options) { - var codePoints = toCodePoints$1(str); - var _a = cssFormattedClasses(codePoints, options), indicies = _a[0], classTypes = _a[1], forbiddenBreakpoints = _a[2]; - var length = codePoints.length; - var lastEnd = 0; - var nextIndex = 0; - return { - next: function () { - if (nextIndex >= length) { - return { done: true, value: null }; - } - var lineBreak = BREAK_NOT_ALLOWED$1; - while (nextIndex < length && - (lineBreak = _lineBreakAtIndex(codePoints, classTypes, indicies, ++nextIndex, forbiddenBreakpoints)) === - BREAK_NOT_ALLOWED$1) { } - if (lineBreak !== BREAK_NOT_ALLOWED$1 || nextIndex === length) { - var value = new Break(codePoints, lineBreak, lastEnd, nextIndex); - lastEnd = nextIndex; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - - // https://www.w3.org/TR/css-syntax-3 - var FLAG_UNRESTRICTED = 1 << 0; - var FLAG_ID = 1 << 1; - var FLAG_INTEGER = 1 << 2; - var FLAG_NUMBER = 1 << 3; - var LINE_FEED = 0x000a; - var SOLIDUS = 0x002f; - var REVERSE_SOLIDUS = 0x005c; - var CHARACTER_TABULATION = 0x0009; - var SPACE = 0x0020; - var QUOTATION_MARK = 0x0022; - var EQUALS_SIGN = 0x003d; - var NUMBER_SIGN = 0x0023; - var DOLLAR_SIGN = 0x0024; - var PERCENTAGE_SIGN = 0x0025; - var APOSTROPHE = 0x0027; - var LEFT_PARENTHESIS = 0x0028; - var RIGHT_PARENTHESIS = 0x0029; - var LOW_LINE = 0x005f; - var 
HYPHEN_MINUS = 0x002d; - var EXCLAMATION_MARK = 0x0021; - var LESS_THAN_SIGN = 0x003c; - var GREATER_THAN_SIGN = 0x003e; - var COMMERCIAL_AT = 0x0040; - var LEFT_SQUARE_BRACKET = 0x005b; - var RIGHT_SQUARE_BRACKET = 0x005d; - var CIRCUMFLEX_ACCENT = 0x003d; - var LEFT_CURLY_BRACKET = 0x007b; - var QUESTION_MARK = 0x003f; - var RIGHT_CURLY_BRACKET = 0x007d; - var VERTICAL_LINE = 0x007c; - var TILDE = 0x007e; - var CONTROL = 0x0080; - var REPLACEMENT_CHARACTER = 0xfffd; - var ASTERISK = 0x002a; - var PLUS_SIGN = 0x002b; - var COMMA = 0x002c; - var COLON = 0x003a; - var SEMICOLON = 0x003b; - var FULL_STOP = 0x002e; - var NULL = 0x0000; - var BACKSPACE = 0x0008; - var LINE_TABULATION = 0x000b; - var SHIFT_OUT = 0x000e; - var INFORMATION_SEPARATOR_ONE = 0x001f; - var DELETE = 0x007f; - var EOF = -1; - var ZERO = 0x0030; - var a = 0x0061; - var e = 0x0065; - var f = 0x0066; - var u = 0x0075; - var z = 0x007a; - var A = 0x0041; - var E = 0x0045; - var F = 0x0046; - var U = 0x0055; - var Z = 0x005a; - var isDigit = function (codePoint) { return codePoint >= ZERO && codePoint <= 0x0039; }; - var isSurrogateCodePoint = function (codePoint) { return codePoint >= 0xd800 && codePoint <= 0xdfff; }; - var isHex = function (codePoint) { - return isDigit(codePoint) || (codePoint >= A && codePoint <= F) || (codePoint >= a && codePoint <= f); - }; - var isLowerCaseLetter = function (codePoint) { return codePoint >= a && codePoint <= z; }; - var isUpperCaseLetter = function (codePoint) { return codePoint >= A && codePoint <= Z; }; - var isLetter = function (codePoint) { return isLowerCaseLetter(codePoint) || isUpperCaseLetter(codePoint); }; - var isNonASCIICodePoint = function (codePoint) { return codePoint >= CONTROL; }; - var isWhiteSpace = function (codePoint) { - return codePoint === LINE_FEED || codePoint === CHARACTER_TABULATION || codePoint === SPACE; - }; - var isNameStartCodePoint = function (codePoint) { - return isLetter(codePoint) || isNonASCIICodePoint(codePoint) || codePoint === LOW_LINE; - }; - var isNameCodePoint = function (codePoint) { - return isNameStartCodePoint(codePoint) || isDigit(codePoint) || codePoint === HYPHEN_MINUS; - }; - var isNonPrintableCodePoint = function (codePoint) { - return ((codePoint >= NULL && codePoint <= BACKSPACE) || - codePoint === LINE_TABULATION || - (codePoint >= SHIFT_OUT && codePoint <= INFORMATION_SEPARATOR_ONE) || - codePoint === DELETE); - }; - var isValidEscape = function (c1, c2) { - if (c1 !== REVERSE_SOLIDUS) { - return false; - } - return c2 !== LINE_FEED; - }; - var isIdentifierStart = function (c1, c2, c3) { - if (c1 === HYPHEN_MINUS) { - return isNameStartCodePoint(c2) || isValidEscape(c2, c3); - } - else if (isNameStartCodePoint(c1)) { - return true; - } - else if (c1 === REVERSE_SOLIDUS && isValidEscape(c1, c2)) { - return true; - } - return false; - }; - var isNumberStart = function (c1, c2, c3) { - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - if (isDigit(c2)) { - return true; - } - return c2 === FULL_STOP && isDigit(c3); - } - if (c1 === FULL_STOP) { - return isDigit(c2); - } - return isDigit(c1); - }; - var stringToNumber = function (codePoints) { - var c = 0; - var sign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - sign = -1; - } - c++; - } - var integers = []; - while (isDigit(codePoints[c])) { - integers.push(codePoints[c++]); - } - var int = integers.length ? 
parseInt(fromCodePoint$1.apply(void 0, integers), 10) : 0; - if (codePoints[c] === FULL_STOP) { - c++; - } - var fraction = []; - while (isDigit(codePoints[c])) { - fraction.push(codePoints[c++]); - } - var fracd = fraction.length; - var frac = fracd ? parseInt(fromCodePoint$1.apply(void 0, fraction), 10) : 0; - if (codePoints[c] === E || codePoints[c] === e) { - c++; - } - var expsign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - expsign = -1; - } - c++; - } - var exponent = []; - while (isDigit(codePoints[c])) { - exponent.push(codePoints[c++]); - } - var exp = exponent.length ? parseInt(fromCodePoint$1.apply(void 0, exponent), 10) : 0; - return sign * (int + frac * Math.pow(10, -fracd)) * Math.pow(10, expsign * exp); - }; - var LEFT_PARENTHESIS_TOKEN = { - type: 2 /* LEFT_PARENTHESIS_TOKEN */ - }; - var RIGHT_PARENTHESIS_TOKEN = { - type: 3 /* RIGHT_PARENTHESIS_TOKEN */ - }; - var COMMA_TOKEN = { type: 4 /* COMMA_TOKEN */ }; - var SUFFIX_MATCH_TOKEN = { type: 13 /* SUFFIX_MATCH_TOKEN */ }; - var PREFIX_MATCH_TOKEN = { type: 8 /* PREFIX_MATCH_TOKEN */ }; - var COLUMN_TOKEN = { type: 21 /* COLUMN_TOKEN */ }; - var DASH_MATCH_TOKEN = { type: 9 /* DASH_MATCH_TOKEN */ }; - var INCLUDE_MATCH_TOKEN = { type: 10 /* INCLUDE_MATCH_TOKEN */ }; - var LEFT_CURLY_BRACKET_TOKEN = { - type: 11 /* LEFT_CURLY_BRACKET_TOKEN */ - }; - var RIGHT_CURLY_BRACKET_TOKEN = { - type: 12 /* RIGHT_CURLY_BRACKET_TOKEN */ - }; - var SUBSTRING_MATCH_TOKEN = { type: 14 /* SUBSTRING_MATCH_TOKEN */ }; - var BAD_URL_TOKEN = { type: 23 /* BAD_URL_TOKEN */ }; - var BAD_STRING_TOKEN = { type: 1 /* BAD_STRING_TOKEN */ }; - var CDO_TOKEN = { type: 25 /* CDO_TOKEN */ }; - var CDC_TOKEN = { type: 24 /* CDC_TOKEN */ }; - var COLON_TOKEN = { type: 26 /* COLON_TOKEN */ }; - var SEMICOLON_TOKEN = { type: 27 /* SEMICOLON_TOKEN */ }; - var LEFT_SQUARE_BRACKET_TOKEN = { - type: 28 /* LEFT_SQUARE_BRACKET_TOKEN */ - }; - var RIGHT_SQUARE_BRACKET_TOKEN = { - type: 29 /* RIGHT_SQUARE_BRACKET_TOKEN */ - }; - var WHITESPACE_TOKEN = { type: 31 /* WHITESPACE_TOKEN */ }; - var EOF_TOKEN = { type: 32 /* EOF_TOKEN */ }; - var Tokenizer = /** @class */ (function () { - function Tokenizer() { - this._value = []; - } - Tokenizer.prototype.write = function (chunk) { - this._value = this._value.concat(toCodePoints$1(chunk)); - }; - Tokenizer.prototype.read = function () { - var tokens = []; - var token = this.consumeToken(); - while (token !== EOF_TOKEN) { - tokens.push(token); - token = this.consumeToken(); - } - return tokens; - }; - Tokenizer.prototype.consumeToken = function () { - var codePoint = this.consumeCodePoint(); - switch (codePoint) { - case QUOTATION_MARK: - return this.consumeStringToken(QUOTATION_MARK); - case NUMBER_SIGN: - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isNameCodePoint(c1) || isValidEscape(c2, c3)) { - var flags = isIdentifierStart(c1, c2, c3) ? 
FLAG_ID : FLAG_UNRESTRICTED; - var value = this.consumeName(); - return { type: 5 /* HASH_TOKEN */, value: value, flags: flags }; - } - break; - case DOLLAR_SIGN: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUFFIX_MATCH_TOKEN; - } - break; - case APOSTROPHE: - return this.consumeStringToken(APOSTROPHE); - case LEFT_PARENTHESIS: - return LEFT_PARENTHESIS_TOKEN; - case RIGHT_PARENTHESIS: - return RIGHT_PARENTHESIS_TOKEN; - case ASTERISK: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUBSTRING_MATCH_TOKEN; - } - break; - case PLUS_SIGN: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case COMMA: - return COMMA_TOKEN; - case HYPHEN_MINUS: - var e1 = codePoint; - var e2 = this.peekCodePoint(0); - var e3 = this.peekCodePoint(1); - if (isNumberStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isIdentifierStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - if (e2 === HYPHEN_MINUS && e3 === GREATER_THAN_SIGN) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDC_TOKEN; - } - break; - case FULL_STOP: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case SOLIDUS: - if (this.peekCodePoint(0) === ASTERISK) { - this.consumeCodePoint(); - while (true) { - var c = this.consumeCodePoint(); - if (c === ASTERISK) { - c = this.consumeCodePoint(); - if (c === SOLIDUS) { - return this.consumeToken(); - } - } - if (c === EOF) { - return this.consumeToken(); - } - } - } - break; - case COLON: - return COLON_TOKEN; - case SEMICOLON: - return SEMICOLON_TOKEN; - case LESS_THAN_SIGN: - if (this.peekCodePoint(0) === EXCLAMATION_MARK && - this.peekCodePoint(1) === HYPHEN_MINUS && - this.peekCodePoint(2) === HYPHEN_MINUS) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDO_TOKEN; - } - break; - case COMMERCIAL_AT: - var a1 = this.peekCodePoint(0); - var a2 = this.peekCodePoint(1); - var a3 = this.peekCodePoint(2); - if (isIdentifierStart(a1, a2, a3)) { - var value = this.consumeName(); - return { type: 7 /* AT_KEYWORD_TOKEN */, value: value }; - } - break; - case LEFT_SQUARE_BRACKET: - return LEFT_SQUARE_BRACKET_TOKEN; - case REVERSE_SOLIDUS: - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - break; - case RIGHT_SQUARE_BRACKET: - return RIGHT_SQUARE_BRACKET_TOKEN; - case CIRCUMFLEX_ACCENT: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return PREFIX_MATCH_TOKEN; - } - break; - case LEFT_CURLY_BRACKET: - return LEFT_CURLY_BRACKET_TOKEN; - case RIGHT_CURLY_BRACKET: - return RIGHT_CURLY_BRACKET_TOKEN; - case u: - case U: - var u1 = this.peekCodePoint(0); - var u2 = this.peekCodePoint(1); - if (u1 === PLUS_SIGN && (isHex(u2) || u2 === QUESTION_MARK)) { - this.consumeCodePoint(); - this.consumeUnicodeRangeToken(); - } - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - case VERTICAL_LINE: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return DASH_MATCH_TOKEN; - } - if (this.peekCodePoint(0) === VERTICAL_LINE) { - this.consumeCodePoint(); - return COLUMN_TOKEN; - } - break; - case TILDE: - if 
(this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return INCLUDE_MATCH_TOKEN; - } - break; - case EOF: - return EOF_TOKEN; - } - if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - return WHITESPACE_TOKEN; - } - if (isDigit(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isNameStartCodePoint(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - return { type: 6 /* DELIM_TOKEN */, value: fromCodePoint$1(codePoint) }; - }; - Tokenizer.prototype.consumeCodePoint = function () { - var value = this._value.shift(); - return typeof value === 'undefined' ? -1 : value; - }; - Tokenizer.prototype.reconsumeCodePoint = function (codePoint) { - this._value.unshift(codePoint); - }; - Tokenizer.prototype.peekCodePoint = function (delta) { - if (delta >= this._value.length) { - return -1; - } - return this._value[delta]; - }; - Tokenizer.prototype.consumeUnicodeRangeToken = function () { - var digits = []; - var codePoint = this.consumeCodePoint(); - while (isHex(codePoint) && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var questionMarks = false; - while (codePoint === QUESTION_MARK && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - questionMarks = true; - } - if (questionMarks) { - var start_1 = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? ZERO : digit); })), 16); - var end = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? F : digit); })), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start_1, end: end }; - } - var start = parseInt(fromCodePoint$1.apply(void 0, digits), 16); - if (this.peekCodePoint(0) === HYPHEN_MINUS && isHex(this.peekCodePoint(1))) { - this.consumeCodePoint(); - codePoint = this.consumeCodePoint(); - var endDigits = []; - while (isHex(codePoint) && endDigits.length < 6) { - endDigits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var end = parseInt(fromCodePoint$1.apply(void 0, endDigits), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: end }; - } - else { - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: start }; - } - }; - Tokenizer.prototype.consumeIdentLikeToken = function () { - var value = this.consumeName(); - if (value.toLowerCase() === 'url' && this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return this.consumeUrlToken(); - } - else if (this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 19 /* FUNCTION_TOKEN */, value: value }; - } - return { type: 20 /* IDENT_TOKEN */, value: value }; - }; - Tokenizer.prototype.consumeUrlToken = function () { - var value = []; - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF) { - return { type: 22 /* URL_TOKEN */, value: '' }; - } - var next = this.peekCodePoint(0); - if (next === APOSTROPHE || next === QUOTATION_MARK) { - var stringToken = this.consumeStringToken(this.consumeCodePoint()); - if (stringToken.type === 0 /* STRING_TOKEN */) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: stringToken.value }; - } - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - while (true) { - var codePoint = this.consumeCodePoint(); - if 
(codePoint === EOF || codePoint === RIGHT_PARENTHESIS) { - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - else if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === QUOTATION_MARK || - codePoint === APOSTROPHE || - codePoint === LEFT_PARENTHESIS || - isNonPrintableCodePoint(codePoint)) { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === REVERSE_SOLIDUS) { - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - value.push(this.consumeEscapedCodePoint()); - } - else { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - } - else { - value.push(codePoint); - } - } - }; - Tokenizer.prototype.consumeWhiteSpace = function () { - while (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - }; - Tokenizer.prototype.consumeBadUrlRemnants = function () { - while (true) { - var codePoint = this.consumeCodePoint(); - if (codePoint === RIGHT_PARENTHESIS || codePoint === EOF) { - return; - } - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.consumeEscapedCodePoint(); - } - } - }; - Tokenizer.prototype.consumeStringSlice = function (count) { - var SLICE_STACK_SIZE = 50000; - var value = ''; - while (count > 0) { - var amount = Math.min(SLICE_STACK_SIZE, count); - value += fromCodePoint$1.apply(void 0, this._value.splice(0, amount)); - count -= amount; - } - this._value.shift(); - return value; - }; - Tokenizer.prototype.consumeStringToken = function (endingCodePoint) { - var value = ''; - var i = 0; - do { - var codePoint = this._value[i]; - if (codePoint === EOF || codePoint === undefined || codePoint === endingCodePoint) { - value += this.consumeStringSlice(i); - return { type: 0 /* STRING_TOKEN */, value: value }; - } - if (codePoint === LINE_FEED) { - this._value.splice(0, i); - return BAD_STRING_TOKEN; - } - if (codePoint === REVERSE_SOLIDUS) { - var next = this._value[i + 1]; - if (next !== EOF && next !== undefined) { - if (next === LINE_FEED) { - value += this.consumeStringSlice(i); - i = -1; - this._value.shift(); - } - else if (isValidEscape(codePoint, next)) { - value += this.consumeStringSlice(i); - value += fromCodePoint$1(this.consumeEscapedCodePoint()); - i = -1; - } - } - } - i++; - } while (true); - }; - Tokenizer.prototype.consumeNumber = function () { - var repr = []; - var type = FLAG_INTEGER; - var c1 = this.peekCodePoint(0); - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - repr.push(this.consumeCodePoint()); - } - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - if (c1 === FULL_STOP && isDigit(c2)) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - c1 = this.peekCodePoint(0); - c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if ((c1 === E || c1 === e) && (((c2 === PLUS_SIGN || c2 === HYPHEN_MINUS) && isDigit(c3)) || isDigit(c2))) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - return [stringToNumber(repr), 
type]; - }; - Tokenizer.prototype.consumeNumericToken = function () { - var _a = this.consumeNumber(), number = _a[0], flags = _a[1]; - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isIdentifierStart(c1, c2, c3)) { - var unit = this.consumeName(); - return { type: 15 /* DIMENSION_TOKEN */, number: number, flags: flags, unit: unit }; - } - if (c1 === PERCENTAGE_SIGN) { - this.consumeCodePoint(); - return { type: 16 /* PERCENTAGE_TOKEN */, number: number, flags: flags }; - } - return { type: 17 /* NUMBER_TOKEN */, number: number, flags: flags }; - }; - Tokenizer.prototype.consumeEscapedCodePoint = function () { - var codePoint = this.consumeCodePoint(); - if (isHex(codePoint)) { - var hex = fromCodePoint$1(codePoint); - while (isHex(this.peekCodePoint(0)) && hex.length < 6) { - hex += fromCodePoint$1(this.consumeCodePoint()); - } - if (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - var hexCodePoint = parseInt(hex, 16); - if (hexCodePoint === 0 || isSurrogateCodePoint(hexCodePoint) || hexCodePoint > 0x10ffff) { - return REPLACEMENT_CHARACTER; - } - return hexCodePoint; - } - if (codePoint === EOF) { - return REPLACEMENT_CHARACTER; - } - return codePoint; - }; - Tokenizer.prototype.consumeName = function () { - var result = ''; - while (true) { - var codePoint = this.consumeCodePoint(); - if (isNameCodePoint(codePoint)) { - result += fromCodePoint$1(codePoint); - } - else if (isValidEscape(codePoint, this.peekCodePoint(0))) { - result += fromCodePoint$1(this.consumeEscapedCodePoint()); - } - else { - this.reconsumeCodePoint(codePoint); - return result; - } - } - }; - return Tokenizer; - }()); - - var Parser = /** @class */ (function () { - function Parser(tokens) { - this._tokens = tokens; - } - Parser.create = function (value) { - var tokenizer = new Tokenizer(); - tokenizer.write(value); - return new Parser(tokenizer.read()); - }; - Parser.parseValue = function (value) { - return Parser.create(value).parseComponentValue(); - }; - Parser.parseValues = function (value) { - return Parser.create(value).parseComponentValues(); - }; - Parser.prototype.parseComponentValue = function () { - var token = this.consumeToken(); - while (token.type === 31 /* WHITESPACE_TOKEN */) { - token = this.consumeToken(); - } - if (token.type === 32 /* EOF_TOKEN */) { - throw new SyntaxError("Error parsing CSS component value, unexpected EOF"); - } - this.reconsumeToken(token); - var value = this.consumeComponentValue(); - do { - token = this.consumeToken(); - } while (token.type === 31 /* WHITESPACE_TOKEN */); - if (token.type === 32 /* EOF_TOKEN */) { - return value; - } - throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one"); - }; - Parser.prototype.parseComponentValues = function () { - var values = []; - while (true) { - var value = this.consumeComponentValue(); - if (value.type === 32 /* EOF_TOKEN */) { - return values; - } - values.push(value); - values.push(); - } - }; - Parser.prototype.consumeComponentValue = function () { - var token = this.consumeToken(); - switch (token.type) { - case 11 /* LEFT_CURLY_BRACKET_TOKEN */: - case 28 /* LEFT_SQUARE_BRACKET_TOKEN */: - case 2 /* LEFT_PARENTHESIS_TOKEN */: - return this.consumeSimpleBlock(token.type); - case 19 /* FUNCTION_TOKEN */: - return this.consumeFunction(token); - } - return token; - }; - Parser.prototype.consumeSimpleBlock = function (type) { - var block = { type: type, values: [] }; - var token = 
this.consumeToken(); - while (true) { - if (token.type === 32 /* EOF_TOKEN */ || isEndingTokenFor(token, type)) { - return block; - } - this.reconsumeToken(token); - block.values.push(this.consumeComponentValue()); - token = this.consumeToken(); - } - }; - Parser.prototype.consumeFunction = function (functionToken) { - var cssFunction = { - name: functionToken.value, - values: [], - type: 18 /* FUNCTION */ - }; - while (true) { - var token = this.consumeToken(); - if (token.type === 32 /* EOF_TOKEN */ || token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */) { - return cssFunction; - } - this.reconsumeToken(token); - cssFunction.values.push(this.consumeComponentValue()); - } - }; - Parser.prototype.consumeToken = function () { - var token = this._tokens.shift(); - return typeof token === 'undefined' ? EOF_TOKEN : token; - }; - Parser.prototype.reconsumeToken = function (token) { - this._tokens.unshift(token); - }; - return Parser; - }()); - var isDimensionToken = function (token) { return token.type === 15 /* DIMENSION_TOKEN */; }; - var isNumberToken = function (token) { return token.type === 17 /* NUMBER_TOKEN */; }; - var isIdentToken = function (token) { return token.type === 20 /* IDENT_TOKEN */; }; - var isStringToken = function (token) { return token.type === 0 /* STRING_TOKEN */; }; - var isIdentWithValue = function (token, value) { - return isIdentToken(token) && token.value === value; - }; - var nonWhiteSpace = function (token) { return token.type !== 31 /* WHITESPACE_TOKEN */; }; - var nonFunctionArgSeparator = function (token) { - return token.type !== 31 /* WHITESPACE_TOKEN */ && token.type !== 4 /* COMMA_TOKEN */; - }; - var parseFunctionArgs = function (tokens) { - var args = []; - var arg = []; - tokens.forEach(function (token) { - if (token.type === 4 /* COMMA_TOKEN */) { - if (arg.length === 0) { - throw new Error("Error parsing function args, zero tokens for arg"); - } - args.push(arg); - arg = []; - return; - } - if (token.type !== 31 /* WHITESPACE_TOKEN */) { - arg.push(token); - } - }); - if (arg.length) { - args.push(arg); - } - return args; - }; - var isEndingTokenFor = function (token, type) { - if (type === 11 /* LEFT_CURLY_BRACKET_TOKEN */ && token.type === 12 /* RIGHT_CURLY_BRACKET_TOKEN */) { - return true; - } - if (type === 28 /* LEFT_SQUARE_BRACKET_TOKEN */ && token.type === 29 /* RIGHT_SQUARE_BRACKET_TOKEN */) { - return true; - } - return type === 2 /* LEFT_PARENTHESIS_TOKEN */ && token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */; - }; - - var isLength = function (token) { - return token.type === 17 /* NUMBER_TOKEN */ || token.type === 15 /* DIMENSION_TOKEN */; - }; - - var isLengthPercentage = function (token) { - return token.type === 16 /* PERCENTAGE_TOKEN */ || isLength(token); - }; - var parseLengthPercentageTuple = function (tokens) { - return tokens.length > 1 ? [tokens[0], tokens[1]] : [tokens[0]]; - }; - var ZERO_LENGTH = { - type: 17 /* NUMBER_TOKEN */, - number: 0, - flags: FLAG_INTEGER - }; - var FIFTY_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var HUNDRED_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 100, - flags: FLAG_INTEGER - }; - var getAbsoluteValueForTuple = function (tuple, width, height) { - var x = tuple[0], y = tuple[1]; - return [getAbsoluteValue(x, width), getAbsoluteValue(typeof y !== 'undefined' ? 
y : x, height)]; - }; - var getAbsoluteValue = function (token, parent) { - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - return (token.number / 100) * parent; - } - if (isDimensionToken(token)) { - switch (token.unit) { - case 'rem': - case 'em': - return 16 * token.number; // TODO use correct font-size - case 'px': - default: - return token.number; - } - } - return token.number; - }; - - var DEG = 'deg'; - var GRAD = 'grad'; - var RAD = 'rad'; - var TURN = 'turn'; - var angle = { - name: 'angle', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit) { - case DEG: - return (Math.PI * value.number) / 180; - case GRAD: - return (Math.PI / 200) * value.number; - case RAD: - return value.number; - case TURN: - return Math.PI * 2 * value.number; - } - } - throw new Error("Unsupported angle type"); - } - }; - var isAngle = function (value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - if (value.unit === DEG || value.unit === GRAD || value.unit === RAD || value.unit === TURN) { - return true; - } - } - return false; - }; - var parseNamedSide = function (tokens) { - var sideOrCorner = tokens - .filter(isIdentToken) - .map(function (ident) { return ident.value; }) - .join(' '); - switch (sideOrCorner) { - case 'to bottom right': - case 'to right bottom': - case 'left top': - case 'top left': - return [ZERO_LENGTH, ZERO_LENGTH]; - case 'to top': - case 'bottom': - return deg(0); - case 'to bottom left': - case 'to left bottom': - case 'right top': - case 'top right': - return [ZERO_LENGTH, HUNDRED_PERCENT]; - case 'to right': - case 'left': - return deg(90); - case 'to top left': - case 'to left top': - case 'right bottom': - case 'bottom right': - return [HUNDRED_PERCENT, HUNDRED_PERCENT]; - case 'to bottom': - case 'top': - return deg(180); - case 'to top right': - case 'to right top': - case 'left bottom': - case 'bottom left': - return [HUNDRED_PERCENT, ZERO_LENGTH]; - case 'to left': - case 'right': - return deg(270); - } - return 0; - }; - var deg = function (deg) { return (Math.PI * deg) / 180; }; - - var color$1 = { - name: 'color', - parse: function (context, value) { - if (value.type === 18 /* FUNCTION */) { - var colorFunction = SUPPORTED_COLOR_FUNCTIONS[value.name]; - if (typeof colorFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported color function \"" + value.name + "\""); - } - return colorFunction(context, value.values); - } - if (value.type === 5 /* HASH_TOKEN */) { - if (value.value.length === 3) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), 1); - } - if (value.value.length === 4) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - var a = value.value.substring(3, 4); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), parseInt(a + a, 16) / 255); - } - if (value.value.length === 6) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), 1); - } - if (value.value.length === 8) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - var a = value.value.substring(6, 8); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), parseInt(a, 16) / 255); - } 
- } - if (value.type === 20 /* IDENT_TOKEN */) { - var namedColor = COLORS[value.value.toUpperCase()]; - if (typeof namedColor !== 'undefined') { - return namedColor; - } - } - return COLORS.TRANSPARENT; - } - }; - var isTransparent = function (color) { return (0xff & color) === 0; }; - var asString = function (color) { - var alpha = 0xff & color; - var blue = 0xff & (color >> 8); - var green = 0xff & (color >> 16); - var red = 0xff & (color >> 24); - return alpha < 255 ? "rgba(" + red + "," + green + "," + blue + "," + alpha / 255 + ")" : "rgb(" + red + "," + green + "," + blue + ")"; - }; - var pack = function (r, g, b, a) { - return ((r << 24) | (g << 16) | (b << 8) | (Math.round(a * 255) << 0)) >>> 0; - }; - var getTokenColorValue = function (token, i) { - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - var max = i === 3 ? 1 : 255; - return i === 3 ? (token.number / 100) * max : Math.round((token.number / 100) * max); - } - return 0; - }; - var rgb = function (_context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - if (tokens.length === 3) { - var _a = tokens.map(getTokenColorValue), r = _a[0], g = _a[1], b = _a[2]; - return pack(r, g, b, 1); - } - if (tokens.length === 4) { - var _b = tokens.map(getTokenColorValue), r = _b[0], g = _b[1], b = _b[2], a = _b[3]; - return pack(r, g, b, a); - } - return 0; - }; - function hue2rgb(t1, t2, hue) { - if (hue < 0) { - hue += 1; - } - if (hue >= 1) { - hue -= 1; - } - if (hue < 1 / 6) { - return (t2 - t1) * hue * 6 + t1; - } - else if (hue < 1 / 2) { - return t2; - } - else if (hue < 2 / 3) { - return (t2 - t1) * 6 * (2 / 3 - hue) + t1; - } - else { - return t1; - } - } - var hsl = function (context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - var hue = tokens[0], saturation = tokens[1], lightness = tokens[2], alpha = tokens[3]; - var h = (hue.type === 17 /* NUMBER_TOKEN */ ? deg(hue.number) : angle.parse(context, hue)) / (Math.PI * 2); - var s = isLengthPercentage(saturation) ? saturation.number / 100 : 0; - var l = isLengthPercentage(lightness) ? lightness.number / 100 : 0; - var a = typeof alpha !== 'undefined' && isLengthPercentage(alpha) ? getAbsoluteValue(alpha, 1) : 1; - if (s === 0) { - return pack(l * 255, l * 255, l * 255, 1); - } - var t2 = l <= 0.5 ? 
l * (s + 1) : l + s - l * s; - var t1 = l * 2 - t2; - var r = hue2rgb(t1, t2, h + 1 / 3); - var g = hue2rgb(t1, t2, h); - var b = hue2rgb(t1, t2, h - 1 / 3); - return pack(r * 255, g * 255, b * 255, a); - }; - var SUPPORTED_COLOR_FUNCTIONS = { - hsl: hsl, - hsla: hsl, - rgb: rgb, - rgba: rgb - }; - var parseColor = function (context, value) { - return color$1.parse(context, Parser.create(value).parseComponentValue()); - }; - var COLORS = { - ALICEBLUE: 0xf0f8ffff, - ANTIQUEWHITE: 0xfaebd7ff, - AQUA: 0x00ffffff, - AQUAMARINE: 0x7fffd4ff, - AZURE: 0xf0ffffff, - BEIGE: 0xf5f5dcff, - BISQUE: 0xffe4c4ff, - BLACK: 0x000000ff, - BLANCHEDALMOND: 0xffebcdff, - BLUE: 0x0000ffff, - BLUEVIOLET: 0x8a2be2ff, - BROWN: 0xa52a2aff, - BURLYWOOD: 0xdeb887ff, - CADETBLUE: 0x5f9ea0ff, - CHARTREUSE: 0x7fff00ff, - CHOCOLATE: 0xd2691eff, - CORAL: 0xff7f50ff, - CORNFLOWERBLUE: 0x6495edff, - CORNSILK: 0xfff8dcff, - CRIMSON: 0xdc143cff, - CYAN: 0x00ffffff, - DARKBLUE: 0x00008bff, - DARKCYAN: 0x008b8bff, - DARKGOLDENROD: 0xb886bbff, - DARKGRAY: 0xa9a9a9ff, - DARKGREEN: 0x006400ff, - DARKGREY: 0xa9a9a9ff, - DARKKHAKI: 0xbdb76bff, - DARKMAGENTA: 0x8b008bff, - DARKOLIVEGREEN: 0x556b2fff, - DARKORANGE: 0xff8c00ff, - DARKORCHID: 0x9932ccff, - DARKRED: 0x8b0000ff, - DARKSALMON: 0xe9967aff, - DARKSEAGREEN: 0x8fbc8fff, - DARKSLATEBLUE: 0x483d8bff, - DARKSLATEGRAY: 0x2f4f4fff, - DARKSLATEGREY: 0x2f4f4fff, - DARKTURQUOISE: 0x00ced1ff, - DARKVIOLET: 0x9400d3ff, - DEEPPINK: 0xff1493ff, - DEEPSKYBLUE: 0x00bfffff, - DIMGRAY: 0x696969ff, - DIMGREY: 0x696969ff, - DODGERBLUE: 0x1e90ffff, - FIREBRICK: 0xb22222ff, - FLORALWHITE: 0xfffaf0ff, - FORESTGREEN: 0x228b22ff, - FUCHSIA: 0xff00ffff, - GAINSBORO: 0xdcdcdcff, - GHOSTWHITE: 0xf8f8ffff, - GOLD: 0xffd700ff, - GOLDENROD: 0xdaa520ff, - GRAY: 0x808080ff, - GREEN: 0x008000ff, - GREENYELLOW: 0xadff2fff, - GREY: 0x808080ff, - HONEYDEW: 0xf0fff0ff, - HOTPINK: 0xff69b4ff, - INDIANRED: 0xcd5c5cff, - INDIGO: 0x4b0082ff, - IVORY: 0xfffff0ff, - KHAKI: 0xf0e68cff, - LAVENDER: 0xe6e6faff, - LAVENDERBLUSH: 0xfff0f5ff, - LAWNGREEN: 0x7cfc00ff, - LEMONCHIFFON: 0xfffacdff, - LIGHTBLUE: 0xadd8e6ff, - LIGHTCORAL: 0xf08080ff, - LIGHTCYAN: 0xe0ffffff, - LIGHTGOLDENRODYELLOW: 0xfafad2ff, - LIGHTGRAY: 0xd3d3d3ff, - LIGHTGREEN: 0x90ee90ff, - LIGHTGREY: 0xd3d3d3ff, - LIGHTPINK: 0xffb6c1ff, - LIGHTSALMON: 0xffa07aff, - LIGHTSEAGREEN: 0x20b2aaff, - LIGHTSKYBLUE: 0x87cefaff, - LIGHTSLATEGRAY: 0x778899ff, - LIGHTSLATEGREY: 0x778899ff, - LIGHTSTEELBLUE: 0xb0c4deff, - LIGHTYELLOW: 0xffffe0ff, - LIME: 0x00ff00ff, - LIMEGREEN: 0x32cd32ff, - LINEN: 0xfaf0e6ff, - MAGENTA: 0xff00ffff, - MAROON: 0x800000ff, - MEDIUMAQUAMARINE: 0x66cdaaff, - MEDIUMBLUE: 0x0000cdff, - MEDIUMORCHID: 0xba55d3ff, - MEDIUMPURPLE: 0x9370dbff, - MEDIUMSEAGREEN: 0x3cb371ff, - MEDIUMSLATEBLUE: 0x7b68eeff, - MEDIUMSPRINGGREEN: 0x00fa9aff, - MEDIUMTURQUOISE: 0x48d1ccff, - MEDIUMVIOLETRED: 0xc71585ff, - MIDNIGHTBLUE: 0x191970ff, - MINTCREAM: 0xf5fffaff, - MISTYROSE: 0xffe4e1ff, - MOCCASIN: 0xffe4b5ff, - NAVAJOWHITE: 0xffdeadff, - NAVY: 0x000080ff, - OLDLACE: 0xfdf5e6ff, - OLIVE: 0x808000ff, - OLIVEDRAB: 0x6b8e23ff, - ORANGE: 0xffa500ff, - ORANGERED: 0xff4500ff, - ORCHID: 0xda70d6ff, - PALEGOLDENROD: 0xeee8aaff, - PALEGREEN: 0x98fb98ff, - PALETURQUOISE: 0xafeeeeff, - PALEVIOLETRED: 0xdb7093ff, - PAPAYAWHIP: 0xffefd5ff, - PEACHPUFF: 0xffdab9ff, - PERU: 0xcd853fff, - PINK: 0xffc0cbff, - PLUM: 0xdda0ddff, - POWDERBLUE: 0xb0e0e6ff, - PURPLE: 0x800080ff, - REBECCAPURPLE: 0x663399ff, - RED: 0xff0000ff, - ROSYBROWN: 0xbc8f8fff, - ROYALBLUE: 0x4169e1ff, - 
SADDLEBROWN: 0x8b4513ff, - SALMON: 0xfa8072ff, - SANDYBROWN: 0xf4a460ff, - SEAGREEN: 0x2e8b57ff, - SEASHELL: 0xfff5eeff, - SIENNA: 0xa0522dff, - SILVER: 0xc0c0c0ff, - SKYBLUE: 0x87ceebff, - SLATEBLUE: 0x6a5acdff, - SLATEGRAY: 0x708090ff, - SLATEGREY: 0x708090ff, - SNOW: 0xfffafaff, - SPRINGGREEN: 0x00ff7fff, - STEELBLUE: 0x4682b4ff, - TAN: 0xd2b48cff, - TEAL: 0x008080ff, - THISTLE: 0xd8bfd8ff, - TOMATO: 0xff6347ff, - TRANSPARENT: 0x00000000, - TURQUOISE: 0x40e0d0ff, - VIOLET: 0xee82eeff, - WHEAT: 0xf5deb3ff, - WHITE: 0xffffffff, - WHITESMOKE: 0xf5f5f5ff, - YELLOW: 0xffff00ff, - YELLOWGREEN: 0x9acd32ff - }; - - var backgroundClip = { - name: 'background-clip', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundColor = { - name: "background-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var parseColorStop = function (context, args) { - var color = color$1.parse(context, args[0]); - var stop = args[1]; - return stop && isLengthPercentage(stop) ? { color: color, stop: stop } : { color: color, stop: null }; - }; - var processColorStops = function (stops, lineLength) { - var first = stops[0]; - var last = stops[stops.length - 1]; - if (first.stop === null) { - first.stop = ZERO_LENGTH; - } - if (last.stop === null) { - last.stop = HUNDRED_PERCENT; - } - var processStops = []; - var previous = 0; - for (var i = 0; i < stops.length; i++) { - var stop_1 = stops[i].stop; - if (stop_1 !== null) { - var absoluteValue = getAbsoluteValue(stop_1, lineLength); - if (absoluteValue > previous) { - processStops.push(absoluteValue); - } - else { - processStops.push(previous); - } - previous = absoluteValue; - } - else { - processStops.push(null); - } - } - var gapBegin = null; - for (var i = 0; i < processStops.length; i++) { - var stop_2 = processStops[i]; - if (stop_2 === null) { - if (gapBegin === null) { - gapBegin = i; - } - } - else if (gapBegin !== null) { - var gapLength = i - gapBegin; - var beforeGap = processStops[gapBegin - 1]; - var gapValue = (stop_2 - beforeGap) / (gapLength + 1); - for (var g = 1; g <= gapLength; g++) { - processStops[gapBegin + g - 1] = gapValue * g; - } - gapBegin = null; - } - } - return stops.map(function (_a, i) { - var color = _a.color; - return { color: color, stop: Math.max(Math.min(1, processStops[i] / lineLength), 0) }; - }); - }; - var getAngleFromCorner = function (corner, width, height) { - var centerX = width / 2; - var centerY = height / 2; - var x = getAbsoluteValue(corner[0], width) - centerX; - var y = centerY - getAbsoluteValue(corner[1], height); - return (Math.atan2(y, x) + Math.PI * 2) % (Math.PI * 2); - }; - var calculateGradientDirection = function (angle, width, height) { - var radian = typeof angle === 'number' ? 
angle : getAngleFromCorner(angle, width, height); - var lineLength = Math.abs(width * Math.sin(radian)) + Math.abs(height * Math.cos(radian)); - var halfWidth = width / 2; - var halfHeight = height / 2; - var halfLineLength = lineLength / 2; - var yDiff = Math.sin(radian - Math.PI / 2) * halfLineLength; - var xDiff = Math.cos(radian - Math.PI / 2) * halfLineLength; - return [lineLength, halfWidth - xDiff, halfWidth + xDiff, halfHeight - yDiff, halfHeight + yDiff]; - }; - var distance = function (a, b) { return Math.sqrt(a * a + b * b); }; - var findCorner = function (width, height, x, y, closest) { - var corners = [ - [0, 0], - [0, height], - [width, 0], - [width, height] - ]; - return corners.reduce(function (stat, corner) { - var cx = corner[0], cy = corner[1]; - var d = distance(x - cx, y - cy); - if (closest ? d < stat.optimumDistance : d > stat.optimumDistance) { - return { - optimumCorner: corner, - optimumDistance: d - }; - } - return stat; - }, { - optimumDistance: closest ? Infinity : -Infinity, - optimumCorner: null - }).optimumCorner; - }; - var calculateRadius = function (gradient, x, y, width, height) { - var rx = 0; - var ry = 0; - switch (gradient.size) { - case 0 /* CLOSEST_SIDE */: - // The ending shape is sized so that that it exactly meets the side of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, it exactly meets the closest side in each dimension. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.min(Math.abs(x), Math.abs(x - width)); - ry = Math.min(Math.abs(y), Math.abs(y - height)); - } - break; - case 2 /* CLOSEST_CORNER */: - // The ending shape is sized so that that it passes through the corner of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, the ending shape is given the same aspect-ratio it would have if closest-side were specified. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "closest-side") - var c = Math.min(Math.abs(y), Math.abs(y - height)) / Math.min(Math.abs(x), Math.abs(x - width)); - var _a = findCorner(width, height, x, y, true), cx = _a[0], cy = _a[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - case 1 /* FARTHEST_SIDE */: - // Same as closest-side, except the ending shape is sized based on the farthest side(s) - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.max(Math.abs(x), Math.abs(x - width)); - ry = Math.max(Math.abs(y), Math.abs(y - height)); - } - break; - case 3 /* FARTHEST_CORNER */: - // Same as closest-corner, except the ending shape is sized based on the farthest corner. - // If the shape is an ellipse, the ending shape is given the same aspect ratio it would have if farthest-side were specified. 
- if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "farthest-side") - var c = Math.max(Math.abs(y), Math.abs(y - height)) / Math.max(Math.abs(x), Math.abs(x - width)); - var _b = findCorner(width, height, x, y, false), cx = _b[0], cy = _b[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - } - if (Array.isArray(gradient.size)) { - rx = getAbsoluteValue(gradient.size[0], width); - ry = gradient.size.length === 2 ? getAbsoluteValue(gradient.size[1], height) : rx; - } - return [rx, ry]; - }; - - var linearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && firstToken.value === 'to') { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = angle.parse(context, firstToken); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { angle: angle$1, stops: stops, type: 1 /* LINEAR_GRADIENT */ }; - }; - - var prefixLinearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && - ['top', 'left', 'right', 'bottom'].indexOf(firstToken.value) !== -1) { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = (angle.parse(context, firstToken) + deg(270)) % deg(360); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { - angle: angle$1, - stops: stops, - type: 1 /* LINEAR_GRADIENT */ - }; - }; - - var webkitGradient = function (context, tokens) { - var angle = deg(180); - var stops = []; - var type = 1 /* LINEAR_GRADIENT */; - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var firstToken = arg[0]; - if (i === 0) { - if (isIdentToken(firstToken) && firstToken.value === 'linear') { - type = 1 /* LINEAR_GRADIENT */; - return; - } - else if (isIdentToken(firstToken) && firstToken.value === 'radial') { - type = 2 /* RADIAL_GRADIENT */; - return; - } - } - if (firstToken.type === 18 /* FUNCTION */) { - if (firstToken.name === 'from') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: ZERO_LENGTH, color: color }); - } - else if (firstToken.name === 'to') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: HUNDRED_PERCENT, color: color }); - } - else if (firstToken.name === 'color-stop') { - var values = firstToken.values.filter(nonFunctionArgSeparator); - if (values.length === 2) { - var color = color$1.parse(context, values[1]); - var stop_1 = values[0]; - if (isNumberToken(stop_1)) { - stops.push({ - stop: { type: 16 /* PERCENTAGE_TOKEN */, number: stop_1.number * 100, flags: stop_1.flags }, - color: color - }); - } - } - } - } - }); - return type === 1 /* LINEAR_GRADIENT */ - ? 
{ - angle: (angle + deg(180)) % deg(360), - stops: stops, - type: type - } - : { size: size, shape: shape, stops: stops, position: position, type: type }; - }; - - var CLOSEST_SIDE = 'closest-side'; - var FARTHEST_SIDE = 'farthest-side'; - var CLOSEST_CORNER = 'closest-corner'; - var FARTHEST_CORNER = 'farthest-corner'; - var CIRCLE = 'circle'; - var ELLIPSE = 'ellipse'; - var COVER = 'cover'; - var CONTAIN = 'contain'; - var radialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - var isAtPosition_1 = false; - isColorStop = arg.reduce(function (acc, token) { - if (isAtPosition_1) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return acc; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return acc; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return acc; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - } - } - else if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case 'at': - isAtPosition_1 = true; - return false; - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case COVER: - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CONTAIN: - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - } - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var prefixRadialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return false; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return false; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return false; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - return false; - } - return acc; - }, isColorStop); - } - else if (i === 1) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case CONTAIN: - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case COVER: - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - 
} - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var isLinearGradient = function (background) { - return background.type === 1 /* LINEAR_GRADIENT */; - }; - var isRadialGradient = function (background) { - return background.type === 2 /* RADIAL_GRADIENT */; - }; - var image = { - name: 'image', - parse: function (context, value) { - if (value.type === 22 /* URL_TOKEN */) { - var image_1 = { url: value.value, type: 0 /* URL */ }; - context.cache.addImage(value.value); - return image_1; - } - if (value.type === 18 /* FUNCTION */) { - var imageFunction = SUPPORTED_IMAGE_FUNCTIONS[value.name]; - if (typeof imageFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported image function \"" + value.name + "\""); - } - return imageFunction(context, value.values); - } - throw new Error("Unsupported image type " + value.type); - } - }; - function isSupportedImage(value) { - return (!(value.type === 20 /* IDENT_TOKEN */ && value.value === 'none') && - (value.type !== 18 /* FUNCTION */ || !!SUPPORTED_IMAGE_FUNCTIONS[value.name])); - } - var SUPPORTED_IMAGE_FUNCTIONS = { - 'linear-gradient': linearGradient, - '-moz-linear-gradient': prefixLinearGradient, - '-ms-linear-gradient': prefixLinearGradient, - '-o-linear-gradient': prefixLinearGradient, - '-webkit-linear-gradient': prefixLinearGradient, - 'radial-gradient': radialGradient, - '-moz-radial-gradient': prefixRadialGradient, - '-ms-radial-gradient': prefixRadialGradient, - '-o-radial-gradient': prefixRadialGradient, - '-webkit-radial-gradient': prefixRadialGradient, - '-webkit-gradient': webkitGradient - }; - - var backgroundImage = { - name: 'background-image', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens - .filter(function (value) { return nonFunctionArgSeparator(value) && isSupportedImage(value); }) - .map(function (value) { return image.parse(context, value); }); - } - }; - - var backgroundOrigin = { - name: 'background-origin', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundPosition = { - name: 'background-position', - initialValue: '0% 0%', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { return values.filter(isLengthPercentage); }) - .map(parseLengthPercentageTuple); - } - }; - - var backgroundRepeat = { - name: 'background-repeat', - initialValue: 'repeat', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { - return values - .filter(isIdentToken) - .map(function (token) { return token.value; }) - .join(' '); - }) - .map(parseBackgroundRepeat); - } - }; - var parseBackgroundRepeat = function (value) { - switch (value) { - case 'no-repeat': - return 1 /* NO_REPEAT */; - case 'repeat-x': - 
case 'repeat no-repeat': - return 2 /* REPEAT_X */; - case 'repeat-y': - case 'no-repeat repeat': - return 3 /* REPEAT_Y */; - case 'repeat': - default: - return 0 /* REPEAT */; - } - }; - - var BACKGROUND_SIZE; - (function (BACKGROUND_SIZE) { - BACKGROUND_SIZE["AUTO"] = "auto"; - BACKGROUND_SIZE["CONTAIN"] = "contain"; - BACKGROUND_SIZE["COVER"] = "cover"; - })(BACKGROUND_SIZE || (BACKGROUND_SIZE = {})); - var backgroundSize = { - name: 'background-size', - initialValue: '0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens).map(function (values) { return values.filter(isBackgroundSizeInfoToken); }); - } - }; - var isBackgroundSizeInfoToken = function (value) { - return isIdentToken(value) || isLengthPercentage(value); - }; - - var borderColorForSide = function (side) { return ({ - name: "border-" + side + "-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }); }; - var borderTopColor = borderColorForSide('top'); - var borderRightColor = borderColorForSide('right'); - var borderBottomColor = borderColorForSide('bottom'); - var borderLeftColor = borderColorForSide('left'); - - var borderRadiusForSide = function (side) { return ({ - name: "border-radius-" + side, - initialValue: '0 0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseLengthPercentageTuple(tokens.filter(isLengthPercentage)); - } - }); }; - var borderTopLeftRadius = borderRadiusForSide('top-left'); - var borderTopRightRadius = borderRadiusForSide('top-right'); - var borderBottomRightRadius = borderRadiusForSide('bottom-right'); - var borderBottomLeftRadius = borderRadiusForSide('bottom-left'); - - var borderStyleForSide = function (side) { return ({ - name: "border-" + side + "-style", - initialValue: 'solid', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, style) { - switch (style) { - case 'none': - return 0 /* NONE */; - case 'dashed': - return 2 /* DASHED */; - case 'dotted': - return 3 /* DOTTED */; - case 'double': - return 4 /* DOUBLE */; - } - return 1 /* SOLID */; - } - }); }; - var borderTopStyle = borderStyleForSide('top'); - var borderRightStyle = borderStyleForSide('right'); - var borderBottomStyle = borderStyleForSide('bottom'); - var borderLeftStyle = borderStyleForSide('left'); - - var borderWidthForSide = function (side) { return ({ - name: "border-" + side + "-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }); }; - var borderTopWidth = borderWidthForSide('top'); - var borderRightWidth = borderWidthForSide('right'); - var borderBottomWidth = borderWidthForSide('bottom'); - var borderLeftWidth = borderWidthForSide('left'); - - var color = { - name: "color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var direction = { - name: 'direction', - initialValue: 'ltr', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, direction) { - switch (direction) { - case 'rtl': - return 1 /* RTL */; - case 'ltr': - default: - return 0 /* LTR */; - } - } - }; - - var display = { - name: 'display', - initialValue: 'inline-block', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).reduce(function (bit, token) { - return bit | parseDisplayValue(token.value); - }, 0 /* 
NONE */); - } - }; - var parseDisplayValue = function (display) { - switch (display) { - case 'block': - case '-webkit-box': - return 2 /* BLOCK */; - case 'inline': - return 4 /* INLINE */; - case 'run-in': - return 8 /* RUN_IN */; - case 'flow': - return 16 /* FLOW */; - case 'flow-root': - return 32 /* FLOW_ROOT */; - case 'table': - return 64 /* TABLE */; - case 'flex': - case '-webkit-flex': - return 128 /* FLEX */; - case 'grid': - case '-ms-grid': - return 256 /* GRID */; - case 'ruby': - return 512 /* RUBY */; - case 'subgrid': - return 1024 /* SUBGRID */; - case 'list-item': - return 2048 /* LIST_ITEM */; - case 'table-row-group': - return 4096 /* TABLE_ROW_GROUP */; - case 'table-header-group': - return 8192 /* TABLE_HEADER_GROUP */; - case 'table-footer-group': - return 16384 /* TABLE_FOOTER_GROUP */; - case 'table-row': - return 32768 /* TABLE_ROW */; - case 'table-cell': - return 65536 /* TABLE_CELL */; - case 'table-column-group': - return 131072 /* TABLE_COLUMN_GROUP */; - case 'table-column': - return 262144 /* TABLE_COLUMN */; - case 'table-caption': - return 524288 /* TABLE_CAPTION */; - case 'ruby-base': - return 1048576 /* RUBY_BASE */; - case 'ruby-text': - return 2097152 /* RUBY_TEXT */; - case 'ruby-base-container': - return 4194304 /* RUBY_BASE_CONTAINER */; - case 'ruby-text-container': - return 8388608 /* RUBY_TEXT_CONTAINER */; - case 'contents': - return 16777216 /* CONTENTS */; - case 'inline-block': - return 33554432 /* INLINE_BLOCK */; - case 'inline-list-item': - return 67108864 /* INLINE_LIST_ITEM */; - case 'inline-table': - return 134217728 /* INLINE_TABLE */; - case 'inline-flex': - return 268435456 /* INLINE_FLEX */; - case 'inline-grid': - return 536870912 /* INLINE_GRID */; - } - return 0 /* NONE */; - }; - - var float = { - name: 'float', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, float) { - switch (float) { - case 'left': - return 1 /* LEFT */; - case 'right': - return 2 /* RIGHT */; - case 'inline-start': - return 3 /* INLINE_START */; - case 'inline-end': - return 4 /* INLINE_END */; - } - return 0 /* NONE */; - } - }; - - var letterSpacing = { - name: 'letter-spacing', - initialValue: '0', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'normal') { - return 0; - } - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 15 /* DIMENSION_TOKEN */) { - return token.number; - } - return 0; - } - }; - - var LINE_BREAK; - (function (LINE_BREAK) { - LINE_BREAK["NORMAL"] = "normal"; - LINE_BREAK["STRICT"] = "strict"; - })(LINE_BREAK || (LINE_BREAK = {})); - var lineBreak = { - name: 'line-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, lineBreak) { - switch (lineBreak) { - case 'strict': - return LINE_BREAK.STRICT; - case 'normal': - default: - return LINE_BREAK.NORMAL; - } - } - }; - - var lineHeight = { - name: 'line-height', - initialValue: 'normal', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }; - var computeLineHeight = function (token, fontSize) { - if (isIdentToken(token) && token.value === 'normal') { - return 1.2 * fontSize; - } - else if (token.type === 17 /* NUMBER_TOKEN */) { - return fontSize * token.number; - } - else if (isLengthPercentage(token)) { - return getAbsoluteValue(token, fontSize); - } - return fontSize; - }; - - var listStyleImage = { - name: 'list-style-image', - initialValue: 
'none', - type: 0 /* VALUE */, - prefix: false, - parse: function (context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - return image.parse(context, token); - } - }; - - var listStylePosition = { - name: 'list-style-position', - initialValue: 'outside', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'inside': - return 0 /* INSIDE */; - case 'outside': - default: - return 1 /* OUTSIDE */; - } - } - }; - - var listStyleType = { - name: 'list-style-type', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, type) { - switch (type) { - case 'disc': - return 0 /* DISC */; - case 'circle': - return 1 /* CIRCLE */; - case 'square': - return 2 /* SQUARE */; - case 'decimal': - return 3 /* DECIMAL */; - case 'cjk-decimal': - return 4 /* CJK_DECIMAL */; - case 'decimal-leading-zero': - return 5 /* DECIMAL_LEADING_ZERO */; - case 'lower-roman': - return 6 /* LOWER_ROMAN */; - case 'upper-roman': - return 7 /* UPPER_ROMAN */; - case 'lower-greek': - return 8 /* LOWER_GREEK */; - case 'lower-alpha': - return 9 /* LOWER_ALPHA */; - case 'upper-alpha': - return 10 /* UPPER_ALPHA */; - case 'arabic-indic': - return 11 /* ARABIC_INDIC */; - case 'armenian': - return 12 /* ARMENIAN */; - case 'bengali': - return 13 /* BENGALI */; - case 'cambodian': - return 14 /* CAMBODIAN */; - case 'cjk-earthly-branch': - return 15 /* CJK_EARTHLY_BRANCH */; - case 'cjk-heavenly-stem': - return 16 /* CJK_HEAVENLY_STEM */; - case 'cjk-ideographic': - return 17 /* CJK_IDEOGRAPHIC */; - case 'devanagari': - return 18 /* DEVANAGARI */; - case 'ethiopic-numeric': - return 19 /* ETHIOPIC_NUMERIC */; - case 'georgian': - return 20 /* GEORGIAN */; - case 'gujarati': - return 21 /* GUJARATI */; - case 'gurmukhi': - return 22 /* GURMUKHI */; - case 'hebrew': - return 22 /* HEBREW */; - case 'hiragana': - return 23 /* HIRAGANA */; - case 'hiragana-iroha': - return 24 /* HIRAGANA_IROHA */; - case 'japanese-formal': - return 25 /* JAPANESE_FORMAL */; - case 'japanese-informal': - return 26 /* JAPANESE_INFORMAL */; - case 'kannada': - return 27 /* KANNADA */; - case 'katakana': - return 28 /* KATAKANA */; - case 'katakana-iroha': - return 29 /* KATAKANA_IROHA */; - case 'khmer': - return 30 /* KHMER */; - case 'korean-hangul-formal': - return 31 /* KOREAN_HANGUL_FORMAL */; - case 'korean-hanja-formal': - return 32 /* KOREAN_HANJA_FORMAL */; - case 'korean-hanja-informal': - return 33 /* KOREAN_HANJA_INFORMAL */; - case 'lao': - return 34 /* LAO */; - case 'lower-armenian': - return 35 /* LOWER_ARMENIAN */; - case 'malayalam': - return 36 /* MALAYALAM */; - case 'mongolian': - return 37 /* MONGOLIAN */; - case 'myanmar': - return 38 /* MYANMAR */; - case 'oriya': - return 39 /* ORIYA */; - case 'persian': - return 40 /* PERSIAN */; - case 'simp-chinese-formal': - return 41 /* SIMP_CHINESE_FORMAL */; - case 'simp-chinese-informal': - return 42 /* SIMP_CHINESE_INFORMAL */; - case 'tamil': - return 43 /* TAMIL */; - case 'telugu': - return 44 /* TELUGU */; - case 'thai': - return 45 /* THAI */; - case 'tibetan': - return 46 /* TIBETAN */; - case 'trad-chinese-formal': - return 47 /* TRAD_CHINESE_FORMAL */; - case 'trad-chinese-informal': - return 48 /* TRAD_CHINESE_INFORMAL */; - case 'upper-armenian': - return 49 /* UPPER_ARMENIAN */; - case 'disclosure-open': - return 50 /* DISCLOSURE_OPEN */; - case 'disclosure-closed': - return 51 /* DISCLOSURE_CLOSED */; - case 
'none': - default: - return -1 /* NONE */; - } - } - }; - - var marginForSide = function (side) { return ({ - name: "margin-" + side, - initialValue: '0', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }); }; - var marginTop = marginForSide('top'); - var marginRight = marginForSide('right'); - var marginBottom = marginForSide('bottom'); - var marginLeft = marginForSide('left'); - - var overflow = { - name: 'overflow', - initialValue: 'visible', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (overflow) { - switch (overflow.value) { - case 'hidden': - return 1 /* HIDDEN */; - case 'scroll': - return 2 /* SCROLL */; - case 'clip': - return 3 /* CLIP */; - case 'auto': - return 4 /* AUTO */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - }); - } - }; - - var overflowWrap = { - name: 'overflow-wrap', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'break-word': - return "break-word" /* BREAK_WORD */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var paddingForSide = function (side) { return ({ - name: "padding-" + side, - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length-percentage' - }); }; - var paddingTop = paddingForSide('top'); - var paddingRight = paddingForSide('right'); - var paddingBottom = paddingForSide('bottom'); - var paddingLeft = paddingForSide('left'); - - var textAlign = { - name: 'text-align', - initialValue: 'left', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textAlign) { - switch (textAlign) { - case 'right': - return 2 /* RIGHT */; - case 'center': - case 'justify': - return 1 /* CENTER */; - case 'left': - default: - return 0 /* LEFT */; - } - } - }; - - var position = { - name: 'position', - initialValue: 'static', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'relative': - return 1 /* RELATIVE */; - case 'absolute': - return 2 /* ABSOLUTE */; - case 'fixed': - return 3 /* FIXED */; - case 'sticky': - return 4 /* STICKY */; - } - return 0 /* STATIC */; - } - }; - - var textShadow = { - name: 'text-shadow', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 1 && isIdentWithValue(tokens[0], 'none')) { - return []; - } - return parseFunctionArgs(tokens).map(function (values) { - var shadow = { - color: COLORS.TRANSPARENT, - offsetX: ZERO_LENGTH, - offsetY: ZERO_LENGTH, - blur: ZERO_LENGTH - }; - var c = 0; - for (var i = 0; i < values.length; i++) { - var token = values[i]; - if (isLength(token)) { - if (c === 0) { - shadow.offsetX = token; - } - else if (c === 1) { - shadow.offsetY = token; - } - else { - shadow.blur = token; - } - c++; - } - else { - shadow.color = color$1.parse(context, token); - } - } - return shadow; - }); - } - }; - - var textTransform = { - name: 'text-transform', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textTransform) { - switch (textTransform) { - case 'uppercase': - return 2 /* UPPERCASE */; - case 'lowercase': - return 1 /* LOWERCASE */; - case 'capitalize': - return 3 /* CAPITALIZE */; - } - return 0 /* NONE */; - } - }; - - var transform$1 = { - name: 'transform', - initialValue: 'none', - prefix: true, - type: 0 /* VALUE */, - parse: function (_context, token) { - if 
(token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - if (token.type === 18 /* FUNCTION */) { - var transformFunction = SUPPORTED_TRANSFORM_FUNCTIONS[token.name]; - if (typeof transformFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported transform function \"" + token.name + "\""); - } - return transformFunction(token.values); - } - return null; - } - }; - var matrix = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - return values.length === 6 ? values : null; - }; - // doesn't support 3D transforms at the moment - var matrix3d = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - var a1 = values[0], b1 = values[1]; values[2]; values[3]; var a2 = values[4], b2 = values[5]; values[6]; values[7]; values[8]; values[9]; values[10]; values[11]; var a4 = values[12], b4 = values[13]; values[14]; values[15]; - return values.length === 16 ? [a1, b1, a2, b2, a4, b4] : null; - }; - var SUPPORTED_TRANSFORM_FUNCTIONS = { - matrix: matrix, - matrix3d: matrix3d - }; - - var DEFAULT_VALUE = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var DEFAULT = [DEFAULT_VALUE, DEFAULT_VALUE]; - var transformOrigin = { - name: 'transform-origin', - initialValue: '50% 50%', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var origins = tokens.filter(isLengthPercentage); - if (origins.length !== 2) { - return DEFAULT; - } - return [origins[0], origins[1]]; - } - }; - - var visibility = { - name: 'visible', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, visibility) { - switch (visibility) { - case 'hidden': - return 1 /* HIDDEN */; - case 'collapse': - return 2 /* COLLAPSE */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - } - }; - - var WORD_BREAK; - (function (WORD_BREAK) { - WORD_BREAK["NORMAL"] = "normal"; - WORD_BREAK["BREAK_ALL"] = "break-all"; - WORD_BREAK["KEEP_ALL"] = "keep-all"; - })(WORD_BREAK || (WORD_BREAK = {})); - var wordBreak = { - name: 'word-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, wordBreak) { - switch (wordBreak) { - case 'break-all': - return WORD_BREAK.BREAK_ALL; - case 'keep-all': - return WORD_BREAK.KEEP_ALL; - case 'normal': - default: - return WORD_BREAK.NORMAL; - } - } - }; - - var zIndex = { - name: 'z-index', - initialValue: 'auto', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */) { - return { auto: true, order: 0 }; - } - if (isNumberToken(token)) { - return { auto: false, order: token.number }; - } - throw new Error("Invalid z-index number parsed"); - } - }; - - var time = { - name: 'time', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit.toLowerCase()) { - case 's': - return 1000 * value.number; - case 'ms': - return value.number; - } - } - throw new Error("Unsupported time type"); - } - }; - - var opacity = { - name: 'opacity', - initialValue: '1', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - return 1; - } - }; - - var textDecorationColor = { - name: "text-decoration-color", - initialValue: 'transparent', - 
prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var textDecorationLine = { - name: 'text-decoration-line', - initialValue: 'none', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens - .filter(isIdentToken) - .map(function (token) { - switch (token.value) { - case 'underline': - return 1 /* UNDERLINE */; - case 'overline': - return 2 /* OVERLINE */; - case 'line-through': - return 3 /* LINE_THROUGH */; - case 'none': - return 4 /* BLINK */; - } - return 0 /* NONE */; - }) - .filter(function (line) { return line !== 0 /* NONE */; }); - } - }; - - var fontFamily = { - name: "font-family", - initialValue: '', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var accumulator = []; - var results = []; - tokens.forEach(function (token) { - switch (token.type) { - case 20 /* IDENT_TOKEN */: - case 0 /* STRING_TOKEN */: - accumulator.push(token.value); - break; - case 17 /* NUMBER_TOKEN */: - accumulator.push(token.number.toString()); - break; - case 4 /* COMMA_TOKEN */: - results.push(accumulator.join(' ')); - accumulator.length = 0; - break; - } - }); - if (accumulator.length) { - results.push(accumulator.join(' ')); - } - return results.map(function (result) { return (result.indexOf(' ') === -1 ? result : "'" + result + "'"); }); - } - }; - - var fontSize = { - name: "font-size", - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length' - }; - - var fontWeight = { - name: 'font-weight', - initialValue: 'normal', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - if (isIdentToken(token)) { - switch (token.value) { - case 'bold': - return 700; - case 'normal': - default: - return 400; - } - } - return 400; - } - }; - - var fontVariant = { - name: 'font-variant', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (token) { return token.value; }); - } - }; - - var fontStyle = { - name: 'font-style', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'oblique': - return "oblique" /* OBLIQUE */; - case 'italic': - return "italic" /* ITALIC */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var contains = function (bit, value) { return (bit & value) !== 0; }; - - var content = { - name: 'content', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens; - } - }; - - var counterIncrement = { - name: 'counter-increment', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var increments = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (counter.type === 20 /* IDENT_TOKEN */) { - var increment = next && isNumberToken(next) ? 
next.number : 1; - increments.push({ counter: counter.value, increment: increment }); - } - } - return increments; - } - }; - - var counterReset = { - name: 'counter-reset', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var resets = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (isIdentToken(counter) && counter.value !== 'none') { - var reset = next && isNumberToken(next) ? next.number : 0; - resets.push({ counter: counter.value, reset: reset }); - } - } - return resets; - } - }; - - var duration = { - name: 'duration', - initialValue: '0s', - prefix: false, - type: 1 /* LIST */, - parse: function (context, tokens) { - return tokens.filter(isDimensionToken).map(function (token) { return time.parse(context, token); }); - } - }; - - var quotes = { - name: 'quotes', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var quotes = []; - var filtered = tokens.filter(isStringToken); - if (filtered.length % 2 !== 0) { - return null; - } - for (var i = 0; i < filtered.length; i += 2) { - var open_1 = filtered[i].value; - var close_1 = filtered[i + 1].value; - quotes.push({ open: open_1, close: close_1 }); - } - return quotes; - } - }; - var getQuote = function (quotes, depth, open) { - if (!quotes) { - return ''; - } - var quote = quotes[Math.min(depth, quotes.length - 1)]; - if (!quote) { - return ''; - } - return open ? quote.open : quote.close; - }; - - var paintOrder = { - name: 'paint-order', - initialValue: 'normal', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var DEFAULT_VALUE = [0 /* FILL */, 1 /* STROKE */, 2 /* MARKERS */]; - var layers = []; - tokens.filter(isIdentToken).forEach(function (token) { - switch (token.value) { - case 'stroke': - layers.push(1 /* STROKE */); - break; - case 'fill': - layers.push(0 /* FILL */); - break; - case 'markers': - layers.push(2 /* MARKERS */); - break; - } - }); - DEFAULT_VALUE.forEach(function (value) { - if (layers.indexOf(value) === -1) { - layers.push(value); - } - }); - return layers; - } - }; - - var webkitTextStrokeColor = { - name: "-webkit-text-stroke-color", - initialValue: 'currentcolor', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var webkitTextStrokeWidth = { - name: "-webkit-text-stroke-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }; - - var CSSParsedDeclaration = /** @class */ (function () { - function CSSParsedDeclaration(context, declaration) { - var _a, _b; - this.animationDuration = parse(context, duration, declaration.animationDuration); - this.backgroundClip = parse(context, backgroundClip, declaration.backgroundClip); - this.backgroundColor = parse(context, backgroundColor, declaration.backgroundColor); - this.backgroundImage = parse(context, backgroundImage, declaration.backgroundImage); - this.backgroundOrigin = parse(context, backgroundOrigin, declaration.backgroundOrigin); - this.backgroundPosition = parse(context, backgroundPosition, declaration.backgroundPosition); - this.backgroundRepeat = 
parse(context, backgroundRepeat, declaration.backgroundRepeat); - this.backgroundSize = parse(context, backgroundSize, declaration.backgroundSize); - this.borderTopColor = parse(context, borderTopColor, declaration.borderTopColor); - this.borderRightColor = parse(context, borderRightColor, declaration.borderRightColor); - this.borderBottomColor = parse(context, borderBottomColor, declaration.borderBottomColor); - this.borderLeftColor = parse(context, borderLeftColor, declaration.borderLeftColor); - this.borderTopLeftRadius = parse(context, borderTopLeftRadius, declaration.borderTopLeftRadius); - this.borderTopRightRadius = parse(context, borderTopRightRadius, declaration.borderTopRightRadius); - this.borderBottomRightRadius = parse(context, borderBottomRightRadius, declaration.borderBottomRightRadius); - this.borderBottomLeftRadius = parse(context, borderBottomLeftRadius, declaration.borderBottomLeftRadius); - this.borderTopStyle = parse(context, borderTopStyle, declaration.borderTopStyle); - this.borderRightStyle = parse(context, borderRightStyle, declaration.borderRightStyle); - this.borderBottomStyle = parse(context, borderBottomStyle, declaration.borderBottomStyle); - this.borderLeftStyle = parse(context, borderLeftStyle, declaration.borderLeftStyle); - this.borderTopWidth = parse(context, borderTopWidth, declaration.borderTopWidth); - this.borderRightWidth = parse(context, borderRightWidth, declaration.borderRightWidth); - this.borderBottomWidth = parse(context, borderBottomWidth, declaration.borderBottomWidth); - this.borderLeftWidth = parse(context, borderLeftWidth, declaration.borderLeftWidth); - this.color = parse(context, color, declaration.color); - this.direction = parse(context, direction, declaration.direction); - this.display = parse(context, display, declaration.display); - this.float = parse(context, float, declaration.cssFloat); - this.fontFamily = parse(context, fontFamily, declaration.fontFamily); - this.fontSize = parse(context, fontSize, declaration.fontSize); - this.fontStyle = parse(context, fontStyle, declaration.fontStyle); - this.fontVariant = parse(context, fontVariant, declaration.fontVariant); - this.fontWeight = parse(context, fontWeight, declaration.fontWeight); - this.letterSpacing = parse(context, letterSpacing, declaration.letterSpacing); - this.lineBreak = parse(context, lineBreak, declaration.lineBreak); - this.lineHeight = parse(context, lineHeight, declaration.lineHeight); - this.listStyleImage = parse(context, listStyleImage, declaration.listStyleImage); - this.listStylePosition = parse(context, listStylePosition, declaration.listStylePosition); - this.listStyleType = parse(context, listStyleType, declaration.listStyleType); - this.marginTop = parse(context, marginTop, declaration.marginTop); - this.marginRight = parse(context, marginRight, declaration.marginRight); - this.marginBottom = parse(context, marginBottom, declaration.marginBottom); - this.marginLeft = parse(context, marginLeft, declaration.marginLeft); - this.opacity = parse(context, opacity, declaration.opacity); - var overflowTuple = parse(context, overflow, declaration.overflow); - this.overflowX = overflowTuple[0]; - this.overflowY = overflowTuple[overflowTuple.length > 1 ? 
1 : 0]; - this.overflowWrap = parse(context, overflowWrap, declaration.overflowWrap); - this.paddingTop = parse(context, paddingTop, declaration.paddingTop); - this.paddingRight = parse(context, paddingRight, declaration.paddingRight); - this.paddingBottom = parse(context, paddingBottom, declaration.paddingBottom); - this.paddingLeft = parse(context, paddingLeft, declaration.paddingLeft); - this.paintOrder = parse(context, paintOrder, declaration.paintOrder); - this.position = parse(context, position, declaration.position); - this.textAlign = parse(context, textAlign, declaration.textAlign); - this.textDecorationColor = parse(context, textDecorationColor, (_a = declaration.textDecorationColor) !== null && _a !== void 0 ? _a : declaration.color); - this.textDecorationLine = parse(context, textDecorationLine, (_b = declaration.textDecorationLine) !== null && _b !== void 0 ? _b : declaration.textDecoration); - this.textShadow = parse(context, textShadow, declaration.textShadow); - this.textTransform = parse(context, textTransform, declaration.textTransform); - this.transform = parse(context, transform$1, declaration.transform); - this.transformOrigin = parse(context, transformOrigin, declaration.transformOrigin); - this.visibility = parse(context, visibility, declaration.visibility); - this.webkitTextStrokeColor = parse(context, webkitTextStrokeColor, declaration.webkitTextStrokeColor); - this.webkitTextStrokeWidth = parse(context, webkitTextStrokeWidth, declaration.webkitTextStrokeWidth); - this.wordBreak = parse(context, wordBreak, declaration.wordBreak); - this.zIndex = parse(context, zIndex, declaration.zIndex); - } - CSSParsedDeclaration.prototype.isVisible = function () { - return this.display > 0 && this.opacity > 0 && this.visibility === 0 /* VISIBLE */; - }; - CSSParsedDeclaration.prototype.isTransparent = function () { - return isTransparent(this.backgroundColor); - }; - CSSParsedDeclaration.prototype.isTransformed = function () { - return this.transform !== null; - }; - CSSParsedDeclaration.prototype.isPositioned = function () { - return this.position !== 0 /* STATIC */; - }; - CSSParsedDeclaration.prototype.isPositionedWithZIndex = function () { - return this.isPositioned() && !this.zIndex.auto; - }; - CSSParsedDeclaration.prototype.isFloating = function () { - return this.float !== 0 /* NONE */; - }; - CSSParsedDeclaration.prototype.isInlineLevel = function () { - return (contains(this.display, 4 /* INLINE */) || - contains(this.display, 33554432 /* INLINE_BLOCK */) || - contains(this.display, 268435456 /* INLINE_FLEX */) || - contains(this.display, 536870912 /* INLINE_GRID */) || - contains(this.display, 67108864 /* INLINE_LIST_ITEM */) || - contains(this.display, 134217728 /* INLINE_TABLE */)); - }; - return CSSParsedDeclaration; - }()); - var CSSParsedPseudoDeclaration = /** @class */ (function () { - function CSSParsedPseudoDeclaration(context, declaration) { - this.content = parse(context, content, declaration.content); - this.quotes = parse(context, quotes, declaration.quotes); - } - return CSSParsedPseudoDeclaration; - }()); - var CSSParsedCounterDeclaration = /** @class */ (function () { - function CSSParsedCounterDeclaration(context, declaration) { - this.counterIncrement = parse(context, counterIncrement, declaration.counterIncrement); - this.counterReset = parse(context, counterReset, declaration.counterReset); - } - return CSSParsedCounterDeclaration; - }()); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var parse = function (context, descriptor, 
style) { - var tokenizer = new Tokenizer(); - var value = style !== null && typeof style !== 'undefined' ? style.toString() : descriptor.initialValue; - tokenizer.write(value); - var parser = new Parser(tokenizer.read()); - switch (descriptor.type) { - case 2 /* IDENT_VALUE */: - var token = parser.parseComponentValue(); - return descriptor.parse(context, isIdentToken(token) ? token.value : descriptor.initialValue); - case 0 /* VALUE */: - return descriptor.parse(context, parser.parseComponentValue()); - case 1 /* LIST */: - return descriptor.parse(context, parser.parseComponentValues()); - case 4 /* TOKEN_VALUE */: - return parser.parseComponentValue(); - case 3 /* TYPE_VALUE */: - switch (descriptor.format) { - case 'angle': - return angle.parse(context, parser.parseComponentValue()); - case 'color': - return color$1.parse(context, parser.parseComponentValue()); - case 'image': - return image.parse(context, parser.parseComponentValue()); - case 'length': - var length_1 = parser.parseComponentValue(); - return isLength(length_1) ? length_1 : ZERO_LENGTH; - case 'length-percentage': - var value_1 = parser.parseComponentValue(); - return isLengthPercentage(value_1) ? value_1 : ZERO_LENGTH; - case 'time': - return time.parse(context, parser.parseComponentValue()); - } - break; - } - }; - - var elementDebuggerAttribute = 'data-html2canvas-debug'; - var getElementDebugType = function (element) { - var attribute = element.getAttribute(elementDebuggerAttribute); - switch (attribute) { - case 'all': - return 1 /* ALL */; - case 'clone': - return 2 /* CLONE */; - case 'parse': - return 3 /* PARSE */; - case 'render': - return 4 /* RENDER */; - default: - return 0 /* NONE */; - } - }; - var isDebugging = function (element, type) { - var elementType = getElementDebugType(element); - return elementType === 1 /* ALL */ || type === elementType; - }; - - var ElementContainer = /** @class */ (function () { - function ElementContainer(context, element) { - this.context = context; - this.textNodes = []; - this.elements = []; - this.flags = 0; - if (isDebugging(element, 3 /* PARSE */)) { - debugger; - } - this.styles = new CSSParsedDeclaration(context, window.getComputedStyle(element, null)); - if (isHTMLElementNode(element)) { - if (this.styles.animationDuration.some(function (duration) { return duration > 0; })) { - element.style.animationDuration = '0s'; - } - if (this.styles.transform !== null) { - // getBoundingClientRect takes transforms into account - element.style.transform = 'none'; - } - } - this.bounds = parseBounds(this.context, element); - if (isDebugging(element, 4 /* RENDER */)) { - this.flags |= 16 /* DEBUG_RENDER */; - } - } - return ElementContainer; - }()); - - /* - * text-segmentation 1.0.3 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var base64 = 
'AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACA
AIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEAAjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAFoECAAIAAgACAAIA
AgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJEDCAAIAAgACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAAAAAAAAAAAAAAAAA
AAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHAAcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAA
UABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcABwAFAAUABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA='; - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1 = 0; i$1 < chars$1.length; i$1++) { - lookup$1[chars$1.charCodeAt(i$1)] = i$1; - } - var decode = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1[base64.charCodeAt(i)]; - encoded2 = lookup$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. 
- */ - var UTRIE2_INDEX_SHIFT = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2 = UTRIE2_SHIFT_1 - UTRIE2_SHIFT_2; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET = 0x10000 >> UTRIE2_SHIFT_2; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_2; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK = UTRIE2_DATA_BLOCK_LENGTH - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH = 0x400 >> UTRIE2_SHIFT_2; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH = UTRIE2_LSCP_INDEX_2_OFFSET + UTRIE2_LSCP_INDEX_2_LENGTH; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET = UTRIE2_INDEX_2_BMP_LENGTH; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET = UTRIE2_UTF8_2B_INDEX_2_OFFSET + UTRIE2_UTF8_2B_INDEX_2_LENGTH; - /** - * Number of index-1 entries for the BMP. 32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH = 0x10000 >> UTRIE2_SHIFT_1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_1_2; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK = UTRIE2_INDEX_2_BLOCK_LENGTH - 1; - var slice16 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64 = function (base64, _byteLength) { - var buffer = decode(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? 
slice16(view16, (headerLength + view32[4]) / 2) - : slice32(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2)]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH + (codePoint >> UTRIE2_SHIFT_1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2) & UTRIE2_INDEX_2_MASK; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. - return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup = typeof Uint8Array === 'undefined' ? 
[] : new Uint8Array(256); - for (var i = 0; i < chars.length; i++) { - lookup[chars.charCodeAt(i)] = i; - } - - var Prepend = 1; - var CR = 2; - var LF = 3; - var Control = 4; - var Extend = 5; - var SpacingMark = 7; - var L = 8; - var V = 9; - var T = 10; - var LV = 11; - var LVT = 12; - var ZWJ = 13; - var Extended_Pictographic = 14; - var RI = 15; - var toCodePoints = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var UnicodeTrie = createTrieFromBase64(base64); - var BREAK_NOT_ALLOWED = '×'; - var BREAK_ALLOWED = '÷'; - var codePointToClass = function (codePoint) { return UnicodeTrie.get(codePoint); }; - var _graphemeBreakAtIndex = function (_codePoints, classTypes, index) { - var prevIndex = index - 2; - var prev = classTypes[prevIndex]; - var current = classTypes[index - 1]; - var next = classTypes[index]; - // GB3 Do not break between a CR and LF - if (current === CR && next === LF) { - return BREAK_NOT_ALLOWED; - } - // GB4 Otherwise, break before and after controls. - if (current === CR || current === LF || current === Control) { - return BREAK_ALLOWED; - } - // GB5 - if (next === CR || next === LF || next === Control) { - return BREAK_ALLOWED; - } - // Do not break Hangul syllable sequences. - // GB6 - if (current === L && [L, V, LV, LVT].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED; - } - // GB7 - if ((current === LV || current === V) && (next === V || next === T)) { - return BREAK_NOT_ALLOWED; - } - // GB8 - if ((current === LVT || current === T) && next === T) { - return BREAK_NOT_ALLOWED; - } - // GB9 Do not break before extending characters or ZWJ. - if (next === ZWJ || next === Extend) { - return BREAK_NOT_ALLOWED; - } - // Do not break before SpacingMarks, or after Prepend characters. - // GB9a - if (next === SpacingMark) { - return BREAK_NOT_ALLOWED; - } - // GB9a - if (current === Prepend) { - return BREAK_NOT_ALLOWED; - } - // GB11 Do not break within emoji modifier sequences or emoji zwj sequences. - if (current === ZWJ && next === Extended_Pictographic) { - while (prev === Extend) { - prev = classTypes[--prevIndex]; - } - if (prev === Extended_Pictographic) { - return BREAK_NOT_ALLOWED; - } - } - // GB12 Do not break within emoji flag sequences. - // That is, do not break between regional indicator (RI) symbols - // if there is an odd number of RI characters before the break point. 
- if (current === RI && next === RI) { - var countRI = 0; - while (prev === RI) { - countRI++; - prev = classTypes[--prevIndex]; - } - if (countRI % 2 === 0) { - return BREAK_NOT_ALLOWED; - } - } - return BREAK_ALLOWED; - }; - var GraphemeBreaker = function (str) { - var codePoints = toCodePoints(str); - var length = codePoints.length; - var index = 0; - var lastEnd = 0; - var classTypes = codePoints.map(codePointToClass); - return { - next: function () { - if (index >= length) { - return { done: true, value: null }; - } - var graphemeBreak = BREAK_NOT_ALLOWED; - while (index < length && - (graphemeBreak = _graphemeBreakAtIndex(codePoints, classTypes, ++index)) === BREAK_NOT_ALLOWED) { } - if (graphemeBreak !== BREAK_NOT_ALLOWED || index === length) { - var value = fromCodePoint.apply(null, codePoints.slice(lastEnd, index)); - lastEnd = index; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - var splitGraphemes = function (str) { - var breaker = GraphemeBreaker(str); - var graphemes = []; - var bk; - while (!(bk = breaker.next()).done) { - if (bk.value) { - graphemes.push(bk.value.slice()); - } - } - return graphemes; - }; - - var testRangeBounds = function (document) { - var TEST_HEIGHT = 123; - if (document.createRange) { - var range = document.createRange(); - if (range.getBoundingClientRect) { - var testElement = document.createElement('boundtest'); - testElement.style.height = TEST_HEIGHT + "px"; - testElement.style.display = 'block'; - document.body.appendChild(testElement); - range.selectNode(testElement); - var rangeBounds = range.getBoundingClientRect(); - var rangeHeight = Math.round(rangeBounds.height); - document.body.removeChild(testElement); - if (rangeHeight === TEST_HEIGHT) { - return true; - } - } - } - return false; - }; - var testIOSLineBreak = function (document) { - var testElement = document.createElement('boundtest'); - testElement.style.width = '50px'; - testElement.style.display = 'block'; - testElement.style.fontSize = '12px'; - testElement.style.letterSpacing = '0px'; - testElement.style.wordSpacing = '0px'; - document.body.appendChild(testElement); - var range = document.createRange(); - testElement.innerHTML = typeof ''.repeat === 'function' ? 
'👨'.repeat(10) : ''; - var node = testElement.firstChild; - var textList = toCodePoints$1(node.data).map(function (i) { return fromCodePoint$1(i); }); - var offset = 0; - var prev = {}; - // ios 13 does not handle range getBoundingClientRect line changes correctly #2177 - var supports = textList.every(function (text, i) { - range.setStart(node, offset); - range.setEnd(node, offset + text.length); - var rect = range.getBoundingClientRect(); - offset += text.length; - var boundAhead = rect.x > prev.x || rect.y > prev.y; - prev = rect; - if (i === 0) { - return true; - } - return boundAhead; - }); - document.body.removeChild(testElement); - return supports; - }; - var testCORS = function () { return typeof new Image().crossOrigin !== 'undefined'; }; - var testResponseType = function () { return typeof new XMLHttpRequest().responseType === 'string'; }; - var testSVG = function (document) { - var img = new Image(); - var canvas = document.createElement('canvas'); - var ctx = canvas.getContext('2d'); - if (!ctx) { - return false; - } - img.src = "data:image/svg+xml,"; - try { - ctx.drawImage(img, 0, 0); - canvas.toDataURL(); - } - catch (e) { - return false; - } - return true; - }; - var isGreenPixel = function (data) { - return data[0] === 0 && data[1] === 255 && data[2] === 0 && data[3] === 255; - }; - var testForeignObject = function (document) { - var canvas = document.createElement('canvas'); - var size = 100; - canvas.width = size; - canvas.height = size; - var ctx = canvas.getContext('2d'); - if (!ctx) { - return Promise.reject(false); - } - ctx.fillStyle = 'rgb(0, 255, 0)'; - ctx.fillRect(0, 0, size, size); - var img = new Image(); - var greenImageSrc = canvas.toDataURL(); - img.src = greenImageSrc; - var svg = createForeignObjectSVG(size, size, 0, 0, img); - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - return loadSerializedSVG$1(svg) - .then(function (img) { - ctx.drawImage(img, 0, 0); - var data = ctx.getImageData(0, 0, size, size).data; - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - var node = document.createElement('div'); - node.style.backgroundImage = "url(" + greenImageSrc + ")"; - node.style.height = size + "px"; - // Firefox 55 does not render inline tags - return isGreenPixel(data) - ? 
loadSerializedSVG$1(createForeignObjectSVG(size, size, 0, 0, node)) - : Promise.reject(false); - }) - .then(function (img) { - ctx.drawImage(img, 0, 0); - // Edge does not render background-images - return isGreenPixel(ctx.getImageData(0, 0, size, size).data); - }) - .catch(function () { return false; }); - }; - var createForeignObjectSVG = function (width, height, x, y, node) { - var xmlns = 'http://www.w3.org/2000/svg'; - var svg = document.createElementNS(xmlns, 'svg'); - var foreignObject = document.createElementNS(xmlns, 'foreignObject'); - svg.setAttributeNS(null, 'width', width.toString()); - svg.setAttributeNS(null, 'height', height.toString()); - foreignObject.setAttributeNS(null, 'width', '100%'); - foreignObject.setAttributeNS(null, 'height', '100%'); - foreignObject.setAttributeNS(null, 'x', x.toString()); - foreignObject.setAttributeNS(null, 'y', y.toString()); - foreignObject.setAttributeNS(null, 'externalResourcesRequired', 'true'); - svg.appendChild(foreignObject); - foreignObject.appendChild(node); - return svg; - }; - var loadSerializedSVG$1 = function (svg) { - return new Promise(function (resolve, reject) { - var img = new Image(); - img.onload = function () { return resolve(img); }; - img.onerror = reject; - img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(new XMLSerializer().serializeToString(svg)); - }); - }; - var FEATURES = { - get SUPPORT_RANGE_BOUNDS() { - var value = testRangeBounds(document); - Object.defineProperty(FEATURES, 'SUPPORT_RANGE_BOUNDS', { value: value }); - return value; - }, - get SUPPORT_WORD_BREAKING() { - var value = FEATURES.SUPPORT_RANGE_BOUNDS && testIOSLineBreak(document); - Object.defineProperty(FEATURES, 'SUPPORT_WORD_BREAKING', { value: value }); - return value; - }, - get SUPPORT_SVG_DRAWING() { - var value = testSVG(document); - Object.defineProperty(FEATURES, 'SUPPORT_SVG_DRAWING', { value: value }); - return value; - }, - get SUPPORT_FOREIGNOBJECT_DRAWING() { - var value = typeof Array.from === 'function' && typeof window.fetch === 'function' - ? 
testForeignObject(document) - : Promise.resolve(false); - Object.defineProperty(FEATURES, 'SUPPORT_FOREIGNOBJECT_DRAWING', { value: value }); - return value; - }, - get SUPPORT_CORS_IMAGES() { - var value = testCORS(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_IMAGES', { value: value }); - return value; - }, - get SUPPORT_RESPONSE_TYPE() { - var value = testResponseType(); - Object.defineProperty(FEATURES, 'SUPPORT_RESPONSE_TYPE', { value: value }); - return value; - }, - get SUPPORT_CORS_XHR() { - var value = 'withCredentials' in new XMLHttpRequest(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_XHR', { value: value }); - return value; - }, - get SUPPORT_NATIVE_TEXT_SEGMENTATION() { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var value = !!(typeof Intl !== 'undefined' && Intl.Segmenter); - Object.defineProperty(FEATURES, 'SUPPORT_NATIVE_TEXT_SEGMENTATION', { value: value }); - return value; - } - }; - - var TextBounds = /** @class */ (function () { - function TextBounds(text, bounds) { - this.text = text; - this.bounds = bounds; - } - return TextBounds; - }()); - var parseTextBounds = function (context, value, styles, node) { - var textList = breakText(value, styles); - var textBounds = []; - var offset = 0; - textList.forEach(function (text) { - if (styles.textDecorationLine.length || text.trim().length > 0) { - if (FEATURES.SUPPORT_RANGE_BOUNDS) { - var clientRects = createRange(node, offset, text.length).getClientRects(); - if (clientRects.length > 1) { - var subSegments = segmentGraphemes(text); - var subOffset_1 = 0; - subSegments.forEach(function (subSegment) { - textBounds.push(new TextBounds(subSegment, Bounds.fromDOMRectList(context, createRange(node, subOffset_1 + offset, subSegment.length).getClientRects()))); - subOffset_1 += subSegment.length; - }); - } - else { - textBounds.push(new TextBounds(text, Bounds.fromDOMRectList(context, clientRects))); - } - } - else { - var replacementNode = node.splitText(text.length); - textBounds.push(new TextBounds(text, getWrapperBounds(context, node))); - node = replacementNode; - } - } - else if (!FEATURES.SUPPORT_RANGE_BOUNDS) { - node = node.splitText(text.length); - } - offset += text.length; - }); - return textBounds; - }; - var getWrapperBounds = function (context, node) { - var ownerDocument = node.ownerDocument; - if (ownerDocument) { - var wrapper = ownerDocument.createElement('html2canvaswrapper'); - wrapper.appendChild(node.cloneNode(true)); - var parentNode = node.parentNode; - if (parentNode) { - parentNode.replaceChild(wrapper, node); - var bounds = parseBounds(context, wrapper); - if (wrapper.firstChild) { - parentNode.replaceChild(wrapper.firstChild, wrapper); - } - return bounds; - } - } - return Bounds.EMPTY; - }; - var createRange = function (node, offset, length) { - var ownerDocument = node.ownerDocument; - if (!ownerDocument) { - throw new Error('Node has no owner document'); - } - var range = ownerDocument.createRange(); - range.setStart(node, offset); - range.setEnd(node, offset + length); - return range; - }; - var segmentGraphemes = function (value) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { granularity: 'grapheme' }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return splitGraphemes(value); - }; - var segmentWords = function (value, 
styles) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { - granularity: 'word' - }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return breakWords(value, styles); - }; - var breakText = function (value, styles) { - return styles.letterSpacing !== 0 ? segmentGraphemes(value) : segmentWords(value, styles); - }; - // https://drafts.csswg.org/css-text/#word-separator - var wordSeparators = [0x0020, 0x00a0, 0x1361, 0x10100, 0x10101, 0x1039, 0x1091]; - var breakWords = function (str, styles) { - var breaker = LineBreaker(str, { - lineBreak: styles.lineBreak, - wordBreak: styles.overflowWrap === "break-word" /* BREAK_WORD */ ? 'break-word' : styles.wordBreak - }); - var words = []; - var bk; - var _loop_1 = function () { - if (bk.value) { - var value = bk.value.slice(); - var codePoints = toCodePoints$1(value); - var word_1 = ''; - codePoints.forEach(function (codePoint) { - if (wordSeparators.indexOf(codePoint) === -1) { - word_1 += fromCodePoint$1(codePoint); - } - else { - if (word_1.length) { - words.push(word_1); - } - words.push(fromCodePoint$1(codePoint)); - word_1 = ''; - } - }); - if (word_1.length) { - words.push(word_1); - } - } - }; - while (!(bk = breaker.next()).done) { - _loop_1(); - } - return words; - }; - - var TextContainer = /** @class */ (function () { - function TextContainer(context, node, styles) { - this.text = transform(node.data, styles.textTransform); - this.textBounds = parseTextBounds(context, this.text, styles, node); - } - return TextContainer; - }()); - var transform = function (text, transform) { - switch (transform) { - case 1 /* LOWERCASE */: - return text.toLowerCase(); - case 3 /* CAPITALIZE */: - return text.replace(CAPITALIZE, capitalize); - case 2 /* UPPERCASE */: - return text.toUpperCase(); - default: - return text; - } - }; - var CAPITALIZE = /(^|\s|:|-|\(|\))([a-z])/g; - var capitalize = function (m, p1, p2) { - if (m.length > 0) { - return p1 + p2.toUpperCase(); - } - return m; - }; - - var ImageElementContainer = /** @class */ (function (_super) { - __extends(ImageElementContainer, _super); - function ImageElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - _this.src = img.currentSrc || img.src; - _this.intrinsicWidth = img.naturalWidth; - _this.intrinsicHeight = img.naturalHeight; - _this.context.cache.addImage(_this.src); - return _this; - } - return ImageElementContainer; - }(ElementContainer)); - - var CanvasElementContainer = /** @class */ (function (_super) { - __extends(CanvasElementContainer, _super); - function CanvasElementContainer(context, canvas) { - var _this = _super.call(this, context, canvas) || this; - _this.canvas = canvas; - _this.intrinsicWidth = canvas.width; - _this.intrinsicHeight = canvas.height; - return _this; - } - return CanvasElementContainer; - }(ElementContainer)); - - var SVGElementContainer = /** @class */ (function (_super) { - __extends(SVGElementContainer, _super); - function SVGElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - var s = new XMLSerializer(); - var bounds = parseBounds(context, img); - img.setAttribute('width', bounds.width + "px"); - img.setAttribute('height', bounds.height + "px"); - _this.svg = "data:image/svg+xml," + encodeURIComponent(s.serializeToString(img)); - 
_this.intrinsicWidth = img.width.baseVal.value; - _this.intrinsicHeight = img.height.baseVal.value; - _this.context.cache.addImage(_this.svg); - return _this; - } - return SVGElementContainer; - }(ElementContainer)); - - var LIElementContainer = /** @class */ (function (_super) { - __extends(LIElementContainer, _super); - function LIElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return LIElementContainer; - }(ElementContainer)); - - var OLElementContainer = /** @class */ (function (_super) { - __extends(OLElementContainer, _super); - function OLElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.start = element.start; - _this.reversed = typeof element.reversed === 'boolean' && element.reversed === true; - return _this; - } - return OLElementContainer; - }(ElementContainer)); - - var CHECKBOX_BORDER_RADIUS = [ - { - type: 15 /* DIMENSION_TOKEN */, - flags: 0, - unit: 'px', - number: 3 - } - ]; - var RADIO_BORDER_RADIUS = [ - { - type: 16 /* PERCENTAGE_TOKEN */, - flags: 0, - number: 50 - } - ]; - var reformatInputBounds = function (bounds) { - if (bounds.width > bounds.height) { - return new Bounds(bounds.left + (bounds.width - bounds.height) / 2, bounds.top, bounds.height, bounds.height); - } - else if (bounds.width < bounds.height) { - return new Bounds(bounds.left, bounds.top + (bounds.height - bounds.width) / 2, bounds.width, bounds.width); - } - return bounds; - }; - var getInputValue = function (node) { - var value = node.type === PASSWORD ? new Array(node.value.length + 1).join('\u2022') : node.value; - return value.length === 0 ? node.placeholder || '' : value; - }; - var CHECKBOX = 'checkbox'; - var RADIO = 'radio'; - var PASSWORD = 'password'; - var INPUT_COLOR = 0x2a2a2aff; - var InputElementContainer = /** @class */ (function (_super) { - __extends(InputElementContainer, _super); - function InputElementContainer(context, input) { - var _this = _super.call(this, context, input) || this; - _this.type = input.type.toLowerCase(); - _this.checked = input.checked; - _this.value = getInputValue(input); - if (_this.type === CHECKBOX || _this.type === RADIO) { - _this.styles.backgroundColor = 0xdededeff; - _this.styles.borderTopColor = - _this.styles.borderRightColor = - _this.styles.borderBottomColor = - _this.styles.borderLeftColor = - 0xa5a5a5ff; - _this.styles.borderTopWidth = - _this.styles.borderRightWidth = - _this.styles.borderBottomWidth = - _this.styles.borderLeftWidth = - 1; - _this.styles.borderTopStyle = - _this.styles.borderRightStyle = - _this.styles.borderBottomStyle = - _this.styles.borderLeftStyle = - 1 /* SOLID */; - _this.styles.backgroundClip = [0 /* BORDER_BOX */]; - _this.styles.backgroundOrigin = [0 /* BORDER_BOX */]; - _this.bounds = reformatInputBounds(_this.bounds); - } - switch (_this.type) { - case CHECKBOX: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - CHECKBOX_BORDER_RADIUS; - break; - case RADIO: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - RADIO_BORDER_RADIUS; - break; - } - return _this; - } - return InputElementContainer; - }(ElementContainer)); - - var SelectElementContainer = /** @class */ (function (_super) { - __extends(SelectElementContainer, _super); - function 
SelectElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - var option = element.options[element.selectedIndex || 0]; - _this.value = option ? option.text || '' : ''; - return _this; - } - return SelectElementContainer; - }(ElementContainer)); - - var TextareaElementContainer = /** @class */ (function (_super) { - __extends(TextareaElementContainer, _super); - function TextareaElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return TextareaElementContainer; - }(ElementContainer)); - - var IFrameElementContainer = /** @class */ (function (_super) { - __extends(IFrameElementContainer, _super); - function IFrameElementContainer(context, iframe) { - var _this = _super.call(this, context, iframe) || this; - _this.src = iframe.src; - _this.width = parseInt(iframe.width, 10) || 0; - _this.height = parseInt(iframe.height, 10) || 0; - _this.backgroundColor = _this.styles.backgroundColor; - try { - if (iframe.contentWindow && - iframe.contentWindow.document && - iframe.contentWindow.document.documentElement) { - _this.tree = parseTree(context, iframe.contentWindow.document.documentElement); - // http://www.w3.org/TR/css3-background/#special-backgrounds - var documentBackgroundColor = iframe.contentWindow.document.documentElement - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.documentElement).backgroundColor) - : COLORS.TRANSPARENT; - var bodyBackgroundColor = iframe.contentWindow.document.body - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.body).backgroundColor) - : COLORS.TRANSPARENT; - _this.backgroundColor = isTransparent(documentBackgroundColor) - ? isTransparent(bodyBackgroundColor) - ? 
_this.styles.backgroundColor - : bodyBackgroundColor - : documentBackgroundColor; - } - } - catch (e) { } - return _this; - } - return IFrameElementContainer; - }(ElementContainer)); - - var LIST_OWNERS = ['OL', 'UL', 'MENU']; - var parseNodeTree = function (context, node, parent, root) { - for (var childNode = node.firstChild, nextNode = void 0; childNode; childNode = nextNode) { - nextNode = childNode.nextSibling; - if (isTextNode(childNode) && childNode.data.trim().length > 0) { - parent.textNodes.push(new TextContainer(context, childNode, parent.styles)); - } - else if (isElementNode(childNode)) { - if (isSlotElement(childNode) && childNode.assignedNodes) { - childNode.assignedNodes().forEach(function (childNode) { return parseNodeTree(context, childNode, parent, root); }); - } - else { - var container = createContainer(context, childNode); - if (container.styles.isVisible()) { - if (createsRealStackingContext(childNode, container, root)) { - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - } - else if (createsStackingContext(container.styles)) { - container.flags |= 2 /* CREATES_STACKING_CONTEXT */; - } - if (LIST_OWNERS.indexOf(childNode.tagName) !== -1) { - container.flags |= 8 /* IS_LIST_OWNER */; - } - parent.elements.push(container); - childNode.slot; - if (childNode.shadowRoot) { - parseNodeTree(context, childNode.shadowRoot, container, root); - } - else if (!isTextareaElement(childNode) && - !isSVGElement(childNode) && - !isSelectElement(childNode)) { - parseNodeTree(context, childNode, container, root); - } - } - } - } - } - }; - var createContainer = function (context, element) { - if (isImageElement(element)) { - return new ImageElementContainer(context, element); - } - if (isCanvasElement(element)) { - return new CanvasElementContainer(context, element); - } - if (isSVGElement(element)) { - return new SVGElementContainer(context, element); - } - if (isLIElement(element)) { - return new LIElementContainer(context, element); - } - if (isOLElement(element)) { - return new OLElementContainer(context, element); - } - if (isInputElement(element)) { - return new InputElementContainer(context, element); - } - if (isSelectElement(element)) { - return new SelectElementContainer(context, element); - } - if (isTextareaElement(element)) { - return new TextareaElementContainer(context, element); - } - if (isIFrameElement(element)) { - return new IFrameElementContainer(context, element); - } - return new ElementContainer(context, element); - }; - var parseTree = function (context, element) { - var container = createContainer(context, element); - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - parseNodeTree(context, element, container, container); - return container; - }; - var createsRealStackingContext = function (node, container, root) { - return (container.styles.isPositionedWithZIndex() || - container.styles.opacity < 1 || - container.styles.isTransformed() || - (isBodyElement(node) && root.styles.isTransparent())); - }; - var createsStackingContext = function (styles) { return styles.isPositioned() || styles.isFloating(); }; - var isTextNode = function (node) { return node.nodeType === Node.TEXT_NODE; }; - var isElementNode = function (node) { return node.nodeType === Node.ELEMENT_NODE; }; - var isHTMLElementNode = function (node) { - return isElementNode(node) && typeof node.style !== 'undefined' && !isSVGElementNode(node); - }; - var isSVGElementNode = function (element) { - return typeof element.className === 'object'; - }; - var isLIElement = function 
(node) { return node.tagName === 'LI'; }; - var isOLElement = function (node) { return node.tagName === 'OL'; }; - var isInputElement = function (node) { return node.tagName === 'INPUT'; }; - var isHTMLElement = function (node) { return node.tagName === 'HTML'; }; - var isSVGElement = function (node) { return node.tagName === 'svg'; }; - var isBodyElement = function (node) { return node.tagName === 'BODY'; }; - var isCanvasElement = function (node) { return node.tagName === 'CANVAS'; }; - var isVideoElement = function (node) { return node.tagName === 'VIDEO'; }; - var isImageElement = function (node) { return node.tagName === 'IMG'; }; - var isIFrameElement = function (node) { return node.tagName === 'IFRAME'; }; - var isStyleElement = function (node) { return node.tagName === 'STYLE'; }; - var isScriptElement = function (node) { return node.tagName === 'SCRIPT'; }; - var isTextareaElement = function (node) { return node.tagName === 'TEXTAREA'; }; - var isSelectElement = function (node) { return node.tagName === 'SELECT'; }; - var isSlotElement = function (node) { return node.tagName === 'SLOT'; }; - // https://html.spec.whatwg.org/multipage/custom-elements.html#valid-custom-element-name - var isCustomElement = function (node) { return node.tagName.indexOf('-') > 0; }; - - var CounterState = /** @class */ (function () { - function CounterState() { - this.counters = {}; - } - CounterState.prototype.getCounterValue = function (name) { - var counter = this.counters[name]; - if (counter && counter.length) { - return counter[counter.length - 1]; - } - return 1; - }; - CounterState.prototype.getCounterValues = function (name) { - var counter = this.counters[name]; - return counter ? counter : []; - }; - CounterState.prototype.pop = function (counters) { - var _this = this; - counters.forEach(function (counter) { return _this.counters[counter].pop(); }); - }; - CounterState.prototype.parse = function (style) { - var _this = this; - var counterIncrement = style.counterIncrement; - var counterReset = style.counterReset; - var canReset = true; - if (counterIncrement !== null) { - counterIncrement.forEach(function (entry) { - var counter = _this.counters[entry.counter]; - if (counter && entry.increment !== 0) { - canReset = false; - if (!counter.length) { - counter.push(1); - } - counter[Math.max(0, counter.length - 1)] += entry.increment; - } - }); - } - var counterNames = []; - if (canReset) { - counterReset.forEach(function (entry) { - var counter = _this.counters[entry.counter]; - counterNames.push(entry.counter); - if (!counter) { - counter = _this.counters[entry.counter] = []; - } - counter.push(entry.reset); - }); - } - return counterNames; - }; - return CounterState; - }()); - var ROMAN_UPPER = { - integers: [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1], - values: ['M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I'] - }; - var ARMENIAN = { - integers: [ - 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, 80, 70, - 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 - ], - values: [ - 'Ք', - 'Փ', - 'Ւ', - 'Ց', - 'Ր', - 'Տ', - 'Վ', - 'Ս', - 'Ռ', - 'Ջ', - 'Պ', - 'Չ', - 'Ո', - 'Շ', - 'Ն', - 'Յ', - 'Մ', - 'Ճ', - 'Ղ', - 'Ձ', - 'Հ', - 'Կ', - 'Ծ', - 'Խ', - 'Լ', - 'Ի', - 'Ժ', - 'Թ', - 'Ը', - 'Է', - 'Զ', - 'Ե', - 'Դ', - 'Գ', - 'Բ', - 'Ա' - ] - }; - var HEBREW = { - integers: [ - 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20, - 19, 18, 17, 16, 15, 10, 9, 8, 7, 
6, 5, 4, 3, 2, 1 - ], - values: [ - 'י׳', - 'ט׳', - 'ח׳', - 'ז׳', - 'ו׳', - 'ה׳', - 'ד׳', - 'ג׳', - 'ב׳', - 'א׳', - 'ת', - 'ש', - 'ר', - 'ק', - 'צ', - 'פ', - 'ע', - 'ס', - 'נ', - 'מ', - 'ל', - 'כ', - 'יט', - 'יח', - 'יז', - 'טז', - 'טו', - 'י', - 'ט', - 'ח', - 'ז', - 'ו', - 'ה', - 'ד', - 'ג', - 'ב', - 'א' - ] - }; - var GEORGIAN = { - integers: [ - 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, - 80, 70, 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 - ], - values: [ - 'ჵ', - 'ჰ', - 'ჯ', - 'ჴ', - 'ხ', - 'ჭ', - 'წ', - 'ძ', - 'ც', - 'ჩ', - 'შ', - 'ყ', - 'ღ', - 'ქ', - 'ფ', - 'ჳ', - 'ტ', - 'ს', - 'რ', - 'ჟ', - 'პ', - 'ო', - 'ჲ', - 'ნ', - 'მ', - 'ლ', - 'კ', - 'ი', - 'თ', - 'ჱ', - 'ზ', - 'ვ', - 'ე', - 'დ', - 'გ', - 'ბ', - 'ა' - ] - }; - var createAdditiveCounter = function (value, min, max, symbols, fallback, suffix) { - if (value < min || value > max) { - return createCounterText(value, fallback, suffix.length > 0); - } - return (symbols.integers.reduce(function (string, integer, index) { - while (value >= integer) { - value -= integer; - string += symbols.values[index]; - } - return string; - }, '') + suffix); - }; - var createCounterStyleWithSymbolResolver = function (value, codePointRangeLength, isNumeric, resolver) { - var string = ''; - do { - if (!isNumeric) { - value--; - } - string = resolver(value) + string; - value /= codePointRangeLength; - } while (value * codePointRangeLength >= codePointRangeLength); - return string; - }; - var createCounterStyleFromRange = function (value, codePointRangeStart, codePointRangeEnd, isNumeric, suffix) { - var codePointRangeLength = codePointRangeEnd - codePointRangeStart + 1; - return ((value < 0 ? '-' : '') + - (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, isNumeric, function (codePoint) { - return fromCodePoint$1(Math.floor(codePoint % codePointRangeLength) + codePointRangeStart); - }) + - suffix)); - }; - var createCounterStyleFromSymbols = function (value, symbols, suffix) { - if (suffix === void 0) { suffix = '. '; } - var codePointRangeLength = symbols.length; - return (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, false, function (codePoint) { return symbols[Math.floor(codePoint % codePointRangeLength)]; }) + suffix); - }; - var CJK_ZEROS = 1 << 0; - var CJK_TEN_COEFFICIENTS = 1 << 1; - var CJK_TEN_HIGH_COEFFICIENTS = 1 << 2; - var CJK_HUNDRED_COEFFICIENTS = 1 << 3; - var createCJKCounter = function (value, numbers, multipliers, negativeSign, suffix, flags) { - if (value < -9999 || value > 9999) { - return createCounterText(value, 4 /* CJK_DECIMAL */, suffix.length > 0); - } - var tmp = Math.abs(value); - var string = suffix; - if (tmp === 0) { - return numbers[0] + string; - } - for (var digit = 0; tmp > 0 && digit <= 4; digit++) { - var coefficient = tmp % 10; - if (coefficient === 0 && contains(flags, CJK_ZEROS) && string !== '') { - string = numbers[coefficient] + string; - } - else if (coefficient > 1 || - (coefficient === 1 && digit === 0) || - (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_COEFFICIENTS)) || - (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_HIGH_COEFFICIENTS) && value > 100) || - (coefficient === 1 && digit > 1 && contains(flags, CJK_HUNDRED_COEFFICIENTS))) { - string = numbers[coefficient] + (digit > 0 ? 
multipliers[digit - 1] : '') + string; - } - else if (coefficient === 1 && digit > 0) { - string = multipliers[digit - 1] + string; - } - tmp = Math.floor(tmp / 10); - } - return (value < 0 ? negativeSign : '') + string; - }; - var CHINESE_INFORMAL_MULTIPLIERS = '十百千萬'; - var CHINESE_FORMAL_MULTIPLIERS = '拾佰仟萬'; - var JAPANESE_NEGATIVE = 'マイナス'; - var KOREAN_NEGATIVE = '마이너스'; - var createCounterText = function (value, type, appendSuffix) { - var defaultSuffix = appendSuffix ? '. ' : ''; - var cjkSuffix = appendSuffix ? '、' : ''; - var koreanSuffix = appendSuffix ? ', ' : ''; - var spaceSuffix = appendSuffix ? ' ' : ''; - switch (type) { - case 0 /* DISC */: - return '•' + spaceSuffix; - case 1 /* CIRCLE */: - return '◦' + spaceSuffix; - case 2 /* SQUARE */: - return '◾' + spaceSuffix; - case 5 /* DECIMAL_LEADING_ZERO */: - var string = createCounterStyleFromRange(value, 48, 57, true, defaultSuffix); - return string.length < 4 ? "0" + string : string; - case 4 /* CJK_DECIMAL */: - return createCounterStyleFromSymbols(value, '〇一二三四五六七八九', cjkSuffix); - case 6 /* LOWER_ROMAN */: - return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix).toLowerCase(); - case 7 /* UPPER_ROMAN */: - return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix); - case 8 /* LOWER_GREEK */: - return createCounterStyleFromRange(value, 945, 969, false, defaultSuffix); - case 9 /* LOWER_ALPHA */: - return createCounterStyleFromRange(value, 97, 122, false, defaultSuffix); - case 10 /* UPPER_ALPHA */: - return createCounterStyleFromRange(value, 65, 90, false, defaultSuffix); - case 11 /* ARABIC_INDIC */: - return createCounterStyleFromRange(value, 1632, 1641, true, defaultSuffix); - case 12 /* ARMENIAN */: - case 49 /* UPPER_ARMENIAN */: - return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix); - case 35 /* LOWER_ARMENIAN */: - return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix).toLowerCase(); - case 13 /* BENGALI */: - return createCounterStyleFromRange(value, 2534, 2543, true, defaultSuffix); - case 14 /* CAMBODIAN */: - case 30 /* KHMER */: - return createCounterStyleFromRange(value, 6112, 6121, true, defaultSuffix); - case 15 /* CJK_EARTHLY_BRANCH */: - return createCounterStyleFromSymbols(value, '子丑寅卯辰巳午未申酉戌亥', cjkSuffix); - case 16 /* CJK_HEAVENLY_STEM */: - return createCounterStyleFromSymbols(value, '甲乙丙丁戊己庚辛壬癸', cjkSuffix); - case 17 /* CJK_IDEOGRAPHIC */: - case 48 /* TRAD_CHINESE_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 47 /* TRAD_CHINESE_FORMAL */: - return createCJKCounter(value, '零壹貳參肆伍陸柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 42 /* SIMP_CHINESE_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 41 /* SIMP_CHINESE_FORMAL */: - return createCJKCounter(value, '零壹贰叁肆伍陆柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 26 /* JAPANESE_INFORMAL */: - return createCJKCounter(value, '〇一二三四五六七八九', '十百千万', JAPANESE_NEGATIVE, cjkSuffix, 0); - case 25 /* JAPANESE_FORMAL */: - return createCJKCounter(value, 
'零壱弐参四伍六七八九', '拾百千万', JAPANESE_NEGATIVE, cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 31 /* KOREAN_HANGUL_FORMAL */: - return createCJKCounter(value, '영일이삼사오육칠팔구', '십백천만', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 33 /* KOREAN_HANJA_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', '十百千萬', KOREAN_NEGATIVE, koreanSuffix, 0); - case 32 /* KOREAN_HANJA_FORMAL */: - return createCJKCounter(value, '零壹貳參四五六七八九', '拾百千', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 18 /* DEVANAGARI */: - return createCounterStyleFromRange(value, 0x966, 0x96f, true, defaultSuffix); - case 20 /* GEORGIAN */: - return createAdditiveCounter(value, 1, 19999, GEORGIAN, 3 /* DECIMAL */, defaultSuffix); - case 21 /* GUJARATI */: - return createCounterStyleFromRange(value, 0xae6, 0xaef, true, defaultSuffix); - case 22 /* GURMUKHI */: - return createCounterStyleFromRange(value, 0xa66, 0xa6f, true, defaultSuffix); - case 22 /* HEBREW */: - return createAdditiveCounter(value, 1, 10999, HEBREW, 3 /* DECIMAL */, defaultSuffix); - case 23 /* HIRAGANA */: - return createCounterStyleFromSymbols(value, 'あいうえおかきくけこさしすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろわゐゑをん'); - case 24 /* HIRAGANA_IROHA */: - return createCounterStyleFromSymbols(value, 'いろはにほへとちりぬるをわかよたれそつねならむうゐのおくやまけふこえてあさきゆめみしゑひもせす'); - case 27 /* KANNADA */: - return createCounterStyleFromRange(value, 0xce6, 0xcef, true, defaultSuffix); - case 28 /* KATAKANA */: - return createCounterStyleFromSymbols(value, 'アイウエオカキクケコサシスセソタチツテトナニヌネノハヒフヘホマミムメモヤユヨラリルレロワヰヱヲン', cjkSuffix); - case 29 /* KATAKANA_IROHA */: - return createCounterStyleFromSymbols(value, 'イロハニホヘトチリヌルヲワカヨタレソツネナラムウヰノオクヤマケフコエテアサキユメミシヱヒモセス', cjkSuffix); - case 34 /* LAO */: - return createCounterStyleFromRange(value, 0xed0, 0xed9, true, defaultSuffix); - case 37 /* MONGOLIAN */: - return createCounterStyleFromRange(value, 0x1810, 0x1819, true, defaultSuffix); - case 38 /* MYANMAR */: - return createCounterStyleFromRange(value, 0x1040, 0x1049, true, defaultSuffix); - case 39 /* ORIYA */: - return createCounterStyleFromRange(value, 0xb66, 0xb6f, true, defaultSuffix); - case 40 /* PERSIAN */: - return createCounterStyleFromRange(value, 0x6f0, 0x6f9, true, defaultSuffix); - case 43 /* TAMIL */: - return createCounterStyleFromRange(value, 0xbe6, 0xbef, true, defaultSuffix); - case 44 /* TELUGU */: - return createCounterStyleFromRange(value, 0xc66, 0xc6f, true, defaultSuffix); - case 45 /* THAI */: - return createCounterStyleFromRange(value, 0xe50, 0xe59, true, defaultSuffix); - case 46 /* TIBETAN */: - return createCounterStyleFromRange(value, 0xf20, 0xf29, true, defaultSuffix); - case 3 /* DECIMAL */: - default: - return createCounterStyleFromRange(value, 48, 57, true, defaultSuffix); - } - }; - - var IGNORE_ATTRIBUTE = 'data-html2canvas-ignore'; - var DocumentCloner = /** @class */ (function () { - function DocumentCloner(context, element, options) { - this.context = context; - this.options = options; - this.scrolledElements = []; - this.referenceElement = element; - this.counters = new CounterState(); - this.quoteDepth = 0; - if (!element.ownerDocument) { - throw new Error('Cloned element does not have an owner document'); - } - this.documentElement = this.cloneNode(element.ownerDocument.documentElement, false); - } - DocumentCloner.prototype.toIFrame = function (ownerDocument, windowSize) { - var _this = this; - var iframe = createIFrameContainer(ownerDocument, 
windowSize); - if (!iframe.contentWindow) { - return Promise.reject("Unable to find iframe window"); - } - var scrollX = ownerDocument.defaultView.pageXOffset; - var scrollY = ownerDocument.defaultView.pageYOffset; - var cloneWindow = iframe.contentWindow; - var documentClone = cloneWindow.document; - /* Chrome doesn't detect relative background-images assigned in inline - - - -
- -
- - - \ No newline at end of file diff --git a/spaces/notsq/diffuse-the-rest/build/_app/immutable/chunks/2-6ab63caf.js b/spaces/notsq/diffuse-the-rest/build/_app/immutable/chunks/2-6ab63caf.js deleted file mode 100644 index 01435cd888e3d6b3cf8645c01e8df844d9a470c1..0000000000000000000000000000000000000000 --- a/spaces/notsq/diffuse-the-rest/build/_app/immutable/chunks/2-6ab63caf.js +++ /dev/null @@ -1 +0,0 @@ -import{default as m}from"../components/pages/_page.svelte-1525ec40.js";import"./index-032ac624.js";export{m as component}; diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/data/train_dataset_single_edge.py b/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/data/train_dataset_single_edge.py deleted file mode 100644 index 37674ff0e00be8911568fb96fba108f2c43b6500..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/data/train_dataset_single_edge.py +++ /dev/null @@ -1,151 +0,0 @@ -import random - -import pickle -import logging -import torch -import cv2 -import os - -from torch.utils.data.dataset import Dataset -import numpy as np -from skimage.feature import canny -from .util.STTN_mask import create_random_shape_with_random_motion -from cvbase import read_flow, flow2rgb -from .util.flow_utils import region_fill as rf -import imageio - -logger = logging.getLogger('base') - - -class VideoBasedDataset(Dataset): - def __init__(self, opt, dataInfo): - self.opt = opt - self.mode = opt['mode'] - self.dataInfo = dataInfo - self.flow_height, self.flow_width = dataInfo['flow']['flow_height'], dataInfo['flow']['flow_width'] - self.data_path = dataInfo['flow_path'] - self.frame_path = dataInfo['frame_path'] - self.train_list = os.listdir(self.data_path) - self.name2length = self.dataInfo['name2len'] - self.require_edge = opt['use_edges'] - self.sigma = dataInfo['edge']['sigma'] - self.low_threshold = dataInfo['edge']['low_threshold'] - self.high_threshold = dataInfo['edge']['high_threshold'] - with open(self.name2length, 'rb') as f: - self.name2len = pickle.load(f) - self.norm = opt['norm'] - self.ternary_loss = opt.get('ternary', 0) - - def __len__(self): - return len(self.train_list) - - def __getitem__(self, idx): - try: - item = self.load_item(idx) - except: - print('Loading error: ' + self.train_list[idx]) - item = self.load_item(0) - return item - - def frameSample(self, flowLen): - pivot = random.randint(0, flowLen - 1) - return pivot - - def load_item(self, idx): - info = {} - video = self.train_list[idx] - info['name'] = video - if np.random.uniform(0, 1) > 0.5: - direction = 'forward_flo' - else: - direction = 'backward_flo' - flow_dir = os.path.join(self.data_path, video, direction) - frame_dir = os.path.join(self.frame_path, video) - flowLen = self.name2len[video] - 1 - pivot = self.frameSample(flowLen) - # generate random masks - candidateMasks = create_random_shape_with_random_motion(1, 0.9, 1.1, 1, - 10) - # read the flows and masks - flow = read_flow(os.path.join(flow_dir, '{:05d}.flo'.format(pivot))) - mask = self.read_mask(candidateMasks[0], self.flow_height, self.flow_width) - flow = self.flow_tf(flow, self.flow_height, self.flow_width) - diffused_flow = self.diffusion_fill(flow, mask) - current_frame, shift_frame = self.read_frames(frame_dir, pivot, direction, self.flow_width, - self.flow_height) - edge = self.load_edge(flow) - inputs = {'flows': flow, 'diffused_flows': diffused_flow, 'current_frame': current_frame, - 'shift_frame': shift_frame, 'edges': edge, 'masks': mask} - return self.to_tensor(inputs) - - def 
read_frames(self, frame_dir, index, direction, width, height): - if direction == 'forward_flo': - current_frame = os.path.join(frame_dir, '{:05d}.jpg'.format(index)) - shift_frame = os.path.join(frame_dir, '{:05d}.jpg'.format(index + 1)) - else: - current_frame = os.path.join(frame_dir, '{:05d}.jpg'.format(index + 1)) - shift_frame = os.path.join(frame_dir, '{:05d}.jpg'.format(index)) - current_frame = imageio.imread(current_frame) - shift_frame = imageio.imread(shift_frame) - current_frame = cv2.resize(current_frame, (width, height), cv2.INTER_LINEAR) - shift_frame = cv2.resize(shift_frame, (width, height), cv2.INTER_LINEAR) - current_frame = current_frame / 255. - shift_frame = shift_frame / 255. - return current_frame, shift_frame - - def diffusion_fill(self, flow, mask): - flow_filled = np.zeros(flow.shape) - flow_filled[:, :, 0] = rf.regionfill(flow[:, :, 0] * (1 - mask), mask) - flow_filled[:, :, 1] = rf.regionfill(flow[:, :, 1] * (1 - mask), mask) - return flow_filled - - def flow_tf(self, flow, height, width): - flow_shape = flow.shape - flow_resized = cv2.resize(flow, (width, height), cv2.INTER_LINEAR) - flow_resized[:, :, 0] *= (float(width) / float(flow_shape[1])) - flow_resized[:, :, 1] *= (float(height) / float(flow_shape[0])) - return flow_resized - - def read_mask(self, mask, height, width): - mask = np.array(mask) - mask = mask / 255. - raw_mask = (mask > 0.5).astype(np.uint8) - raw_mask = cv2.resize(raw_mask, dsize=(width, height), interpolation=cv2.INTER_NEAREST) - return raw_mask - - def load_edge(self, flow): - gray_flow = (flow[:, :, 0] ** 2 + flow[:, :, 1] ** 2) ** 0.5 - factor = gray_flow.max() - gray_flow = gray_flow / factor - flow_rgb = flow2rgb(flow) - flow_gray = cv2.cvtColor(flow_rgb, cv2.COLOR_RGB2GRAY) - return canny(flow_gray, sigma=self.sigma, mask=None, low_threshold=self.low_threshold, - high_threshold=self.high_threshold).astype(np.float) - - def to_tensor(self, data_list): - """ - - Args: - data_list: a numpy.array list - - Returns: a torch.tensor list with the None entries removed - - """ - keys = list(data_list.keys()) - for key in keys: - if data_list[key] is None or data_list[key] == []: - data_list.pop(key) - else: - item = data_list[key] - if not isinstance(item, list): - if len(item.shape) == 2: - item = item[:, :, np.newaxis] - item = torch.from_numpy(np.transpose(item, (2, 0, 1))).float() - else: - item = np.stack(item, axis=0) - if len(item.shape) == 3: - item = item[:, :, :, np.newaxis] - item = torch.from_numpy(np.transpose(item, (3, 0, 1, 2))).float() - data_list[key] = item - return data_list - diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_ncsnpp_original_checkpoint_to_diffusers.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_ncsnpp_original_checkpoint_to_diffusers.py deleted file mode 100644 index 22e4271eba3aa859e4220b6f69e81c06550e9548..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_ncsnpp_original_checkpoint_to_diffusers.py +++ /dev/null @@ -1,185 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Conversion script for the NCSNPP checkpoints. """ - -import argparse -import json - -import torch - -from diffusers import ScoreSdeVePipeline, ScoreSdeVeScheduler, UNet2DModel - - -def convert_ncsnpp_checkpoint(checkpoint, config): - """ - Takes a state dict and the path to - """ - new_model_architecture = UNet2DModel(**config) - new_model_architecture.time_proj.W.data = checkpoint["all_modules.0.W"].data - new_model_architecture.time_proj.weight.data = checkpoint["all_modules.0.W"].data - new_model_architecture.time_embedding.linear_1.weight.data = checkpoint["all_modules.1.weight"].data - new_model_architecture.time_embedding.linear_1.bias.data = checkpoint["all_modules.1.bias"].data - - new_model_architecture.time_embedding.linear_2.weight.data = checkpoint["all_modules.2.weight"].data - new_model_architecture.time_embedding.linear_2.bias.data = checkpoint["all_modules.2.bias"].data - - new_model_architecture.conv_in.weight.data = checkpoint["all_modules.3.weight"].data - new_model_architecture.conv_in.bias.data = checkpoint["all_modules.3.bias"].data - - new_model_architecture.conv_norm_out.weight.data = checkpoint[list(checkpoint.keys())[-4]].data - new_model_architecture.conv_norm_out.bias.data = checkpoint[list(checkpoint.keys())[-3]].data - new_model_architecture.conv_out.weight.data = checkpoint[list(checkpoint.keys())[-2]].data - new_model_architecture.conv_out.bias.data = checkpoint[list(checkpoint.keys())[-1]].data - - module_index = 4 - - def set_attention_weights(new_layer, old_checkpoint, index): - new_layer.query.weight.data = old_checkpoint[f"all_modules.{index}.NIN_0.W"].data.T - new_layer.key.weight.data = old_checkpoint[f"all_modules.{index}.NIN_1.W"].data.T - new_layer.value.weight.data = old_checkpoint[f"all_modules.{index}.NIN_2.W"].data.T - - new_layer.query.bias.data = old_checkpoint[f"all_modules.{index}.NIN_0.b"].data - new_layer.key.bias.data = old_checkpoint[f"all_modules.{index}.NIN_1.b"].data - new_layer.value.bias.data = old_checkpoint[f"all_modules.{index}.NIN_2.b"].data - - new_layer.proj_attn.weight.data = old_checkpoint[f"all_modules.{index}.NIN_3.W"].data.T - new_layer.proj_attn.bias.data = old_checkpoint[f"all_modules.{index}.NIN_3.b"].data - - new_layer.group_norm.weight.data = old_checkpoint[f"all_modules.{index}.GroupNorm_0.weight"].data - new_layer.group_norm.bias.data = old_checkpoint[f"all_modules.{index}.GroupNorm_0.bias"].data - - def set_resnet_weights(new_layer, old_checkpoint, index): - new_layer.conv1.weight.data = old_checkpoint[f"all_modules.{index}.Conv_0.weight"].data - new_layer.conv1.bias.data = old_checkpoint[f"all_modules.{index}.Conv_0.bias"].data - new_layer.norm1.weight.data = old_checkpoint[f"all_modules.{index}.GroupNorm_0.weight"].data - new_layer.norm1.bias.data = old_checkpoint[f"all_modules.{index}.GroupNorm_0.bias"].data - - new_layer.conv2.weight.data = old_checkpoint[f"all_modules.{index}.Conv_1.weight"].data - new_layer.conv2.bias.data = old_checkpoint[f"all_modules.{index}.Conv_1.bias"].data - new_layer.norm2.weight.data = old_checkpoint[f"all_modules.{index}.GroupNorm_1.weight"].data - 
new_layer.norm2.bias.data = old_checkpoint[f"all_modules.{index}.GroupNorm_1.bias"].data - - new_layer.time_emb_proj.weight.data = old_checkpoint[f"all_modules.{index}.Dense_0.weight"].data - new_layer.time_emb_proj.bias.data = old_checkpoint[f"all_modules.{index}.Dense_0.bias"].data - - if new_layer.in_channels != new_layer.out_channels or new_layer.up or new_layer.down: - new_layer.conv_shortcut.weight.data = old_checkpoint[f"all_modules.{index}.Conv_2.weight"].data - new_layer.conv_shortcut.bias.data = old_checkpoint[f"all_modules.{index}.Conv_2.bias"].data - - for i, block in enumerate(new_model_architecture.downsample_blocks): - has_attentions = hasattr(block, "attentions") - for j in range(len(block.resnets)): - set_resnet_weights(block.resnets[j], checkpoint, module_index) - module_index += 1 - if has_attentions: - set_attention_weights(block.attentions[j], checkpoint, module_index) - module_index += 1 - - if hasattr(block, "downsamplers") and block.downsamplers is not None: - set_resnet_weights(block.resnet_down, checkpoint, module_index) - module_index += 1 - block.skip_conv.weight.data = checkpoint[f"all_modules.{module_index}.Conv_0.weight"].data - block.skip_conv.bias.data = checkpoint[f"all_modules.{module_index}.Conv_0.bias"].data - module_index += 1 - - set_resnet_weights(new_model_architecture.mid_block.resnets[0], checkpoint, module_index) - module_index += 1 - set_attention_weights(new_model_architecture.mid_block.attentions[0], checkpoint, module_index) - module_index += 1 - set_resnet_weights(new_model_architecture.mid_block.resnets[1], checkpoint, module_index) - module_index += 1 - - for i, block in enumerate(new_model_architecture.up_blocks): - has_attentions = hasattr(block, "attentions") - for j in range(len(block.resnets)): - set_resnet_weights(block.resnets[j], checkpoint, module_index) - module_index += 1 - if has_attentions: - set_attention_weights( - block.attentions[0], checkpoint, module_index - ) # why can there only be a single attention layer for up? 
- module_index += 1 - - if hasattr(block, "resnet_up") and block.resnet_up is not None: - block.skip_norm.weight.data = checkpoint[f"all_modules.{module_index}.weight"].data - block.skip_norm.bias.data = checkpoint[f"all_modules.{module_index}.bias"].data - module_index += 1 - block.skip_conv.weight.data = checkpoint[f"all_modules.{module_index}.weight"].data - block.skip_conv.bias.data = checkpoint[f"all_modules.{module_index}.bias"].data - module_index += 1 - set_resnet_weights(block.resnet_up, checkpoint, module_index) - module_index += 1 - - new_model_architecture.conv_norm_out.weight.data = checkpoint[f"all_modules.{module_index}.weight"].data - new_model_architecture.conv_norm_out.bias.data = checkpoint[f"all_modules.{module_index}.bias"].data - module_index += 1 - new_model_architecture.conv_out.weight.data = checkpoint[f"all_modules.{module_index}.weight"].data - new_model_architecture.conv_out.bias.data = checkpoint[f"all_modules.{module_index}.bias"].data - - return new_model_architecture.state_dict() - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--checkpoint_path", - default="/Users/arthurzucker/Work/diffusers/ArthurZ/diffusion_pytorch_model.bin", - type=str, - required=False, - help="Path to the checkpoint to convert.", - ) - - parser.add_argument( - "--config_file", - default="/Users/arthurzucker/Work/diffusers/ArthurZ/config.json", - type=str, - required=False, - help="The config json file corresponding to the architecture.", - ) - - parser.add_argument( - "--dump_path", - default="/Users/arthurzucker/Work/diffusers/ArthurZ/diffusion_model_new.pt", - type=str, - required=False, - help="Path to the output model.", - ) - - args = parser.parse_args() - - checkpoint = torch.load(args.checkpoint_path, map_location="cpu") - - with open(args.config_file) as f: - config = json.loads(f.read()) - - converted_checkpoint = convert_ncsnpp_checkpoint( - checkpoint, - config, - ) - - if "sde" in config: - del config["sde"] - - model = UNet2DModel(**config) - model.load_state_dict(converted_checkpoint) - - try: - scheduler = ScoreSdeVeScheduler.from_config("/".join(args.checkpoint_path.split("/")[:-1])) - - pipe = ScoreSdeVePipeline(unet=model, scheduler=scheduler) - pipe.save_pretrained(args.dump_path) - except: # noqa: E722 - model.save_pretrained(args.dump_path) diff --git a/spaces/parkyzh/bingo/src/components/markdown.tsx b/spaces/parkyzh/bingo/src/components/markdown.tsx deleted file mode 100644 index d4491467a1f14d1d72e535caac9c40636054e5df..0000000000000000000000000000000000000000 --- a/spaces/parkyzh/bingo/src/components/markdown.tsx +++ /dev/null @@ -1,9 +0,0 @@ -import { FC, memo } from 'react' -import ReactMarkdown, { Options } from 'react-markdown' - -export const MemoizedReactMarkdown: FC = memo( - ReactMarkdown, - (prevProps, nextProps) => - prevProps.children === nextProps.children && - prevProps.className === nextProps.className -) diff --git a/spaces/pierreguillou/document-layout-detection-dit-image-instances/README.md b/spaces/pierreguillou/document-layout-detection-dit-image-instances/README.md deleted file mode 100644 index 4f0f6c1c485d7b0b5538e585cc7363c4b78c83d5..0000000000000000000000000000000000000000 --- a/spaces/pierreguillou/document-layout-detection-dit-image-instances/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dit Document Layout Analysis -emoji: 👀 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 2.8.9 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/build/metadata_legacy.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/build/metadata_legacy.py deleted file mode 100644 index e60988d643e007801f79e8718354e7d00c7acf18..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/build/metadata_legacy.py +++ /dev/null @@ -1,74 +0,0 @@ -"""Metadata generation logic for legacy source distributions. -""" - -import logging -import os - -from pip._internal.build_env import BuildEnvironment -from pip._internal.cli.spinners import open_spinner -from pip._internal.exceptions import ( - InstallationError, - InstallationSubprocessError, - MetadataGenerationFailed, -) -from pip._internal.utils.setuptools_build import make_setuptools_egg_info_args -from pip._internal.utils.subprocess import call_subprocess -from pip._internal.utils.temp_dir import TempDirectory - -logger = logging.getLogger(__name__) - - -def _find_egg_info(directory: str) -> str: - """Find an .egg-info subdirectory in `directory`.""" - filenames = [f for f in os.listdir(directory) if f.endswith(".egg-info")] - - if not filenames: - raise InstallationError(f"No .egg-info directory found in {directory}") - - if len(filenames) > 1: - raise InstallationError( - "More than one .egg-info directory found in {}".format(directory) - ) - - return os.path.join(directory, filenames[0]) - - -def generate_metadata( - build_env: BuildEnvironment, - setup_py_path: str, - source_dir: str, - isolated: bool, - details: str, -) -> str: - """Generate metadata using setup.py-based defacto mechanisms. - - Returns the generated metadata directory. - """ - logger.debug( - "Running setup.py (path:%s) egg_info for package %s", - setup_py_path, - details, - ) - - egg_info_dir = TempDirectory(kind="pip-egg-info", globally_managed=True).path - - args = make_setuptools_egg_info_args( - setup_py_path, - egg_info_dir=egg_info_dir, - no_user_config=isolated, - ) - - with build_env: - with open_spinner("Preparing metadata (setup.py)") as spinner: - try: - call_subprocess( - args, - cwd=source_dir, - command_desc="python setup.py egg_info", - spinner=spinner, - ) - except InstallationSubprocessError as error: - raise MetadataGenerationFailed(package_details=details) from error - - # Return the .egg-info directory. - return _find_egg_info(egg_info_dir) diff --git a/spaces/plzdontcry/dakubettergpt/src/components/Chat/ChatContent/Message/MessageContent.tsx b/spaces/plzdontcry/dakubettergpt/src/components/Chat/ChatContent/Message/MessageContent.tsx deleted file mode 100644 index 0f60b6b60f0881efb6701cdc78cf8d2c87bd2f5c..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/components/Chat/ChatContent/Message/MessageContent.tsx +++ /dev/null @@ -1,43 +0,0 @@ -import React, { useState } from 'react'; -import useStore from '@store/store'; - -import ContentView from './View/ContentView'; -import EditView from './View/EditView'; - -const MessageContent = ({ - role, - content, - messageIndex, - sticky = false, -}: { - role: string; - content: string; - messageIndex: number; - sticky?: boolean; -}) => { - const [isEdit, setIsEdit] = useState(sticky); - const advancedMode = useStore((state) => state.advancedMode); - - return ( -
- {advancedMode &&
} - {isEdit ? ( - - ) : ( - - )} -
- ); -}; - -export default MessageContent; diff --git a/spaces/presidio/presidio_demo/transformers_rec/configuration.py b/spaces/presidio/presidio_demo/transformers_rec/configuration.py deleted file mode 100644 index ebf0439c5112231e51ad878474a62976b1f355ad..0000000000000000000000000000000000000000 --- a/spaces/presidio/presidio_demo/transformers_rec/configuration.py +++ /dev/null @@ -1,124 +0,0 @@ -## Taken from https://github.com/microsoft/presidio/blob/main/docs/samples/python/transformers_recognizer/configuration.py - -STANFORD_COFIGURATION = { - "DEFAULT_MODEL_PATH": "StanfordAIMI/stanford-deidentifier-base", - "PRESIDIO_SUPPORTED_ENTITIES": [ - "LOCATION", - "PERSON", - "ORGANIZATION", - "AGE", - "PHONE_NUMBER", - "EMAIL", - "DATE_TIME", - "DEVICE", - "ZIP", - "PROFESSION", - "USERNAME", - "ID" - - ], - "LABELS_TO_IGNORE": ["O"], - "DEFAULT_EXPLANATION": "Identified as {} by the StanfordAIMI/stanford-deidentifier-base NER model", - "SUB_WORD_AGGREGATION": "simple", - "DATASET_TO_PRESIDIO_MAPPING": { - "DATE": "DATE_TIME", - "DOCTOR": "PERSON", - "PATIENT": "PERSON", - "HOSPITAL": "LOCATION", - "MEDICALRECORD": "ID", - "IDNUM": "ID", - "ORGANIZATION": "ORGANIZATION", - "ZIP": "ZIP", - "PHONE": "PHONE_NUMBER", - "USERNAME": "USERNAME", - "STREET": "LOCATION", - "PROFESSION": "PROFESSION", - "COUNTRY": "LOCATION", - "LOCATION-OTHER": "LOCATION", - "FAX": "PHONE_NUMBER", - "EMAIL": "EMAIL", - "STATE": "LOCATION", - "DEVICE": "DEVICE", - "ORG": "ORGANIZATION", - "AGE": "AGE", - }, - "MODEL_TO_PRESIDIO_MAPPING": { - "PER": "PERSON", - "PERSON": "PERSON", - "LOC": "LOCATION", - "ORG": "ORGANIZATION", - "AGE": "AGE", - "PATIENT": "PERSON", - "HCW": "PERSON", - "HOSPITAL": "LOCATION", - "PATORG": "ORGANIZATION", - "DATE": "DATE_TIME", - "PHONE": "PHONE_NUMBER", - "VENDOR": "ORGANIZATION", - }, - "CHUNK_OVERLAP_SIZE": 40, - "CHUNK_SIZE": 600, - "ID_SCORE_MULTIPLIER": 0.4, - "ID_ENTITY_NAME": "ID" -} - - -BERT_DEID_CONFIGURATION = { - "PRESIDIO_SUPPORTED_ENTITIES": [ - "LOCATION", - "PERSON", - "ORGANIZATION", - "AGE", - "PHONE_NUMBER", - "EMAIL", - "DATE_TIME", - "ZIP", - "PROFESSION", - "USERNAME", - "ID" - ], - "DEFAULT_MODEL_PATH": "obi/deid_roberta_i2b2", - "LABELS_TO_IGNORE": ["O"], - "DEFAULT_EXPLANATION": "Identified as {} by the obi/deid_roberta_i2b2 NER model", - "SUB_WORD_AGGREGATION": "simple", - "DATASET_TO_PRESIDIO_MAPPING": { - "DATE": "DATE_TIME", - "DOCTOR": "PERSON", - "PATIENT": "PERSON", - "HOSPITAL": "ORGANIZATION", - "MEDICALRECORD": "O", - "IDNUM": "O", - "ORGANIZATION": "ORGANIZATION", - "ZIP": "O", - "PHONE": "PHONE_NUMBER", - "USERNAME": "", - "STREET": "LOCATION", - "PROFESSION": "PROFESSION", - "COUNTRY": "LOCATION", - "LOCATION-OTHER": "LOCATION", - "FAX": "PHONE_NUMBER", - "EMAIL": "EMAIL", - "STATE": "LOCATION", - "DEVICE": "O", - "ORG": "ORGANIZATION", - "AGE": "AGE", - }, - "MODEL_TO_PRESIDIO_MAPPING": { - "PER": "PERSON", - "LOC": "LOCATION", - "ORG": "ORGANIZATION", - "AGE": "AGE", - "ID": "ID", - "EMAIL": "EMAIL", - "PATIENT": "PERSON", - "STAFF": "PERSON", - "HOSP": "ORGANIZATION", - "PATORG": "ORGANIZATION", - "DATE": "DATE_TIME", - "PHONE": "PHONE_NUMBER", - }, - "CHUNK_OVERLAP_SIZE": 40, - "CHUNK_SIZE": 600, - "ID_SCORE_MULTIPLIER": 0.4, - "ID_ENTITY_NAME": "ID" -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__5.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__5.py deleted file mode 100644 index 
5edc86a9cbc9a0b710cfc014a3910f671f791e54..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__5.py +++ /dev/null @@ -1,46 +0,0 @@ -""" TSI{0,1,2,3,5} are private tables used by Microsoft Visual TrueType (VTT) -tool to store its hinting source data. - -TSI5 contains the VTT character groups. -""" -from fontTools.misc.textTools import safeEval -from . import DefaultTable -import sys -import array - - -class table_T_S_I__5(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - numGlyphs = ttFont["maxp"].numGlyphs - assert len(data) == 2 * numGlyphs - a = array.array("H") - a.frombytes(data) - if sys.byteorder != "big": - a.byteswap() - self.glyphGrouping = {} - for i in range(numGlyphs): - self.glyphGrouping[ttFont.getGlyphName(i)] = a[i] - - def compile(self, ttFont): - glyphNames = ttFont.getGlyphOrder() - a = array.array("H") - for i in range(len(glyphNames)): - a.append(self.glyphGrouping.get(glyphNames[i], 0)) - if sys.byteorder != "big": - a.byteswap() - return a.tobytes() - - def toXML(self, writer, ttFont): - names = sorted(self.glyphGrouping.keys()) - for glyphName in names: - writer.simpletag( - "glyphgroup", name=glyphName, value=self.glyphGrouping[glyphName] - ) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if not hasattr(self, "glyphGrouping"): - self.glyphGrouping = {} - if name != "glyphgroup": - return - self.glyphGrouping[attrs["name"]] = safeEval(attrs["value"]) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/fileexplorer/shared/utils.ts b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/fileexplorer/shared/utils.ts deleted file mode 100644 index e894c05af1b68e2f3893870f59fb29c544efeea9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/fileexplorer/shared/utils.ts +++ /dev/null @@ -1,269 +0,0 @@ -import { writable, type Readable } from "svelte/store"; -import { dequal } from "dequal"; -export interface Node { - type: "file" | "folder"; - path: string; - children?: Node[]; - checked: boolean; - children_visible: boolean; - last?: Node | null; - parent: Node | null; - previous?: Node | null; -} - -export type SerialisedNode = Omit< - Node, - "checked" | "children_visible" | "children" -> & { children?: SerialisedNode[] }; - -interface FSStore { - subscribe: Readable["subscribe"]; - create_fs_graph: (serialised_node: SerialisedNode[]) => void; - - set_checked: ( - indices: number[], - checked: boolean, - checked_paths: string[][], - file_count: "single" | "multiple" - ) => string[][]; - set_checked_from_paths: (checked_paths: string[][]) => string[][]; -} - -export const make_fs_store = (): FSStore => { - const { subscribe, set, update } = writable(null); - let root: Node = { - type: "folder", - path: "", - checked: false, - children_visible: false, - parent: null - }; - - function create_fs_graph(serialised_node: SerialisedNode[]): void { - root.children = process_tree(serialised_node); - set(root.children); - } - - let old_checked_paths: string[][] = []; - - function set_checked_from_paths(checked_paths: string[][]): string[][] { - if (dequal(checked_paths, old_checked_paths)) { - return checked_paths; - } - old_checked_paths = checked_paths; - check_node_and_children(root.children, false, []); - const new_checked_paths: string[][] = []; - const seen_nodes = new Set(); - for (let i = 
0; i < checked_paths.length; i++) { - let _node = root; - let _path = []; - for (let j = 0; j < checked_paths[i].length; j++) { - if (!_node?.children) { - continue; - } - _path.push(checked_paths[i][j]); - _node = _node.children!.find((v) => v.path === checked_paths[i][j])!; - } - - if (!_node) { - continue; - } - - _node.checked = true; - ensure_visible(_node); - const nodes = check_node_and_children(_node.children, true, [_node]); - check_parent(_node); - - nodes.forEach((node) => { - const path = get_full_path(node); - if (seen_nodes.has(path.join("/"))) { - return; - } - if (node.type === "file") { - new_checked_paths.push(path); - } - seen_nodes.add(path.join("/")); - }); - } - - set(root.children!); - - return new_checked_paths; - } - - function set_checked( - indices: number[], - checked: boolean, - checked_paths: string[][], - file_count: "single" | "multiple" - ): string[][] { - let _node = root; - - if (file_count === "single") { - check_node_and_children(root.children, false, []); - set(root.children!); - } - - for (let i = 0; i < indices.length; i++) { - _node = _node.children![indices[i]]; - } - - _node.checked = checked; - const nodes = check_node_and_children(_node.children, checked, [_node]); - - let new_checked_paths = new Map(checked_paths.map((v) => [v.join("/"), v])); - - for (let i = 0; i < nodes.length; i++) { - const _path = get_full_path(nodes[i]); - if (!checked) { - new_checked_paths.delete(_path.join("/")); - } else if (checked) { - if (file_count === "single") { - new_checked_paths = new Map(); - } - - if (nodes[i].type === "file") { - new_checked_paths.set(_path.join("/"), _path); - } - } - } - - check_parent(_node); - set(root.children!); - old_checked_paths = Array.from(new_checked_paths).map((v) => v[1]); - return old_checked_paths; - } - - return { - subscribe, - create_fs_graph, - set_checked, - set_checked_from_paths - }; -}; - -function ensure_visible(node: Node): void { - if (node.parent) { - node.parent.children_visible = true; - ensure_visible(node.parent); - } -} - -function process_tree( - node: SerialisedNode[], - depth = 0, - path_segments: string[] = [], - parent: Node | null = null -): Node[] { - const folders: Node[] = []; - const files: Node[] = []; - - for (let i = 0; i < node.length; i++) { - let n: (typeof node)[number] = node[i]; - - if (n.type === "file") { - let index = files.findIndex( - (v) => v.path.toLocaleLowerCase() >= n.path.toLocaleLowerCase() - ); - - const _node: Node = { - children: undefined, - type: "file", - path: n.path, - checked: false, - children_visible: false, - parent: parent - }; - - files.splice(index === -1 ? files.length : index, 0, _node); - } else { - let index = folders.findIndex( - (v) => v.path.toLocaleLowerCase() >= n.path.toLocaleLowerCase() - ); - - const _node: Node = { - type: "folder", - path: n.path, - checked: false, - children_visible: false, - parent: parent - }; - - const children = process_tree( - n.children!, - depth + 1, - [...path_segments, n.path], - _node - ); - - _node.children = children; - - folders.splice(index === -1 ? 
folders.length : index, 0, _node); - } - } - - const last = files[files.length - 1] || folders[folders.length - 1]; - - for (let i = 0; i < folders.length; i++) { - folders[i].last = last; - folders[i].previous = folders[i - 1] || null; - } - - for (let i = 0; i < files.length; i++) { - if (i === 0) { - files[i].previous = folders[folders.length - 1] || null; - } else { - files[i].previous = files[i - 1] || null; - } - files[i].last = last; - } - - return Array().concat(folders, files); -} - -function get_full_path(node: Node, path: string[] = []): string[] { - const new_path = [node.path, ...path]; - - if (node.parent) { - return get_full_path(node.parent, new_path); - } - return new_path; -} - -function check_node_and_children( - node: Node[] | null | undefined, - checked: boolean, - checked_nodes: Node[] -): Node[] { - // console.log(node, checked); - if (node === null || node === undefined) return checked_nodes; - for (let i = 0; i < node.length; i++) { - node[i].checked = checked; - checked_nodes.push(node[i]); - if (checked) ensure_visible(node[i]); - - checked_nodes.concat( - check_node_and_children(node[i].children, checked, checked_nodes) - ); - } - - return checked_nodes; -} - -function check_parent(node: Node | null | undefined): void { - if (node === null || node === undefined || !node.parent) return; - let _node = node.last; - let nodes_checked = []; - while (_node) { - nodes_checked.push(_node.checked); - _node = _node.previous; - } - - if (nodes_checked.every((v) => v === true)) { - node.parent!.checked = true; - check_parent(node?.parent); - } else if (nodes_checked.some((v) => v === false)) { - node.parent!.checked = false; - check_parent(node?.parent); - } -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-4d2125d3.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-4d2125d3.js deleted file mode 100644 index 4e40f50b414f65095925e4fcab6e515ee5e5a88a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-4d2125d3.js +++ /dev/null @@ -1,2 +0,0 @@ -import{L as e,S as m,S as p}from"./Index-37584f50.js";import{T as i}from"./Blocks-9824d5aa.js";import"./index-0526d562.js";import"./svelte/svelte.js";import"./Button-89057c03.js";export{e as Loader,m as StatusTracker,i as Toast,p as default}; -//# sourceMappingURL=index-4d2125d3.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/svelte/svelte-submodules.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/svelte/svelte-submodules.js deleted file mode 100644 index 64603e81eef6b25c0ee6d0a005b17e30eaa20c09..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/svelte/svelte-submodules.js +++ /dev/null @@ -1,1153 +0,0 @@ -/** @returns {void} */ -function noop() {} - -const identity = (x) => x; - -/** - * @template T - * @template S - * @param {T} tar - * @param {S} src - * @returns {T & S} - */ -function assign(tar, src) { - // @ts-ignore - for (const k in src) tar[k] = src[k]; - return /** @type {T & S} */ (tar); -} - -function run(fn) { - return fn(); -} - -/** - * @param {Function[]} fns - * @returns {void} - */ -function run_all(fns) { - fns.forEach(run); -} - -/** - * @param {any} thing - * @returns {thing is Function} - */ -function 
is_function(thing) { - return typeof thing === 'function'; -} - -/** @returns {boolean} */ -function safe_not_equal(a, b) { - return a != a ? b == b : a !== b || (a && typeof a === 'object') || typeof a === 'function'; -} - -function subscribe(store, ...callbacks) { - if (store == null) { - for (const callback of callbacks) { - callback(undefined); - } - return noop; - } - const unsub = store.subscribe(...callbacks); - return unsub.unsubscribe ? () => unsub.unsubscribe() : unsub; -} - -/** - * Get the current value from a store by subscribing and immediately unsubscribing. - * - * https://svelte.dev/docs/svelte-store#get - * @template T - * @param {import('../store/public.js').Readable} store - * @returns {T} - */ -function get_store_value(store) { - let value; - subscribe(store, (_) => (value = _))(); - return value; -} - -/** @param {number | string} value - * @returns {[number, string]} - */ -function split_css_unit(value) { - const split = typeof value === 'string' && value.match(/^\s*(-?[\d.]+)([^\s]*)\s*$/); - return split ? [parseFloat(split[1]), split[2] || 'px'] : [/** @type {number} */ (value), 'px']; -} - -const is_client = typeof window !== 'undefined'; - -/** @type {() => number} */ -let now = is_client ? () => window.performance.now() : () => Date.now(); - -let raf = is_client ? (cb) => requestAnimationFrame(cb) : noop; - -const tasks = new Set(); - -/** - * @param {number} now - * @returns {void} - */ -function run_tasks(now) { - tasks.forEach((task) => { - if (!task.c(now)) { - tasks.delete(task); - task.f(); - } - }); - if (tasks.size !== 0) raf(run_tasks); -} - -/** - * Creates a new task that runs on each raf frame - * until it returns a falsy value or is aborted - * @param {import('./private.js').TaskCallback} callback - * @returns {import('./private.js').Task} - */ -function loop(callback) { - /** @type {import('./private.js').TaskEntry} */ - let task; - if (tasks.size === 0) raf(run_tasks); - return { - promise: new Promise((fulfill) => { - tasks.add((task = { c: callback, f: fulfill })); - }), - abort() { - tasks.delete(task); - } - }; -} - -/* -Adapted from https://github.com/mattdesl -Distributed under MIT License https://github.com/mattdesl/eases/blob/master/LICENSE.md -*/ - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function backInOut(t) { - const s = 1.70158 * 1.525; - if ((t *= 2) < 1) return 0.5 * (t * t * ((s + 1) * t - s)); - return 0.5 * ((t -= 2) * t * ((s + 1) * t + s) + 2); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function backIn(t) { - const s = 1.70158; - return t * t * ((s + 1) * t - s); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function backOut(t) { - const s = 1.70158; - return --t * t * ((s + 1) * t + s) + 1; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function bounceOut(t) { - const a = 4.0 / 11.0; - const b = 8.0 / 11.0; - const c = 9.0 / 10.0; - const ca = 4356.0 / 361.0; - const cb = 35442.0 / 1805.0; - const cc = 16061.0 / 1805.0; - const t2 = t * t; - return t < a - ? 7.5625 * t2 - : t < b - ? 9.075 * t2 - 9.9 * t + 3.4 - : t < c - ? ca * t2 - cb * t + cc - : 10.8 * t * t - 20.52 * t + 10.72; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function bounceInOut(t) { - return t < 0.5 ? 
0.5 * (1.0 - bounceOut(1.0 - t * 2.0)) : 0.5 * bounceOut(t * 2.0 - 1.0) + 0.5; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function bounceIn(t) { - return 1.0 - bounceOut(1.0 - t); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function circInOut(t) { - if ((t *= 2) < 1) return -0.5 * (Math.sqrt(1 - t * t) - 1); - return 0.5 * (Math.sqrt(1 - (t -= 2) * t) + 1); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function circIn(t) { - return 1.0 - Math.sqrt(1.0 - t * t); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function circOut(t) { - return Math.sqrt(1 - --t * t); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function cubicInOut(t) { - return t < 0.5 ? 4.0 * t * t * t : 0.5 * Math.pow(2.0 * t - 2.0, 3.0) + 1.0; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function cubicIn(t) { - return t * t * t; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function cubicOut(t) { - const f = t - 1.0; - return f * f * f + 1.0; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function elasticInOut(t) { - return t < 0.5 - ? 0.5 * Math.sin(((+13.0 * Math.PI) / 2) * 2.0 * t) * Math.pow(2.0, 10.0 * (2.0 * t - 1.0)) - : 0.5 * - Math.sin(((-13.0 * Math.PI) / 2) * (2.0 * t - 1.0 + 1.0)) * - Math.pow(2.0, -10.0 * (2.0 * t - 1.0)) + - 1.0; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function elasticIn(t) { - return Math.sin((13.0 * t * Math.PI) / 2) * Math.pow(2.0, 10.0 * (t - 1.0)); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function elasticOut(t) { - return Math.sin((-13.0 * (t + 1.0) * Math.PI) / 2) * Math.pow(2.0, -10.0 * t) + 1.0; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function expoInOut(t) { - return t === 0.0 || t === 1.0 - ? t - : t < 0.5 - ? +0.5 * Math.pow(2.0, 20.0 * t - 10.0) - : -0.5 * Math.pow(2.0, 10.0 - t * 20.0) + 1.0; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function expoIn(t) { - return t === 0.0 ? t : Math.pow(2.0, 10.0 * (t - 1.0)); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function expoOut(t) { - return t === 1.0 ? t : 1.0 - Math.pow(2.0, -10.0 * t); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function quadInOut(t) { - t /= 0.5; - if (t < 1) return 0.5 * t * t; - t--; - return -0.5 * (t * (t - 2) - 1); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function quadIn(t) { - return t * t; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function quadOut(t) { - return -t * (t - 2.0); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function quartInOut(t) { - return t < 0.5 ? 
+8.0 * Math.pow(t, 4.0) : -8.0 * Math.pow(t - 1.0, 4.0) + 1.0; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function quartIn(t) { - return Math.pow(t, 4.0); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function quartOut(t) { - return Math.pow(t - 1.0, 3.0) * (1.0 - t) + 1.0; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function quintInOut(t) { - if ((t *= 2) < 1) return 0.5 * t * t * t * t * t; - return 0.5 * ((t -= 2) * t * t * t * t + 2); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function quintIn(t) { - return t * t * t * t * t; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function quintOut(t) { - return --t * t * t * t * t + 1; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function sineInOut(t) { - return -0.5 * (Math.cos(Math.PI * t) - 1); -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function sineIn(t) { - const v = Math.cos(t * Math.PI * 0.5); - if (Math.abs(v) < 1e-14) return 1; - else return 1 - v; -} - -/** - * https://svelte.dev/docs/svelte-easing - * @param {number} t - * @returns {number} - */ -function sineOut(t) { - return Math.sin((t * Math.PI) / 2); -} - -/** - * Animates a `blur` filter alongside an element's opacity. - * - * https://svelte.dev/docs/svelte-transition#blur - * @param {Element} node - * @param {import('./public').BlurParams} [params] - * @returns {import('./public').TransitionConfig} - */ -function blur( - node, - { delay = 0, duration = 400, easing = cubicInOut, amount = 5, opacity = 0 } = {} -) { - const style = getComputedStyle(node); - const target_opacity = +style.opacity; - const f = style.filter === 'none' ? '' : style.filter; - const od = target_opacity * (1 - opacity); - const [value, unit] = split_css_unit(amount); - return { - delay, - duration, - easing, - css: (_t, u) => `opacity: ${target_opacity - od * u}; filter: ${f} blur(${u * value}${unit});` - }; -} - -/** - * Animates the opacity of an element from 0 to the current opacity for `in` transitions and from the current opacity to 0 for `out` transitions. - * - * https://svelte.dev/docs/svelte-transition#fade - * @param {Element} node - * @param {import('./public').FadeParams} [params] - * @returns {import('./public').TransitionConfig} - */ -function fade(node, { delay = 0, duration = 400, easing = identity } = {}) { - const o = +getComputedStyle(node).opacity; - return { - delay, - duration, - easing, - css: (t) => `opacity: ${t * o}` - }; -} - -/** - * Animates the x and y positions and the opacity of an element. `in` transitions animate from the provided values, passed as parameters to the element's default values. `out` transitions animate from the element's default values to the provided values. - * - * https://svelte.dev/docs/svelte-transition#fly - * @param {Element} node - * @param {import('./public').FlyParams} [params] - * @returns {import('./public').TransitionConfig} - */ -function fly( - node, - { delay = 0, duration = 400, easing = cubicOut, x = 0, y = 0, opacity = 0 } = {} -) { - const style = getComputedStyle(node); - const target_opacity = +style.opacity; - const transform = style.transform === 'none' ? 
'' : style.transform; - const od = target_opacity * (1 - opacity); - const [xValue, xUnit] = split_css_unit(x); - const [yValue, yUnit] = split_css_unit(y); - return { - delay, - duration, - easing, - css: (t, u) => ` - transform: ${transform} translate(${(1 - t) * xValue}${xUnit}, ${(1 - t) * yValue}${yUnit}); - opacity: ${target_opacity - od * u}` - }; -} - -/** - * Slides an element in and out. - * - * https://svelte.dev/docs/svelte-transition#slide - * @param {Element} node - * @param {import('./public').SlideParams} [params] - * @returns {import('./public').TransitionConfig} - */ -function slide(node, { delay = 0, duration = 400, easing = cubicOut, axis = 'y' } = {}) { - const style = getComputedStyle(node); - const opacity = +style.opacity; - const primary_property = axis === 'y' ? 'height' : 'width'; - const primary_property_value = parseFloat(style[primary_property]); - const secondary_properties = axis === 'y' ? ['top', 'bottom'] : ['left', 'right']; - const capitalized_secondary_properties = secondary_properties.map( - (e) => `${e[0].toUpperCase()}${e.slice(1)}` - ); - const padding_start_value = parseFloat(style[`padding${capitalized_secondary_properties[0]}`]); - const padding_end_value = parseFloat(style[`padding${capitalized_secondary_properties[1]}`]); - const margin_start_value = parseFloat(style[`margin${capitalized_secondary_properties[0]}`]); - const margin_end_value = parseFloat(style[`margin${capitalized_secondary_properties[1]}`]); - const border_width_start_value = parseFloat( - style[`border${capitalized_secondary_properties[0]}Width`] - ); - const border_width_end_value = parseFloat( - style[`border${capitalized_secondary_properties[1]}Width`] - ); - return { - delay, - duration, - easing, - css: (t) => - 'overflow: hidden;' + - `opacity: ${Math.min(t * 20, 1) * opacity};` + - `${primary_property}: ${t * primary_property_value}px;` + - `padding-${secondary_properties[0]}: ${t * padding_start_value}px;` + - `padding-${secondary_properties[1]}: ${t * padding_end_value}px;` + - `margin-${secondary_properties[0]}: ${t * margin_start_value}px;` + - `margin-${secondary_properties[1]}: ${t * margin_end_value}px;` + - `border-${secondary_properties[0]}-width: ${t * border_width_start_value}px;` + - `border-${secondary_properties[1]}-width: ${t * border_width_end_value}px;` - }; -} - -/** - * Animates the opacity and scale of an element. `in` transitions animate from an element's current (default) values to the provided values, passed as parameters. `out` transitions animate from the provided values to an element's default values. - * - * https://svelte.dev/docs/svelte-transition#scale - * @param {Element} node - * @param {import('./public').ScaleParams} [params] - * @returns {import('./public').TransitionConfig} - */ -function scale( - node, - { delay = 0, duration = 400, easing = cubicOut, start = 0, opacity = 0 } = {} -) { - const style = getComputedStyle(node); - const target_opacity = +style.opacity; - const transform = style.transform === 'none' ? '' : style.transform; - const sd = 1 - start; - const od = target_opacity * (1 - opacity); - return { - delay, - duration, - easing, - css: (_t, u) => ` - transform: ${transform} scale(${1 - sd * u}); - opacity: ${target_opacity - od * u} - ` - }; -} - -/** - * Animates the stroke of an SVG element, like a snake in a tube. `in` transitions begin with the path invisible and draw the path to the screen over time. `out` transitions start in a visible state and gradually erase the path. 
`draw` only works with elements that have a `getTotalLength` method, like `` and ``. - * - * https://svelte.dev/docs/svelte-transition#draw - * @param {SVGElement & { getTotalLength(): number }} node - * @param {import('./public').DrawParams} [params] - * @returns {import('./public').TransitionConfig} - */ -function draw(node, { delay = 0, speed, duration, easing = cubicInOut } = {}) { - let len = node.getTotalLength(); - const style = getComputedStyle(node); - if (style.strokeLinecap !== 'butt') { - len += parseInt(style.strokeWidth); - } - if (duration === undefined) { - if (speed === undefined) { - duration = 800; - } else { - duration = len / speed; - } - } else if (typeof duration === 'function') { - duration = duration(len); - } - return { - delay, - duration, - easing, - css: (_, u) => ` - stroke-dasharray: ${len}; - stroke-dashoffset: ${u * len}; - ` - }; -} - -/** - * The `crossfade` function creates a pair of [transitions](/docs#template-syntax-element-directives-transition-fn) called `send` and `receive`. When an element is 'sent', it looks for a corresponding element being 'received', and generates a transition that transforms the element to its counterpart's position and fades it out. When an element is 'received', the reverse happens. If there is no counterpart, the `fallback` transition is used. - * - * https://svelte.dev/docs/svelte-transition#crossfade - * @param {import('./public').CrossfadeParams & { - * fallback?: (node: Element, params: import('./public').CrossfadeParams, intro: boolean) => import('./public').TransitionConfig; - * }} params - * @returns {[(node: any, params: import('./public').CrossfadeParams & { key: any; }) => () => import('./public').TransitionConfig, (node: any, params: import('./public').CrossfadeParams & { key: any; }) => () => import('./public').TransitionConfig]} - */ -function crossfade({ fallback, ...defaults }) { - /** @type {Map} */ - const to_receive = new Map(); - /** @type {Map} */ - const to_send = new Map(); - /** - * @param {Element} from_node - * @param {Element} node - * @param {import('./public').CrossfadeParams} params - * @returns {import('./public').TransitionConfig} - */ - function crossfade(from_node, node, params) { - const { - delay = 0, - duration = (d) => Math.sqrt(d) * 30, - easing = cubicOut - } = assign(assign({}, defaults), params); - const from = from_node.getBoundingClientRect(); - const to = node.getBoundingClientRect(); - const dx = from.left - to.left; - const dy = from.top - to.top; - const dw = from.width / to.width; - const dh = from.height / to.height; - const d = Math.sqrt(dx * dx + dy * dy); - const style = getComputedStyle(node); - const transform = style.transform === 'none' ? '' : style.transform; - const opacity = +style.opacity; - return { - delay, - duration: is_function(duration) ? 
duration(d) : duration, - easing, - css: (t, u) => ` - opacity: ${t * opacity}; - transform-origin: top left; - transform: ${transform} translate(${u * dx}px,${u * dy}px) scale(${t + (1 - t) * dw}, ${ - t + (1 - t) * dh - }); - ` - }; - } - - /** - * @param {Map} items - * @param {Map} counterparts - * @param {boolean} intro - * @returns {(node: any, params: import('./public').CrossfadeParams & { key: any; }) => () => import('./public').TransitionConfig} - */ - function transition(items, counterparts, intro) { - return (node, params) => { - items.set(params.key, node); - return () => { - if (counterparts.has(params.key)) { - const other_node = counterparts.get(params.key); - counterparts.delete(params.key); - return crossfade(other_node, node, params); - } - // if the node is disappearing altogether - // (i.e. wasn't claimed by the other list) - // then we need to supply an outro - items.delete(params.key); - return fallback && fallback(node, params, intro); - }; - }; - } - return [transition(to_send, to_receive, false), transition(to_receive, to_send, true)]; -} - -const subscriber_queue = []; - -/** - * Creates a `Readable` store that allows reading by subscription. - * - * https://svelte.dev/docs/svelte-store#readable - * @template T - * @param {T} [value] initial value - * @param {import('./public.js').StartStopNotifier} [start] - * @returns {import('./public.js').Readable} - */ -function readable(value, start) { - return { - subscribe: writable(value, start).subscribe - }; -} - -/** - * Create a `Writable` store that allows both updating and reading by subscription. - * - * https://svelte.dev/docs/svelte-store#writable - * @template T - * @param {T} [value] initial value - * @param {import('./public.js').StartStopNotifier} [start] - * @returns {import('./public.js').Writable} - */ -function writable(value, start = noop) { - /** @type {import('./public.js').Unsubscriber} */ - let stop; - /** @type {Set>} */ - const subscribers = new Set(); - /** @param {T} new_value - * @returns {void} - */ - function set(new_value) { - if (safe_not_equal(value, new_value)) { - value = new_value; - if (stop) { - // store is ready - const run_queue = !subscriber_queue.length; - for (const subscriber of subscribers) { - subscriber[1](); - subscriber_queue.push(subscriber, value); - } - if (run_queue) { - for (let i = 0; i < subscriber_queue.length; i += 2) { - subscriber_queue[i][0](subscriber_queue[i + 1]); - } - subscriber_queue.length = 0; - } - } - } - } - - /** - * @param {import('./public.js').Updater} fn - * @returns {void} - */ - function update(fn) { - set(fn(value)); - } - - /** - * @param {import('./public.js').Subscriber} run - * @param {import('./private.js').Invalidator} [invalidate] - * @returns {import('./public.js').Unsubscriber} - */ - function subscribe(run, invalidate = noop) { - /** @type {import('./private.js').SubscribeInvalidateTuple} */ - const subscriber = [run, invalidate]; - subscribers.add(subscriber); - if (subscribers.size === 1) { - stop = start(set, update) || noop; - } - run(value); - return () => { - subscribers.delete(subscriber); - if (subscribers.size === 0 && stop) { - stop(); - stop = null; - } - }; - } - return { set, update, subscribe }; -} - -/** - * Derived value store by synchronizing one or more readable stores and - * applying an aggregation function over its input values. 
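A minimal usage sketch of the `derived` helper described here (illustrative only: the store names and values are invented, and it assumes the module is consumed through the standard `svelte/store` entry point):

```ts
import { writable, derived } from 'svelte/store';

// A source store and a store computed from it via the aggregation callback.
const celsius = writable(20);
const fahrenheit = derived(celsius, ($c) => ($c * 9) / 5 + 32);

// subscribe() runs the callback immediately with the current value,
// then again whenever the source store changes.
const stop = fahrenheit.subscribe((f) => console.log(f)); // 68
celsius.set(25); // logs 77
stop(); // unsubscribe
```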
- * - * https://svelte.dev/docs/svelte-store#derived - * @template {import('./private.js').Stores} S - * @template T - * @overload - * @param {S} stores - input stores - * @param {(values: import('./private.js').StoresValues, set: (value: T) => void, update: (fn: import('./public.js').Updater) => void) => import('./public.js').Unsubscriber | void} fn - function callback that aggregates the values - * @param {T} [initial_value] - initial value - * @returns {import('./public.js').Readable} - */ - -/** - * Derived value store by synchronizing one or more readable stores and - * applying an aggregation function over its input values. - * - * https://svelte.dev/docs/svelte-store#derived - * @template {import('./private.js').Stores} S - * @template T - * @overload - * @param {S} stores - input stores - * @param {(values: import('./private.js').StoresValues) => T} fn - function callback that aggregates the values - * @param {T} [initial_value] - initial value - * @returns {import('./public.js').Readable} - */ - -/** - * @template {import('./private.js').Stores} S - * @template T - * @param {S} stores - * @param {Function} fn - * @param {T} [initial_value] - * @returns {import('./public.js').Readable} - */ -function derived(stores, fn, initial_value) { - const single = !Array.isArray(stores); - /** @type {Array>} */ - const stores_array = single ? [stores] : stores; - if (!stores_array.every(Boolean)) { - throw new Error('derived() expects stores as input, got a falsy value'); - } - const auto = fn.length < 2; - return readable(initial_value, (set, update) => { - let started = false; - const values = []; - let pending = 0; - let cleanup = noop; - const sync = () => { - if (pending) { - return; - } - cleanup(); - const result = fn(single ? values[0] : values, set, update); - if (auto) { - set(result); - } else { - cleanup = is_function(result) ? result : noop; - } - }; - const unsubscribers = stores_array.map((store, i) => - subscribe( - store, - (value) => { - values[i] = value; - pending &= ~(1 << i); - if (started) { - sync(); - } - }, - () => { - pending |= 1 << i; - } - ) - ); - started = true; - sync(); - return function stop() { - run_all(unsubscribers); - cleanup(); - // We need to set this to false because callbacks can still happen despite having unsubscribed: - // Callbacks might already be placed in the queue which doesn't know it should no longer - // invoke this derived store. - started = false; - }; - }); -} - -/** - * Takes a store and returns a new one derived from the old one that is readable. 
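Similarly, a short sketch of `readonly` together with `get` (exported below as `get_store_value as get`); the names and values are invented for illustration, and the `svelte/store` import path is assumed:

```ts
import { writable, readonly, get } from 'svelte/store';

const count = writable(0);
const publicCount = readonly(count); // same values, but exposes subscribe only

publicCount.subscribe((n) => console.log(n)); // 0
count.set(5); // publicCount subscribers also see 5

// get() subscribes, reads the current value, and immediately unsubscribes.
console.log(get(publicCount)); // 5
```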
- * - * https://svelte.dev/docs/svelte-store#readonly - * @template T - * @param {import('./public.js').Readable} store - store to make readonly - * @returns {import('./public.js').Readable} - */ -function readonly(store) { - return { - subscribe: store.subscribe.bind(store) - }; -} - -/** - * @param {any} obj - * @returns {boolean} - */ -function is_date(obj) { - return Object.prototype.toString.call(obj) === '[object Date]'; -} - -/** - * @template T - * @param {import('./private.js').TickContext} ctx - * @param {T} last_value - * @param {T} current_value - * @param {T} target_value - * @returns {T} - */ -function tick_spring(ctx, last_value, current_value, target_value) { - if (typeof current_value === 'number' || is_date(current_value)) { - // @ts-ignore - const delta = target_value - current_value; - // @ts-ignore - const velocity = (current_value - last_value) / (ctx.dt || 1 / 60); // guard div by 0 - const spring = ctx.opts.stiffness * delta; - const damper = ctx.opts.damping * velocity; - const acceleration = (spring - damper) * ctx.inv_mass; - const d = (velocity + acceleration) * ctx.dt; - if (Math.abs(d) < ctx.opts.precision && Math.abs(delta) < ctx.opts.precision) { - return target_value; // settled - } else { - ctx.settled = false; // signal loop to keep ticking - // @ts-ignore - return is_date(current_value) ? new Date(current_value.getTime() + d) : current_value + d; - } - } else if (Array.isArray(current_value)) { - // @ts-ignore - return current_value.map((_, i) => - tick_spring(ctx, last_value[i], current_value[i], target_value[i]) - ); - } else if (typeof current_value === 'object') { - const next_value = {}; - for (const k in current_value) { - // @ts-ignore - next_value[k] = tick_spring(ctx, last_value[k], current_value[k], target_value[k]); - } - // @ts-ignore - return next_value; - } else { - throw new Error(`Cannot spring ${typeof current_value} values`); - } -} - -/** - * The spring function in Svelte creates a store whose value is animated, with a motion that simulates the behavior of a spring. This means when the value changes, instead of transitioning at a steady rate, it "bounces" like a spring would, depending on the physics parameters provided. This adds a level of realism to the transitions and can enhance the user experience. - * - * https://svelte.dev/docs/svelte-motion#spring - * @template [T=any] - * @param {T} [value] - * @param {import('./private.js').SpringOpts} [opts] - * @returns {import('./public.js').Spring} - */ -function spring(value, opts = {}) { - const store = writable(value); - const { stiffness = 0.15, damping = 0.8, precision = 0.01 } = opts; - /** @type {number} */ - let last_time; - /** @type {import('../internal/private.js').Task} */ - let task; - /** @type {object} */ - let current_token; - /** @type {T} */ - let last_value = value; - /** @type {T} */ - let target_value = value; - let inv_mass = 1; - let inv_mass_recovery_rate = 0; - let cancel_task = false; - /** - * @param {T} new_value - * @param {import('./private.js').SpringUpdateOpts} opts - * @returns {Promise} - */ - function set(new_value, opts = {}) { - target_value = new_value; - const token = (current_token = {}); - if (value == null || opts.hard || (spring.stiffness >= 1 && spring.damping >= 1)) { - cancel_task = true; // cancel any running animation - last_time = now(); - last_value = new_value; - store.set((value = target_value)); - return Promise.resolve(); - } else if (opts.soft) { - const rate = opts.soft === true ? 
0.5 : +opts.soft; - inv_mass_recovery_rate = 1 / (rate * 60); - inv_mass = 0; // infinite mass, unaffected by spring forces - } - if (!task) { - last_time = now(); - cancel_task = false; - task = loop((now) => { - if (cancel_task) { - cancel_task = false; - task = null; - return false; - } - inv_mass = Math.min(inv_mass + inv_mass_recovery_rate, 1); - const ctx = { - inv_mass, - opts: spring, - settled: true, - dt: ((now - last_time) * 60) / 1000 - }; - const next_value = tick_spring(ctx, last_value, value, target_value); - last_time = now; - last_value = value; - store.set((value = next_value)); - if (ctx.settled) { - task = null; - } - return !ctx.settled; - }); - } - return new Promise((fulfil) => { - task.promise.then(() => { - if (token === current_token) fulfil(); - }); - }); - } - /** @type {import('./public.js').Spring} */ - const spring = { - set, - update: (fn, opts) => set(fn(target_value, value), opts), - subscribe: store.subscribe, - stiffness, - damping, - precision - }; - return spring; -} - -/** @returns {(t: any) => any} */ -function get_interpolator(a, b) { - if (a === b || a !== a) return () => a; - const type = typeof a; - if (type !== typeof b || Array.isArray(a) !== Array.isArray(b)) { - throw new Error('Cannot interpolate values of different type'); - } - if (Array.isArray(a)) { - const arr = b.map((bi, i) => { - return get_interpolator(a[i], bi); - }); - return (t) => arr.map((fn) => fn(t)); - } - if (type === 'object') { - if (!a || !b) throw new Error('Object cannot be null'); - if (is_date(a) && is_date(b)) { - a = a.getTime(); - b = b.getTime(); - const delta = b - a; - return (t) => new Date(a + t * delta); - } - const keys = Object.keys(b); - const interpolators = {}; - keys.forEach((key) => { - interpolators[key] = get_interpolator(a[key], b[key]); - }); - return (t) => { - const result = {}; - keys.forEach((key) => { - result[key] = interpolators[key](t); - }); - return result; - }; - } - if (type === 'number') { - const delta = b - a; - return (t) => a + t * delta; - } - throw new Error(`Cannot interpolate ${type} values`); -} - -/** - * A tweened store in Svelte is a special type of store that provides smooth transitions between state values over time. 
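A minimal usage sketch of `tweened` (illustrative only: the duration, easing, and target value are arbitrary, and the usual `svelte/motion` and `svelte/easing` entry points are assumed):

```ts
import { tweened } from 'svelte/motion';
import { cubicOut } from 'svelte/easing';

const progress = tweened(0, { duration: 400, easing: cubicOut });

// Intermediate values are interpolated over ~400ms instead of jumping to 1.
progress.subscribe((value) => console.log(value.toFixed(2)));

// set() returns a promise that resolves once the tween completes.
progress.set(1).then(() => console.log('done'));
```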
- * - * https://svelte.dev/docs/svelte-motion#tweened - * @template T - * @param {T} [value] - * @param {import('./private.js').TweenedOptions} [defaults] - * @returns {import('./public.js').Tweened} - */ -function tweened(value, defaults = {}) { - const store = writable(value); - /** @type {import('../internal/private.js').Task} */ - let task; - let target_value = value; - /** - * @param {T} new_value - * @param {import('./private.js').TweenedOptions} [opts] - */ - function set(new_value, opts) { - if (value == null) { - store.set((value = new_value)); - return Promise.resolve(); - } - target_value = new_value; - let previous_task = task; - let started = false; - let { - delay = 0, - duration = 400, - easing = identity, - interpolate = get_interpolator - } = assign(assign({}, defaults), opts); - if (duration === 0) { - if (previous_task) { - previous_task.abort(); - previous_task = null; - } - store.set((value = target_value)); - return Promise.resolve(); - } - const start = now() + delay; - let fn; - task = loop((now) => { - if (now < start) return true; - if (!started) { - fn = interpolate(value, new_value); - if (typeof duration === 'function') duration = duration(value, new_value); - started = true; - } - if (previous_task) { - previous_task.abort(); - previous_task = null; - } - const elapsed = now - start; - if (elapsed > /** @type {number} */ (duration)) { - store.set((value = new_value)); - return false; - } - // @ts-ignore - store.set((value = fn(easing(elapsed / duration)))); - return true; - }); - return task.promise; - } - return { - set, - update: (fn, opts) => set(fn(target_value, value), opts), - subscribe: store.subscribe - }; -} - -/** - * The flip function calculates the start and end position of an element and animates between them, translating the x and y values. - * `flip` stands for [First, Last, Invert, Play](https://aerotwist.com/blog/flip-your-animations/). - * - * https://svelte.dev/docs/svelte-animate#flip - * @param {Element} node - * @param {{ from: DOMRect; to: DOMRect }} fromTo - * @param {import('./public.js').FlipParams} params - * @returns {import('./public.js').AnimationConfig} - */ -function flip(node, { from, to }, params = {}) { - const style = getComputedStyle(node); - const transform = style.transform === 'none' ? '' : style.transform; - const [ox, oy] = style.transformOrigin.split(' ').map(parseFloat); - const dx = from.left + (from.width * ox) / to.width - (to.left + ox); - const dy = from.top + (from.height * oy) / to.height - (to.top + oy); - const { delay = 0, duration = (d) => Math.sqrt(d) * 120, easing = cubicOut } = params; - return { - delay, - duration: is_function(duration) ? 
duration(Math.sqrt(dx * dx + dy * dy)) : duration, - easing, - css: (t, u) => { - const x = u * dx; - const y = u * dy; - const sx = t + (u * from.width) / to.width; - const sy = t + (u * from.height) / to.height; - return `transform: ${transform} translate(${x}px, ${y}px) scale(${sx}, ${sy});`; - } - }; -} - -export { backIn, backInOut, backOut, blur, bounceIn, bounceInOut, bounceOut, circIn, circInOut, circOut, crossfade, cubicIn, cubicInOut, cubicOut, derived, draw, elasticIn, elasticInOut, elasticOut, expoIn, expoInOut, expoOut, fade, flip, fly, get_store_value as get, identity as linear, quadIn, quadInOut, quadOut, quartIn, quartInOut, quartOut, quintIn, quintInOut, quintOut, readable, readonly, scale, sineIn, sineInOut, sineOut, slide, spring, tweened, writable }; diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/einsumfunc.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/einsumfunc.py deleted file mode 100644 index 429764e67eccc7855d363da20d432fdb45e66971..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/einsumfunc.py +++ /dev/null @@ -1,36 +0,0 @@ -from __future__ import annotations - -from typing import Any - -import numpy as np - -AR_LIKE_b = [True, True, True] -AR_LIKE_u = [np.uint32(1), np.uint32(2), np.uint32(3)] -AR_LIKE_i = [1, 2, 3] -AR_LIKE_f = [1.0, 2.0, 3.0] -AR_LIKE_c = [1j, 2j, 3j] -AR_LIKE_U = ["1", "2", "3"] - -OUT_f: np.ndarray[Any, np.dtype[np.float64]] = np.empty(3, dtype=np.float64) -OUT_c: np.ndarray[Any, np.dtype[np.complex128]] = np.empty(3, dtype=np.complex128) - -np.einsum("i,i->i", AR_LIKE_b, AR_LIKE_b) -np.einsum("i,i->i", AR_LIKE_u, AR_LIKE_u) -np.einsum("i,i->i", AR_LIKE_i, AR_LIKE_i) -np.einsum("i,i->i", AR_LIKE_f, AR_LIKE_f) -np.einsum("i,i->i", AR_LIKE_c, AR_LIKE_c) -np.einsum("i,i->i", AR_LIKE_b, AR_LIKE_i) -np.einsum("i,i,i,i->i", AR_LIKE_b, AR_LIKE_u, AR_LIKE_i, AR_LIKE_c) - -np.einsum("i,i->i", AR_LIKE_f, AR_LIKE_f, dtype="c16") -np.einsum("i,i->i", AR_LIKE_U, AR_LIKE_U, dtype=bool, casting="unsafe") -np.einsum("i,i->i", AR_LIKE_f, AR_LIKE_f, out=OUT_c) -np.einsum("i,i->i", AR_LIKE_U, AR_LIKE_U, dtype=int, casting="unsafe", out=OUT_f) - -np.einsum_path("i,i->i", AR_LIKE_b, AR_LIKE_b) -np.einsum_path("i,i->i", AR_LIKE_u, AR_LIKE_u) -np.einsum_path("i,i->i", AR_LIKE_i, AR_LIKE_i) -np.einsum_path("i,i->i", AR_LIKE_f, AR_LIKE_f) -np.einsum_path("i,i->i", AR_LIKE_c, AR_LIKE_c) -np.einsum_path("i,i->i", AR_LIKE_b, AR_LIKE_i) -np.einsum_path("i,i,i,i->i", AR_LIKE_b, AR_LIKE_u, AR_LIKE_i, AR_LIKE_c) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/frame.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/frame.py deleted file mode 100644 index c109070ce461d5e0e0bb33a0a4fdf256dbe4241f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/frame.py +++ /dev/null @@ -1,12314 +0,0 @@ -""" -DataFrame ---------- -An efficient 2D container for potentially mixed-type time series or other -labeled data series. 
- -Similar to its R counterpart, data.frame, except providing automatic data -alignment and a host of useful data manipulation methods having to do with the -labeling information -""" -from __future__ import annotations - -import collections -from collections import abc -from collections.abc import ( - Hashable, - Iterable, - Iterator, - Mapping, - Sequence, -) -import functools -from inspect import signature -from io import StringIO -import itertools -import operator -import sys -from textwrap import dedent -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Literal, - cast, - overload, -) -import warnings - -import numpy as np -from numpy import ma - -from pandas._config import ( - get_option, - using_copy_on_write, -) - -from pandas._libs import ( - algos as libalgos, - lib, - properties, -) -from pandas._libs.hashtable import duplicated -from pandas._libs.lib import is_range_indexer -from pandas.compat import PYPY -from pandas.compat._constants import REF_COUNT -from pandas.compat._optional import import_optional_dependency -from pandas.compat.numpy import function as nv -from pandas.errors import ( - ChainedAssignmentError, - InvalidIndexError, - _chained_assignment_method_msg, - _chained_assignment_msg, -) -from pandas.util._decorators import ( - Appender, - Substitution, - doc, -) -from pandas.util._exceptions import find_stack_level -from pandas.util._validators import ( - validate_ascending, - validate_bool_kwarg, - validate_percentile, -) - -from pandas.core.dtypes.cast import ( - LossySetitemError, - can_hold_element, - construct_1d_arraylike_from_scalar, - construct_2d_arraylike_from_scalar, - find_common_type, - infer_dtype_from_scalar, - invalidate_string_dtypes, - maybe_box_native, - maybe_downcast_to_dtype, -) -from pandas.core.dtypes.common import ( - infer_dtype_from_object, - is_1d_only_ea_dtype, - is_array_like, - is_bool_dtype, - is_dataclass, - is_dict_like, - is_float, - is_float_dtype, - is_hashable, - is_integer, - is_integer_dtype, - is_iterator, - is_list_like, - is_scalar, - is_sequence, - needs_i8_conversion, - pandas_dtype, -) -from pandas.core.dtypes.concat import concat_compat -from pandas.core.dtypes.dtypes import ( - ArrowDtype, - BaseMaskedDtype, - ExtensionDtype, -) -from pandas.core.dtypes.missing import ( - isna, - notna, -) - -from pandas.core import ( - algorithms, - common as com, - nanops, - ops, - roperator, -) -from pandas.core.accessor import CachedAccessor -from pandas.core.apply import reconstruct_and_relabel_result -from pandas.core.array_algos.take import take_2d_multi -from pandas.core.arraylike import OpsMixin -from pandas.core.arrays import ( - BaseMaskedArray, - DatetimeArray, - ExtensionArray, - PeriodArray, - TimedeltaArray, -) -from pandas.core.arrays.sparse import SparseFrameAccessor -from pandas.core.construction import ( - ensure_wrapped_if_datetimelike, - extract_array, - sanitize_array, - sanitize_masked_array, -) -from pandas.core.generic import ( - NDFrame, - make_doc, -) -from pandas.core.indexers import check_key_length -from pandas.core.indexes.api import ( - DatetimeIndex, - Index, - PeriodIndex, - default_index, - ensure_index, - ensure_index_from_sequences, -) -from pandas.core.indexes.multi import ( - MultiIndex, - maybe_droplevels, -) -from pandas.core.indexing import ( - check_bool_indexer, - check_dict_or_set_indexers, -) -from pandas.core.internals import ( - ArrayManager, - BlockManager, -) -from pandas.core.internals.construction import ( - arrays_to_mgr, - dataclasses_to_dicts, - dict_to_mgr, - mgr_to_mgr, 
- ndarray_to_mgr, - nested_data_to_arrays, - rec_array_to_mgr, - reorder_arrays, - to_arrays, - treat_as_nested, -) -from pandas.core.methods import selectn -from pandas.core.reshape.melt import melt -from pandas.core.series import Series -from pandas.core.shared_docs import _shared_docs -from pandas.core.sorting import ( - get_group_index, - lexsort_indexer, - nargsort, -) - -from pandas.io.common import get_handle -from pandas.io.formats import ( - console, - format as fmt, -) -from pandas.io.formats.info import ( - INFO_DOCSTRING, - DataFrameInfo, - frame_sub_kwargs, -) -import pandas.plotting - -if TYPE_CHECKING: - import datetime - - from pandas._libs.internals import BlockValuesRefs - from pandas._typing import ( - AggFuncType, - AnyAll, - AnyArrayLike, - ArrayLike, - Axes, - Axis, - AxisInt, - ColspaceArgType, - CompressionOptions, - CorrelationMethod, - DropKeep, - Dtype, - DtypeObj, - FilePath, - FloatFormatType, - FormattersType, - Frequency, - FromDictOrient, - IgnoreRaise, - IndexKeyFunc, - IndexLabel, - JoinValidate, - Level, - MergeHow, - MergeValidate, - NaAction, - NaPosition, - NsmallestNlargestKeep, - PythonFuncType, - QuantileInterpolation, - ReadBuffer, - ReindexMethod, - Renamer, - Scalar, - Self, - SortKind, - StorageOptions, - Suffixes, - ToGbqIfexist, - ToStataByteorder, - ToTimestampHow, - UpdateJoin, - ValueKeyFunc, - WriteBuffer, - XMLParsers, - npt, - ) - - from pandas.core.groupby.generic import DataFrameGroupBy - from pandas.core.interchange.dataframe_protocol import DataFrame as DataFrameXchg - from pandas.core.internals import SingleDataManager - - from pandas.io.formats.style import Styler - -# --------------------------------------------------------------------- -# Docstring templates - -_shared_doc_kwargs = { - "axes": "index, columns", - "klass": "DataFrame", - "axes_single_arg": "{0 or 'index', 1 or 'columns'}", - "axis": """axis : {0 or 'index', 1 or 'columns'}, default 0 - If 0 or 'index': apply function to each column. - If 1 or 'columns': apply function to each row.""", - "inplace": """ - inplace : bool, default False - Whether to modify the DataFrame rather than creating a new one.""", - "optional_by": """ -by : str or list of str - Name or list of names to sort by. - - - if `axis` is 0 or `'index'` then `by` may contain index - levels and/or column labels. - - if `axis` is 1 or `'columns'` then `by` may contain column - levels and/or index labels.""", - "optional_reindex": """ -labels : array-like, optional - New labels / index to conform the axis specified by 'axis' to. -index : array-like, optional - New labels for the index. Preferably an Index object to avoid - duplicating data. -columns : array-like, optional - New labels for the columns. Preferably an Index object to avoid - duplicating data. -axis : int or str, optional - Axis to target. Can be either the axis name ('index', 'columns') - or number (0, 1).""", -} - -_merge_doc = """ -Merge DataFrame or named Series objects with a database-style join. - -A named Series object is treated as a DataFrame with a single named column. - -The join is done on columns or indexes. If joining columns on -columns, the DataFrame indexes *will be ignored*. Otherwise if joining indexes -on indexes or indexes on a column or columns, the index will be passed on. -When performing a cross merge, no column specifications to merge on are -allowed. - -.. warning:: - - If both key columns contain rows where the key is a null value, those - rows will be matched against each other. 
This is different from usual SQL - join behaviour and can lead to unexpected results. - -Parameters -----------%s -right : DataFrame or named Series - Object to merge with. -how : {'left', 'right', 'outer', 'inner', 'cross'}, default 'inner' - Type of merge to be performed. - - * left: use only keys from left frame, similar to a SQL left outer join; - preserve key order. - * right: use only keys from right frame, similar to a SQL right outer join; - preserve key order. - * outer: use union of keys from both frames, similar to a SQL full outer - join; sort keys lexicographically. - * inner: use intersection of keys from both frames, similar to a SQL inner - join; preserve the order of the left keys. - * cross: creates the cartesian product from both frames, preserves the order - of the left keys. - - .. versionadded:: 1.2.0 - -on : label or list - Column or index level names to join on. These must be found in both - DataFrames. If `on` is None and not merging on indexes then this defaults - to the intersection of the columns in both DataFrames. -left_on : label or list, or array-like - Column or index level names to join on in the left DataFrame. Can also - be an array or list of arrays of the length of the left DataFrame. - These arrays are treated as if they are columns. -right_on : label or list, or array-like - Column or index level names to join on in the right DataFrame. Can also - be an array or list of arrays of the length of the right DataFrame. - These arrays are treated as if they are columns. -left_index : bool, default False - Use the index from the left DataFrame as the join key(s). If it is a - MultiIndex, the number of keys in the other DataFrame (either the index - or a number of columns) must match the number of levels. -right_index : bool, default False - Use the index from the right DataFrame as the join key. Same caveats as - left_index. -sort : bool, default False - Sort the join keys lexicographically in the result DataFrame. If False, - the order of the join keys depends on the join type (how keyword). -suffixes : list-like, default is ("_x", "_y") - A length-2 sequence where each element is optionally a string - indicating the suffix to add to overlapping column names in - `left` and `right` respectively. Pass a value of `None` instead - of a string to indicate that the column name from `left` or - `right` should be left as-is, with no suffix. At least one of the - values must not be None. -copy : bool, default True - If False, avoid copy if possible. -indicator : bool or str, default False - If True, adds a column to the output DataFrame called "_merge" with - information on the source of each row. The column can be given a different - name by providing a string argument. The column will have a Categorical - type with the value of "left_only" for observations whose merge key only - appears in the left DataFrame, "right_only" for observations - whose merge key only appears in the right DataFrame, and "both" - if the observation's merge key is found in both DataFrames. - -validate : str, optional - If specified, checks if merge is of specified type. - - * "one_to_one" or "1:1": check if merge keys are unique in both - left and right datasets. - * "one_to_many" or "1:m": check if merge keys are unique in left - dataset. - * "many_to_one" or "m:1": check if merge keys are unique in right - dataset. - * "many_to_many" or "m:m": allowed, but does not result in checks. - -Returns -------- -DataFrame - A DataFrame of the two merged objects. 
- -See Also --------- -merge_ordered : Merge with optional filling/interpolation. -merge_asof : Merge on nearest keys. -DataFrame.join : Similar method using indices. - -Examples --------- ->>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'], -... 'value': [1, 2, 3, 5]}) ->>> df2 = pd.DataFrame({'rkey': ['foo', 'bar', 'baz', 'foo'], -... 'value': [5, 6, 7, 8]}) ->>> df1 - lkey value -0 foo 1 -1 bar 2 -2 baz 3 -3 foo 5 ->>> df2 - rkey value -0 foo 5 -1 bar 6 -2 baz 7 -3 foo 8 - -Merge df1 and df2 on the lkey and rkey columns. The value columns have -the default suffixes, _x and _y, appended. - ->>> df1.merge(df2, left_on='lkey', right_on='rkey') - lkey value_x rkey value_y -0 foo 1 foo 5 -1 foo 1 foo 8 -2 foo 5 foo 5 -3 foo 5 foo 8 -4 bar 2 bar 6 -5 baz 3 baz 7 - -Merge DataFrames df1 and df2 with specified left and right suffixes -appended to any overlapping columns. - ->>> df1.merge(df2, left_on='lkey', right_on='rkey', -... suffixes=('_left', '_right')) - lkey value_left rkey value_right -0 foo 1 foo 5 -1 foo 1 foo 8 -2 foo 5 foo 5 -3 foo 5 foo 8 -4 bar 2 bar 6 -5 baz 3 baz 7 - -Merge DataFrames df1 and df2, but raise an exception if the DataFrames have -any overlapping columns. - ->>> df1.merge(df2, left_on='lkey', right_on='rkey', suffixes=(False, False)) -Traceback (most recent call last): -... -ValueError: columns overlap but no suffix specified: - Index(['value'], dtype='object') - ->>> df1 = pd.DataFrame({'a': ['foo', 'bar'], 'b': [1, 2]}) ->>> df2 = pd.DataFrame({'a': ['foo', 'baz'], 'c': [3, 4]}) ->>> df1 - a b -0 foo 1 -1 bar 2 ->>> df2 - a c -0 foo 3 -1 baz 4 - ->>> df1.merge(df2, how='inner', on='a') - a b c -0 foo 1 3 - ->>> df1.merge(df2, how='left', on='a') - a b c -0 foo 1 3.0 -1 bar 2 NaN - ->>> df1 = pd.DataFrame({'left': ['foo', 'bar']}) ->>> df2 = pd.DataFrame({'right': [7, 8]}) ->>> df1 - left -0 foo -1 bar ->>> df2 - right -0 7 -1 8 - ->>> df1.merge(df2, how='cross') - left right -0 foo 7 -1 foo 8 -2 bar 7 -3 bar 8 -""" - - -# ----------------------------------------------------------------------- -# DataFrame class - - -class DataFrame(NDFrame, OpsMixin): - """ - Two-dimensional, size-mutable, potentially heterogeneous tabular data. - - Data structure also contains labeled axes (rows and columns). - Arithmetic operations align on both row and column labels. Can be - thought of as a dict-like container for Series objects. The primary - pandas data structure. - - Parameters - ---------- - data : ndarray (structured or homogeneous), Iterable, dict, or DataFrame - Dict can contain Series, arrays, constants, dataclass or list-like objects. If - data is a dict, column order follows insertion-order. If a dict contains Series - which have an index defined, it is aligned by its index. This alignment also - occurs if data is a Series or a DataFrame itself. Alignment is done on - Series/DataFrame inputs. - - If data is a list of dicts, column order follows insertion-order. - - index : Index or array-like - Index to use for resulting frame. Will default to RangeIndex if - no indexing information part of input data and no index provided. - columns : Index or array-like - Column labels to use for resulting frame when data does not have them, - defaulting to RangeIndex(0, 1, 2, ..., n). If data contains column labels, - will perform column selection instead. - dtype : dtype, default None - Data type to force. Only a single dtype is allowed. If None, infer. - copy : bool or None, default None - Copy data from inputs. 
- For dict data, the default of None behaves like ``copy=True``. For DataFrame - or 2d ndarray input, the default of None behaves like ``copy=False``. - If data is a dict containing one or more Series (possibly of different dtypes), - ``copy=False`` will ensure that these inputs are not copied. - - .. versionchanged:: 1.3.0 - - See Also - -------- - DataFrame.from_records : Constructor from tuples, also record arrays. - DataFrame.from_dict : From dicts of Series, arrays, or dicts. - read_csv : Read a comma-separated values (csv) file into DataFrame. - read_table : Read general delimited file into DataFrame. - read_clipboard : Read text from clipboard into DataFrame. - - Notes - ----- - Please reference the :ref:`User Guide ` for more information. - - Examples - -------- - Constructing DataFrame from a dictionary. - - >>> d = {'col1': [1, 2], 'col2': [3, 4]} - >>> df = pd.DataFrame(data=d) - >>> df - col1 col2 - 0 1 3 - 1 2 4 - - Notice that the inferred dtype is int64. - - >>> df.dtypes - col1 int64 - col2 int64 - dtype: object - - To enforce a single dtype: - - >>> df = pd.DataFrame(data=d, dtype=np.int8) - >>> df.dtypes - col1 int8 - col2 int8 - dtype: object - - Constructing DataFrame from a dictionary including Series: - - >>> d = {'col1': [0, 1, 2, 3], 'col2': pd.Series([2, 3], index=[2, 3])} - >>> pd.DataFrame(data=d, index=[0, 1, 2, 3]) - col1 col2 - 0 0 NaN - 1 1 NaN - 2 2 2.0 - 3 3 3.0 - - Constructing DataFrame from numpy ndarray: - - >>> df2 = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), - ... columns=['a', 'b', 'c']) - >>> df2 - a b c - 0 1 2 3 - 1 4 5 6 - 2 7 8 9 - - Constructing DataFrame from a numpy ndarray that has labeled columns: - - >>> data = np.array([(1, 2, 3), (4, 5, 6), (7, 8, 9)], - ... dtype=[("a", "i4"), ("b", "i4"), ("c", "i4")]) - >>> df3 = pd.DataFrame(data, columns=['c', 'a']) - ... - >>> df3 - c a - 0 3 1 - 1 6 4 - 2 9 7 - - Constructing DataFrame from dataclass: - - >>> from dataclasses import make_dataclass - >>> Point = make_dataclass("Point", [("x", int), ("y", int)]) - >>> pd.DataFrame([Point(0, 0), Point(0, 3), Point(2, 3)]) - x y - 0 0 0 - 1 0 3 - 2 2 3 - - Constructing DataFrame from Series/DataFrame: - - >>> ser = pd.Series([1, 2, 3], index=["a", "b", "c"]) - >>> df = pd.DataFrame(data=ser, index=["a", "c"]) - >>> df - 0 - a 1 - c 3 - - >>> df1 = pd.DataFrame([1, 2, 3], index=["a", "b", "c"], columns=["x"]) - >>> df2 = pd.DataFrame(data=df1, index=["a", "c"]) - >>> df2 - x - a 1 - c 3 - """ - - _internal_names_set = {"columns", "index"} | NDFrame._internal_names_set - _typ = "dataframe" - _HANDLED_TYPES = (Series, Index, ExtensionArray, np.ndarray) - _accessors: set[str] = {"sparse"} - _hidden_attrs: frozenset[str] = NDFrame._hidden_attrs | frozenset([]) - _mgr: BlockManager | ArrayManager - - # similar to __array_priority__, positions DataFrame before Series, Index, - # and ExtensionArray. Should NOT be overridden by subclasses. 
- __pandas_priority__ = 4000 - - @property - def _constructor(self) -> Callable[..., DataFrame]: - return DataFrame - - def _constructor_from_mgr(self, mgr, axes): - if self._constructor is DataFrame: - # we are pandas.DataFrame (or a subclass that doesn't override _constructor) - return self._from_mgr(mgr, axes=axes) - else: - assert axes is mgr.axes - return self._constructor(mgr) - - _constructor_sliced: Callable[..., Series] = Series - - def _sliced_from_mgr(self, mgr, axes) -> Series: - return Series._from_mgr(mgr, axes) - - def _constructor_sliced_from_mgr(self, mgr, axes): - if self._constructor_sliced is Series: - ser = self._sliced_from_mgr(mgr, axes) - ser._name = None # caller is responsible for setting real name - return ser - assert axes is mgr.axes - return self._constructor_sliced(mgr) - - # ---------------------------------------------------------------------- - # Constructors - - def __init__( - self, - data=None, - index: Axes | None = None, - columns: Axes | None = None, - dtype: Dtype | None = None, - copy: bool | None = None, - ) -> None: - if dtype is not None: - dtype = self._validate_dtype(dtype) - - if isinstance(data, DataFrame): - data = data._mgr - if not copy: - # if not copying data, ensure to still return a shallow copy - # to avoid the result sharing the same Manager - data = data.copy(deep=False) - - if isinstance(data, (BlockManager, ArrayManager)): - if using_copy_on_write(): - data = data.copy(deep=False) - # first check if a Manager is passed without any other arguments - # -> use fastpath (without checking Manager type) - if index is None and columns is None and dtype is None and not copy: - # GH#33357 fastpath - NDFrame.__init__(self, data) - return - - manager = get_option("mode.data_manager") - - # GH47215 - if isinstance(index, set): - raise ValueError("index cannot be a set") - if isinstance(columns, set): - raise ValueError("columns cannot be a set") - - if copy is None: - if isinstance(data, dict): - # retain pre-GH#38939 default behavior - copy = True - elif ( - manager == "array" - and isinstance(data, (np.ndarray, ExtensionArray)) - and data.ndim == 2 - ): - # INFO(ArrayManager) by default copy the 2D input array to get - # contiguous 1D arrays - copy = True - elif using_copy_on_write() and not isinstance( - data, (Index, DataFrame, Series) - ): - copy = True - else: - copy = False - - if data is None: - index = index if index is not None else default_index(0) - columns = columns if columns is not None else default_index(0) - dtype = dtype if dtype is not None else pandas_dtype(object) - data = [] - - if isinstance(data, (BlockManager, ArrayManager)): - mgr = self._init_mgr( - data, axes={"index": index, "columns": columns}, dtype=dtype, copy=copy - ) - - elif isinstance(data, dict): - # GH#38939 de facto copy defaults to False only in non-dict cases - mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager) - elif isinstance(data, ma.MaskedArray): - from numpy.ma import mrecords - - # masked recarray - if isinstance(data, mrecords.MaskedRecords): - raise TypeError( - "MaskedRecords are not supported. Pass " - "{name: data[name] for name in data.dtype.names} " - "instead" - ) - - # a masked array - data = sanitize_masked_array(data) - mgr = ndarray_to_mgr( - data, - index, - columns, - dtype=dtype, - copy=copy, - typ=manager, - ) - - elif isinstance(data, (np.ndarray, Series, Index, ExtensionArray)): - if data.dtype.names: - # i.e. 
numpy structured array - data = cast(np.ndarray, data) - mgr = rec_array_to_mgr( - data, - index, - columns, - dtype, - copy, - typ=manager, - ) - elif getattr(data, "name", None) is not None: - # i.e. Series/Index with non-None name - _copy = copy if using_copy_on_write() else True - mgr = dict_to_mgr( - # error: Item "ndarray" of "Union[ndarray, Series, Index]" has no - # attribute "name" - {data.name: data}, # type: ignore[union-attr] - index, - columns, - dtype=dtype, - typ=manager, - copy=_copy, - ) - else: - mgr = ndarray_to_mgr( - data, - index, - columns, - dtype=dtype, - copy=copy, - typ=manager, - ) - - # For data is list-like, or Iterable (will consume into list) - elif is_list_like(data): - if not isinstance(data, abc.Sequence): - if hasattr(data, "__array__"): - # GH#44616 big perf improvement for e.g. pytorch tensor - data = np.asarray(data) - else: - data = list(data) - if len(data) > 0: - if is_dataclass(data[0]): - data = dataclasses_to_dicts(data) - if not isinstance(data, np.ndarray) and treat_as_nested(data): - # exclude ndarray as we may have cast it a few lines above - if columns is not None: - columns = ensure_index(columns) - arrays, columns, index = nested_data_to_arrays( - # error: Argument 3 to "nested_data_to_arrays" has incompatible - # type "Optional[Collection[Any]]"; expected "Optional[Index]" - data, - columns, - index, # type: ignore[arg-type] - dtype, - ) - mgr = arrays_to_mgr( - arrays, - columns, - index, - dtype=dtype, - typ=manager, - ) - else: - mgr = ndarray_to_mgr( - data, - index, - columns, - dtype=dtype, - copy=copy, - typ=manager, - ) - else: - mgr = dict_to_mgr( - {}, - index, - columns if columns is not None else default_index(0), - dtype=dtype, - typ=manager, - ) - # For data is scalar - else: - if index is None or columns is None: - raise ValueError("DataFrame constructor not properly called!") - - index = ensure_index(index) - columns = ensure_index(columns) - - if not dtype: - dtype, _ = infer_dtype_from_scalar(data) - - # For data is a scalar extension dtype - if isinstance(dtype, ExtensionDtype): - # TODO(EA2D): special case not needed with 2D EAs - - values = [ - construct_1d_arraylike_from_scalar(data, len(index), dtype) - for _ in range(len(columns)) - ] - mgr = arrays_to_mgr(values, columns, index, dtype=None, typ=manager) - else: - arr2d = construct_2d_arraylike_from_scalar( - data, - len(index), - len(columns), - dtype, - copy, - ) - - mgr = ndarray_to_mgr( - arr2d, - index, - columns, - dtype=arr2d.dtype, - copy=False, - typ=manager, - ) - - # ensure correct Manager type according to settings - mgr = mgr_to_mgr(mgr, typ=manager) - - NDFrame.__init__(self, mgr) - - # ---------------------------------------------------------------------- - - def __dataframe__( - self, nan_as_null: bool = False, allow_copy: bool = True - ) -> DataFrameXchg: - """ - Return the dataframe interchange object implementing the interchange protocol. - - Parameters - ---------- - nan_as_null : bool, default False - Whether to tell the DataFrame to overwrite null values in the data - with ``NaN`` (or ``NaT``). - allow_copy : bool, default True - Whether to allow memory copying when exporting. If set to False - it would cause non-zero-copy exports to fail. - - Returns - ------- - DataFrame interchange object - The object which consuming library can use to ingress the dataframe. 
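A minimal sketch of inspecting the returned interchange object; ``num_columns`` and ``num_rows`` are methods defined by the interchange protocol, and the outputs shown assume the pandas implementation.

>>> import pandas as pd
>>> df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
>>> xchg = df.__dataframe__()
>>> xchg.num_columns(), xchg.num_rows()
(2, 2)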
- - Notes - ----- - Details on the interchange protocol: - https://data-apis.org/dataframe-protocol/latest/index.html - - `nan_as_null` currently has no effect; once support for nullable extension - dtypes is added, this value should be propagated to columns. - - Examples - -------- - >>> df_not_necessarily_pandas = pd.DataFrame({'A': [1, 2], 'B': [3, 4]}) - >>> interchange_object = df_not_necessarily_pandas.__dataframe__() - >>> interchange_object.column_names() - Index(['A', 'B'], dtype='object') - >>> df_pandas = (pd.api.interchange.from_dataframe - ... (interchange_object.select_columns_by_name(['A']))) - >>> df_pandas - A - 0 1 - 1 2 - - These methods (``column_names``, ``select_columns_by_name``) should work - for any dataframe library which implements the interchange protocol. - """ - - from pandas.core.interchange.dataframe import PandasDataFrameXchg - - return PandasDataFrameXchg(self, nan_as_null, allow_copy) - - def __dataframe_consortium_standard__( - self, *, api_version: str | None = None - ) -> Any: - """ - Provide entry point to the Consortium DataFrame Standard API. - - This is developed and maintained outside of pandas. - Please report any issues to https://github.com/data-apis/dataframe-api-compat. - """ - dataframe_api_compat = import_optional_dependency("dataframe_api_compat") - convert_to_standard_compliant_dataframe = ( - dataframe_api_compat.pandas_standard.convert_to_standard_compliant_dataframe - ) - return convert_to_standard_compliant_dataframe(self, api_version=api_version) - - # ---------------------------------------------------------------------- - - @property - def axes(self) -> list[Index]: - """ - Return a list representing the axes of the DataFrame. - - It has the row axis labels and column axis labels as the only members. - They are returned in that order. - - Examples - -------- - >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]}) - >>> df.axes - [RangeIndex(start=0, stop=2, step=1), Index(['col1', 'col2'], - dtype='object')] - """ - return [self.index, self.columns] - - @property - def shape(self) -> tuple[int, int]: - """ - Return a tuple representing the dimensionality of the DataFrame. - - See Also - -------- - ndarray.shape : Tuple of array dimensions. - - Examples - -------- - >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]}) - >>> df.shape - (2, 2) - - >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4], - ... 'col3': [5, 6]}) - >>> df.shape - (2, 3) - """ - return len(self.index), len(self.columns) - - @property - def _is_homogeneous_type(self) -> bool: - """ - Whether all the columns in a DataFrame have the same type. - - Returns - ------- - bool - - Examples - -------- - >>> DataFrame({"A": [1, 2], "B": [3, 4]})._is_homogeneous_type - True - >>> DataFrame({"A": [1, 2], "B": [3.0, 4.0]})._is_homogeneous_type - False - - Items with the same type but different sizes are considered - different types. - - >>> DataFrame({ - ... "A": np.array([1, 2], dtype=np.int32), - ... "B": np.array([1, 2], dtype=np.int64)})._is_homogeneous_type - False - """ - # The "<" part of "<=" here is for empty DataFrame cases - return len({arr.dtype for arr in self._mgr.arrays}) <= 1 - - @property - def _can_fast_transpose(self) -> bool: - """ - Can we transpose this DataFrame without creating any new array objects. 
- """ - if isinstance(self._mgr, ArrayManager): - return False - blocks = self._mgr.blocks - if len(blocks) != 1: - return False - - dtype = blocks[0].dtype - # TODO(EA2D) special case would be unnecessary with 2D EAs - return not is_1d_only_ea_dtype(dtype) - - @property - def _values(self) -> np.ndarray | DatetimeArray | TimedeltaArray | PeriodArray: - """ - Analogue to ._values that may return a 2D ExtensionArray. - """ - mgr = self._mgr - - if isinstance(mgr, ArrayManager): - if len(mgr.arrays) == 1 and not is_1d_only_ea_dtype(mgr.arrays[0].dtype): - # error: Item "ExtensionArray" of "Union[ndarray, ExtensionArray]" - # has no attribute "reshape" - return mgr.arrays[0].reshape(-1, 1) # type: ignore[union-attr] - return ensure_wrapped_if_datetimelike(self.values) - - blocks = mgr.blocks - if len(blocks) != 1: - return ensure_wrapped_if_datetimelike(self.values) - - arr = blocks[0].values - if arr.ndim == 1: - # non-2D ExtensionArray - return self.values - - # more generally, whatever we allow in NDArrayBackedExtensionBlock - arr = cast("np.ndarray | DatetimeArray | TimedeltaArray | PeriodArray", arr) - return arr.T - - # ---------------------------------------------------------------------- - # Rendering Methods - - def _repr_fits_vertical_(self) -> bool: - """ - Check length against max_rows. - """ - max_rows = get_option("display.max_rows") - return len(self) <= max_rows - - def _repr_fits_horizontal_(self) -> bool: - """ - Check if full repr fits in horizontal boundaries imposed by the display - options width and max_columns. - """ - width, height = console.get_console_size() - max_columns = get_option("display.max_columns") - nb_columns = len(self.columns) - - # exceed max columns - if (max_columns and nb_columns > max_columns) or ( - width and nb_columns > (width // 2) - ): - return False - - # used by repr_html under IPython notebook or scripts ignore terminal - # dims - if width is None or not console.in_interactive_session(): - return True - - if get_option("display.width") is not None or console.in_ipython_frontend(): - # check at least the column row for excessive width - max_rows = 1 - else: - max_rows = get_option("display.max_rows") - - # when auto-detecting, so width=None and not in ipython front end - # check whether repr fits horizontal by actually checking - # the width of the rendered repr - buf = StringIO() - - # only care about the stuff we'll actually print out - # and to_string on entire frame may be expensive - d = self - - if max_rows is not None: # unlimited rows - # min of two, where one may be None - d = d.iloc[: min(max_rows, len(d))] - else: - return True - - d.to_string(buf=buf) - value = buf.getvalue() - repr_width = max(len(line) for line in value.split("\n")) - - return repr_width < width - - def _info_repr(self) -> bool: - """ - True if the repr should show the info view. - """ - info_repr_option = get_option("display.large_repr") == "info" - return info_repr_option and not ( - self._repr_fits_horizontal_() and self._repr_fits_vertical_() - ) - - def __repr__(self) -> str: - """ - Return a string representation for a particular DataFrame. - """ - if self._info_repr(): - buf = StringIO() - self.info(buf=buf) - return buf.getvalue() - - repr_params = fmt.get_dataframe_repr_params() - return self.to_string(**repr_params) - - def _repr_html_(self) -> str | None: - """ - Return a html representation for a particular DataFrame. - - Mainly for IPython notebook. 
- """ - if self._info_repr(): - buf = StringIO() - self.info(buf=buf) - # need to escape the , should be the first line. - val = buf.getvalue().replace("<", r"<", 1) - val = val.replace(">", r">", 1) - return f"
{val}
" - - if get_option("display.notebook_repr_html"): - max_rows = get_option("display.max_rows") - min_rows = get_option("display.min_rows") - max_cols = get_option("display.max_columns") - show_dimensions = get_option("display.show_dimensions") - - formatter = fmt.DataFrameFormatter( - self, - columns=None, - col_space=None, - na_rep="NaN", - formatters=None, - float_format=None, - sparsify=None, - justify=None, - index_names=True, - header=True, - index=True, - bold_rows=True, - escape=True, - max_rows=max_rows, - min_rows=min_rows, - max_cols=max_cols, - show_dimensions=show_dimensions, - decimal=".", - ) - return fmt.DataFrameRenderer(formatter).to_html(notebook=True) - else: - return None - - @overload - def to_string( - self, - buf: None = ..., - columns: Axes | None = ..., - col_space: int | list[int] | dict[Hashable, int] | None = ..., - header: bool | list[str] = ..., - index: bool = ..., - na_rep: str = ..., - formatters: fmt.FormattersType | None = ..., - float_format: fmt.FloatFormatType | None = ..., - sparsify: bool | None = ..., - index_names: bool = ..., - justify: str | None = ..., - max_rows: int | None = ..., - max_cols: int | None = ..., - show_dimensions: bool = ..., - decimal: str = ..., - line_width: int | None = ..., - min_rows: int | None = ..., - max_colwidth: int | None = ..., - encoding: str | None = ..., - ) -> str: - ... - - @overload - def to_string( - self, - buf: FilePath | WriteBuffer[str], - columns: Axes | None = ..., - col_space: int | list[int] | dict[Hashable, int] | None = ..., - header: bool | list[str] = ..., - index: bool = ..., - na_rep: str = ..., - formatters: fmt.FormattersType | None = ..., - float_format: fmt.FloatFormatType | None = ..., - sparsify: bool | None = ..., - index_names: bool = ..., - justify: str | None = ..., - max_rows: int | None = ..., - max_cols: int | None = ..., - show_dimensions: bool = ..., - decimal: str = ..., - line_width: int | None = ..., - min_rows: int | None = ..., - max_colwidth: int | None = ..., - encoding: str | None = ..., - ) -> None: - ... - - @Substitution( - header_type="bool or list of str", - header="Write out the column names. If a list of columns " - "is given, it is assumed to be aliases for the " - "column names", - col_space_type="int, list or dict of int", - col_space="The minimum width of each column. If a list of ints is given " - "every integers corresponds with one column. If a dict is given, the key " - "references the column, while the value defines the space to use.", - ) - @Substitution(shared_params=fmt.common_docstring, returns=fmt.return_docstring) - def to_string( - self, - buf: FilePath | WriteBuffer[str] | None = None, - columns: Axes | None = None, - col_space: int | list[int] | dict[Hashable, int] | None = None, - header: bool | list[str] = True, - index: bool = True, - na_rep: str = "NaN", - formatters: fmt.FormattersType | None = None, - float_format: fmt.FloatFormatType | None = None, - sparsify: bool | None = None, - index_names: bool = True, - justify: str | None = None, - max_rows: int | None = None, - max_cols: int | None = None, - show_dimensions: bool = False, - decimal: str = ".", - line_width: int | None = None, - min_rows: int | None = None, - max_colwidth: int | None = None, - encoding: str | None = None, - ) -> str | None: - """ - Render a DataFrame to a console-friendly tabular output. - %(shared_params)s - line_width : int, optional - Width to wrap a line in characters. 
- min_rows : int, optional - The number of rows to display in the console in a truncated repr - (when number of rows is above `max_rows`). - max_colwidth : int, optional - Max width to truncate each column in characters. By default, no limit. - encoding : str, default "utf-8" - Set character encoding. - %(returns)s - See Also - -------- - to_html : Convert DataFrame to HTML. - - Examples - -------- - >>> d = {'col1': [1, 2, 3], 'col2': [4, 5, 6]} - >>> df = pd.DataFrame(d) - >>> print(df.to_string()) - col1 col2 - 0 1 4 - 1 2 5 - 2 3 6 - """ - from pandas import option_context - - with option_context("display.max_colwidth", max_colwidth): - formatter = fmt.DataFrameFormatter( - self, - columns=columns, - col_space=col_space, - na_rep=na_rep, - formatters=formatters, - float_format=float_format, - sparsify=sparsify, - justify=justify, - index_names=index_names, - header=header, - index=index, - min_rows=min_rows, - max_rows=max_rows, - max_cols=max_cols, - show_dimensions=show_dimensions, - decimal=decimal, - ) - return fmt.DataFrameRenderer(formatter).to_string( - buf=buf, - encoding=encoding, - line_width=line_width, - ) - - # ---------------------------------------------------------------------- - - @property - def style(self) -> Styler: - """ - Returns a Styler object. - - Contains methods for building a styled HTML representation of the DataFrame. - - See Also - -------- - io.formats.style.Styler : Helps style a DataFrame or Series according to the - data with HTML and CSS. - - Examples - -------- - >>> df = pd.DataFrame({'A': [1, 2, 3]}) - >>> df.style # doctest: +SKIP - - Please see - `Table Visualization <../../user_guide/style.ipynb>`_ for more examples. - """ - from pandas.io.formats.style import Styler - - return Styler(self) - - _shared_docs[ - "items" - ] = r""" - Iterate over (column name, Series) pairs. - - Iterates over the DataFrame columns, returning a tuple with - the column name and the content as a Series. - - Yields - ------ - label : object - The column names for the DataFrame being iterated over. - content : Series - The column entries belonging to each label, as a Series. - - See Also - -------- - DataFrame.iterrows : Iterate over DataFrame rows as - (index, Series) pairs. - DataFrame.itertuples : Iterate over DataFrame rows as namedtuples - of the values. - - Examples - -------- - >>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'], - ... 'population': [1864, 22000, 80000]}, - ... index=['panda', 'polar', 'koala']) - >>> df - species population - panda bear 1864 - polar bear 22000 - koala marsupial 80000 - >>> for label, content in df.items(): - ... print(f'label: {label}') - ... print(f'content: {content}', sep='\n') - ... - label: species - content: - panda bear - polar bear - koala marsupial - Name: species, dtype: object - label: population - content: - panda 1864 - polar 22000 - koala 80000 - Name: population, dtype: int64 - """ - - @Appender(_shared_docs["items"]) - def items(self) -> Iterable[tuple[Hashable, Series]]: - if self.columns.is_unique and hasattr(self, "_item_cache"): - for k in self.columns: - yield k, self._get_item_cache(k) - else: - for i, k in enumerate(self.columns): - yield k, self._ixs(i, axis=1) - - def iterrows(self) -> Iterable[tuple[Hashable, Series]]: - """ - Iterate over DataFrame rows as (index, Series) pairs. - - Yields - ------ - index : label or tuple of label - The index of the row. A tuple for a `MultiIndex`. - data : Series - The data of the row as a Series. 
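A small illustration of the yielded index being a tuple when the frame has a ``MultiIndex``; the data here is illustrative only.

>>> import pandas as pd
>>> df = pd.DataFrame(
...     {"num": [1, 2]},
...     index=pd.MultiIndex.from_tuples([("a", "x"), ("a", "y")]),
... )
>>> idx, row = next(df.iterrows())
>>> idx
('a', 'x')
>>> row.tolist()
[1]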
- - See Also - -------- - DataFrame.itertuples : Iterate over DataFrame rows as namedtuples of the values. - DataFrame.items : Iterate over (column name, Series) pairs. - - Notes - ----- - 1. Because ``iterrows`` returns a Series for each row, - it does **not** preserve dtypes across the rows (dtypes are - preserved across columns for DataFrames). - - To preserve dtypes while iterating over the rows, it is better - to use :meth:`itertuples` which returns namedtuples of the values - and which is generally faster than ``iterrows``. - - 2. You should **never modify** something you are iterating over. - This is not guaranteed to work in all cases. Depending on the - data types, the iterator returns a copy and not a view, and writing - to it will have no effect. - - Examples - -------- - - >>> df = pd.DataFrame([[1, 1.5]], columns=['int', 'float']) - >>> row = next(df.iterrows())[1] - >>> row - int 1.0 - float 1.5 - Name: 0, dtype: float64 - >>> print(row['int'].dtype) - float64 - >>> print(df['int'].dtype) - int64 - """ - columns = self.columns - klass = self._constructor_sliced - using_cow = using_copy_on_write() - for k, v in zip(self.index, self.values): - s = klass(v, index=columns, name=k).__finalize__(self) - if using_cow and self._mgr.is_single_block: - s._mgr.add_references(self._mgr) # type: ignore[arg-type] - yield k, s - - def itertuples( - self, index: bool = True, name: str | None = "Pandas" - ) -> Iterable[tuple[Any, ...]]: - """ - Iterate over DataFrame rows as namedtuples. - - Parameters - ---------- - index : bool, default True - If True, return the index as the first element of the tuple. - name : str or None, default "Pandas" - The name of the returned namedtuples or None to return regular - tuples. - - Returns - ------- - iterator - An object to iterate over namedtuples for each row in the - DataFrame with the first field possibly being the index and - following fields being the column values. - - See Also - -------- - DataFrame.iterrows : Iterate over DataFrame rows as (index, Series) - pairs. - DataFrame.items : Iterate over (column name, Series) pairs. - - Notes - ----- - The column names will be renamed to positional names if they are - invalid Python identifiers, repeated, or start with an underscore. - - Examples - -------- - >>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]}, - ... index=['dog', 'hawk']) - >>> df - num_legs num_wings - dog 4 0 - hawk 2 2 - >>> for row in df.itertuples(): - ... print(row) - ... - Pandas(Index='dog', num_legs=4, num_wings=0) - Pandas(Index='hawk', num_legs=2, num_wings=2) - - By setting the `index` parameter to False we can remove the index - as the first element of the tuple: - - >>> for row in df.itertuples(index=False): - ... print(row) - ... - Pandas(num_legs=4, num_wings=0) - Pandas(num_legs=2, num_wings=2) - - With the `name` parameter set we set a custom name for the yielded - namedtuples: - - >>> for row in df.itertuples(name='Animal'): - ... print(row) - ... 
- Animal(Index='dog', num_legs=4, num_wings=0) - Animal(Index='hawk', num_legs=2, num_wings=2) - """ - arrays = [] - fields = list(self.columns) - if index: - arrays.append(self.index) - fields.insert(0, "Index") - - # use integer indexing because of possible duplicate column names - arrays.extend(self.iloc[:, k] for k in range(len(self.columns))) - - if name is not None: - # https://github.com/python/mypy/issues/9046 - # error: namedtuple() expects a string literal as the first argument - itertuple = collections.namedtuple( # type: ignore[misc] - name, fields, rename=True - ) - return map(itertuple._make, zip(*arrays)) - - # fallback to regular tuples - return zip(*arrays) - - def __len__(self) -> int: - """ - Returns length of info axis, but here we use the index. - """ - return len(self.index) - - @overload - def dot(self, other: Series) -> Series: - ... - - @overload - def dot(self, other: DataFrame | Index | ArrayLike) -> DataFrame: - ... - - def dot(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series: - """ - Compute the matrix multiplication between the DataFrame and other. - - This method computes the matrix product between the DataFrame and the - values of an other Series, DataFrame or a numpy array. - - It can also be called using ``self @ other``. - - Parameters - ---------- - other : Series, DataFrame or array-like - The other object to compute the matrix product with. - - Returns - ------- - Series or DataFrame - If other is a Series, return the matrix product between self and - other as a Series. If other is a DataFrame or a numpy.array, return - the matrix product of self and other in a DataFrame of a np.array. - - See Also - -------- - Series.dot: Similar method for Series. - - Notes - ----- - The dimensions of DataFrame and other must be compatible in order to - compute the matrix multiplication. In addition, the column names of - DataFrame and the index of other must contain the same values, as they - will be aligned prior to the multiplication. - - The dot method for Series computes the inner product, instead of the - matrix product here. - - Examples - -------- - Here we multiply a DataFrame with a Series. - - >>> df = pd.DataFrame([[0, 1, -2, -1], [1, 1, 1, 1]]) - >>> s = pd.Series([1, 1, 2, 1]) - >>> df.dot(s) - 0 -4 - 1 5 - dtype: int64 - - Here we multiply a DataFrame with another DataFrame. - - >>> other = pd.DataFrame([[0, 1], [1, 2], [-1, -1], [2, 0]]) - >>> df.dot(other) - 0 1 - 0 1 4 - 1 2 2 - - Note that the dot method give the same result as @ - - >>> df @ other - 0 1 - 0 1 4 - 1 2 2 - - The dot method works also if other is an np.array. - - >>> arr = np.array([[0, 1], [1, 2], [-1, -1], [2, 0]]) - >>> df.dot(arr) - 0 1 - 0 1 4 - 1 2 2 - - Note how shuffling of the objects does not change the result. 
- - >>> s2 = s.reindex([1, 0, 2, 3]) - >>> df.dot(s2) - 0 -4 - 1 5 - dtype: int64 - """ - if isinstance(other, (Series, DataFrame)): - common = self.columns.union(other.index) - if len(common) > len(self.columns) or len(common) > len(other.index): - raise ValueError("matrices are not aligned") - - left = self.reindex(columns=common, copy=False) - right = other.reindex(index=common, copy=False) - lvals = left.values - rvals = right._values - else: - left = self - lvals = self.values - rvals = np.asarray(other) - if lvals.shape[1] != rvals.shape[0]: - raise ValueError( - f"Dot product shape mismatch, {lvals.shape} vs {rvals.shape}" - ) - - if isinstance(other, DataFrame): - common_type = find_common_type(list(self.dtypes) + list(other.dtypes)) - return self._constructor( - np.dot(lvals, rvals), - index=left.index, - columns=other.columns, - copy=False, - dtype=common_type, - ) - elif isinstance(other, Series): - common_type = find_common_type(list(self.dtypes) + [other.dtypes]) - return self._constructor_sliced( - np.dot(lvals, rvals), index=left.index, copy=False, dtype=common_type - ) - elif isinstance(rvals, (np.ndarray, Index)): - result = np.dot(lvals, rvals) - if result.ndim == 2: - return self._constructor(result, index=left.index, copy=False) - else: - return self._constructor_sliced(result, index=left.index, copy=False) - else: # pragma: no cover - raise TypeError(f"unsupported type: {type(other)}") - - @overload - def __matmul__(self, other: Series) -> Series: - ... - - @overload - def __matmul__(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series: - ... - - def __matmul__(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series: - """ - Matrix multiplication using binary `@` operator. - """ - return self.dot(other) - - def __rmatmul__(self, other) -> DataFrame: - """ - Matrix multiplication using binary `@` operator. - """ - try: - return self.T.dot(np.transpose(other)).T - except ValueError as err: - if "shape mismatch" not in str(err): - raise - # GH#21581 give exception message for original shapes - msg = f"shapes {np.shape(other)} and {self.shape} not aligned" - raise ValueError(msg) from err - - # ---------------------------------------------------------------------- - # IO methods (to / from other formats) - - @classmethod - def from_dict( - cls, - data: dict, - orient: FromDictOrient = "columns", - dtype: Dtype | None = None, - columns: Axes | None = None, - ) -> DataFrame: - """ - Construct DataFrame from dict of array-like or dicts. - - Creates DataFrame object from dictionary by columns or by index - allowing dtype specification. - - Parameters - ---------- - data : dict - Of the form {field : array-like} or {field : dict}. - orient : {'columns', 'index', 'tight'}, default 'columns' - The "orientation" of the data. If the keys of the passed dict - should be the columns of the resulting DataFrame, pass 'columns' - (default). Otherwise if the keys should be rows, pass 'index'. - If 'tight', assume a dict with keys ['index', 'columns', 'data', - 'index_names', 'column_names']. - - .. versionadded:: 1.4.0 - 'tight' as an allowed value for the ``orient`` argument - - dtype : dtype, default None - Data type to force after DataFrame construction, otherwise infer. - columns : list, default None - Column labels to use when ``orient='index'``. Raises a ValueError - if used with ``orient='columns'`` or ``orient='tight'``. 
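A minimal sketch of the ``ValueError`` described for ``columns`` above, reusing the error message from the implementation further down.

>>> import pandas as pd
>>> data = {'col_1': [1, 2], 'col_2': [3, 4]}
>>> pd.DataFrame.from_dict(data, orient='columns', columns=['A', 'B'])
Traceback (most recent call last):
...
ValueError: cannot use columns parameter with orient='columns'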
- - Returns - ------- - DataFrame - - See Also - -------- - DataFrame.from_records : DataFrame from structured ndarray, sequence - of tuples or dicts, or DataFrame. - DataFrame : DataFrame object creation using constructor. - DataFrame.to_dict : Convert the DataFrame to a dictionary. - - Examples - -------- - By default the keys of the dict become the DataFrame columns: - - >>> data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']} - >>> pd.DataFrame.from_dict(data) - col_1 col_2 - 0 3 a - 1 2 b - 2 1 c - 3 0 d - - Specify ``orient='index'`` to create the DataFrame using dictionary - keys as rows: - - >>> data = {'row_1': [3, 2, 1, 0], 'row_2': ['a', 'b', 'c', 'd']} - >>> pd.DataFrame.from_dict(data, orient='index') - 0 1 2 3 - row_1 3 2 1 0 - row_2 a b c d - - When using the 'index' orientation, the column names can be - specified manually: - - >>> pd.DataFrame.from_dict(data, orient='index', - ... columns=['A', 'B', 'C', 'D']) - A B C D - row_1 3 2 1 0 - row_2 a b c d - - Specify ``orient='tight'`` to create the DataFrame using a 'tight' - format: - - >>> data = {'index': [('a', 'b'), ('a', 'c')], - ... 'columns': [('x', 1), ('y', 2)], - ... 'data': [[1, 3], [2, 4]], - ... 'index_names': ['n1', 'n2'], - ... 'column_names': ['z1', 'z2']} - >>> pd.DataFrame.from_dict(data, orient='tight') - z1 x y - z2 1 2 - n1 n2 - a b 1 3 - c 2 4 - """ - index = None - orient = orient.lower() # type: ignore[assignment] - if orient == "index": - if len(data) > 0: - # TODO speed up Series case - if isinstance(next(iter(data.values())), (Series, dict)): - data = _from_nested_dict(data) - else: - index = list(data.keys()) - # error: Incompatible types in assignment (expression has type - # "List[Any]", variable has type "Dict[Any, Any]") - data = list(data.values()) # type: ignore[assignment] - elif orient in ("columns", "tight"): - if columns is not None: - raise ValueError(f"cannot use columns parameter with orient='{orient}'") - else: # pragma: no cover - raise ValueError( - f"Expected 'index', 'columns' or 'tight' for orient parameter. " - f"Got '{orient}' instead" - ) - - if orient != "tight": - return cls(data, index=index, columns=columns, dtype=dtype) - else: - realdata = data["data"] - - def create_index(indexlist, namelist): - index: Index - if len(namelist) > 1: - index = MultiIndex.from_tuples(indexlist, names=namelist) - else: - index = Index(indexlist, name=namelist[0]) - return index - - index = create_index(data["index"], data["index_names"]) - columns = create_index(data["columns"], data["column_names"]) - return cls(realdata, index=index, columns=columns, dtype=dtype) - - def to_numpy( - self, - dtype: npt.DTypeLike | None = None, - copy: bool = False, - na_value: object = lib.no_default, - ) -> np.ndarray: - """ - Convert the DataFrame to a NumPy array. - - By default, the dtype of the returned array will be the common NumPy - dtype of all types in the DataFrame. For example, if the dtypes are - ``float16`` and ``float32``, the results dtype will be ``float32``. - This may require copying data and coercing values, which may be - expensive. - - Parameters - ---------- - dtype : str or numpy.dtype, optional - The dtype to pass to :meth:`numpy.asarray`. - copy : bool, default False - Whether to ensure that the returned value is not a view on - another array. Note that ``copy=False`` does not *ensure* that - ``to_numpy()`` is no-copy. Rather, ``copy=True`` ensure that - a copy is made, even if not strictly necessary. - na_value : Any, optional - The value to use for missing values. 
The default value depends - on `dtype` and the dtypes of the DataFrame columns. - - Returns - ------- - numpy.ndarray - - See Also - -------- - Series.to_numpy : Similar method for Series. - - Examples - -------- - >>> pd.DataFrame({"A": [1, 2], "B": [3, 4]}).to_numpy() - array([[1, 3], - [2, 4]]) - - With heterogeneous data, the lowest common type will have to - be used. - - >>> df = pd.DataFrame({"A": [1, 2], "B": [3.0, 4.5]}) - >>> df.to_numpy() - array([[1. , 3. ], - [2. , 4.5]]) - - For a mix of numeric and non-numeric types, the output array will - have object dtype. - - >>> df['C'] = pd.date_range('2000', periods=2) - >>> df.to_numpy() - array([[1, 3.0, Timestamp('2000-01-01 00:00:00')], - [2, 4.5, Timestamp('2000-01-02 00:00:00')]], dtype=object) - """ - if dtype is not None: - dtype = np.dtype(dtype) - result = self._mgr.as_array(dtype=dtype, copy=copy, na_value=na_value) - if result.dtype is not dtype: - result = np.array(result, dtype=dtype, copy=False) - - return result - - def _create_data_for_split_and_tight_to_dict( - self, are_all_object_dtype_cols: bool, object_dtype_indices: list[int] - ) -> list: - """ - Simple helper method to create data for to ``to_dict(orient="split")`` and - ``to_dict(orient="tight")`` to create the main output data - """ - if are_all_object_dtype_cols: - data = [ - list(map(maybe_box_native, t)) - for t in self.itertuples(index=False, name=None) - ] - else: - data = [list(t) for t in self.itertuples(index=False, name=None)] - if object_dtype_indices: - # If we have object_dtype_cols, apply maybe_box_naive after list - # comprehension for perf - for row in data: - for i in object_dtype_indices: - row[i] = maybe_box_native(row[i]) - return data - - @overload - def to_dict( - self, - orient: Literal["dict", "list", "series", "split", "tight", "index"] = ..., - into: type[dict] = ..., - ) -> dict: - ... - - @overload - def to_dict(self, orient: Literal["records"], into: type[dict] = ...) -> list[dict]: - ... - - def to_dict( - self, - orient: Literal[ - "dict", "list", "series", "split", "tight", "records", "index" - ] = "dict", - into: type[dict] = dict, - index: bool = True, - ) -> dict | list[dict]: - """ - Convert the DataFrame to a dictionary. - - The type of the key-value pairs can be customized with the parameters - (see below). - - Parameters - ---------- - orient : str {'dict', 'list', 'series', 'split', 'tight', 'records', 'index'} - Determines the type of the values of the dictionary. - - - 'dict' (default) : dict like {column -> {index -> value}} - - 'list' : dict like {column -> [values]} - - 'series' : dict like {column -> Series(values)} - - 'split' : dict like - {'index' -> [index], 'columns' -> [columns], 'data' -> [values]} - - 'tight' : dict like - {'index' -> [index], 'columns' -> [columns], 'data' -> [values], - 'index_names' -> [index.names], 'column_names' -> [column.names]} - - 'records' : list like - [{column -> value}, ... , {column -> value}] - - 'index' : dict like {index -> {column -> value}} - - .. versionadded:: 1.4.0 - 'tight' as an allowed value for the ``orient`` argument - - into : class, default dict - The collections.abc.Mapping subclass used for all Mappings - in the return value. Can be the actual class or an empty - instance of the mapping type you want. If you want a - collections.defaultdict, you must pass it initialized. - - index : bool, default True - Whether to include the index item (and index_names item if `orient` - is 'tight') in the returned dictionary. 
Can only be ``False`` - when `orient` is 'split' or 'tight'. - - .. versionadded:: 2.0.0 - - Returns - ------- - dict, list or collections.abc.Mapping - Return a collections.abc.Mapping object representing the DataFrame. - The resulting transformation depends on the `orient` parameter. - - See Also - -------- - DataFrame.from_dict: Create a DataFrame from a dictionary. - DataFrame.to_json: Convert a DataFrame to JSON format. - - Examples - -------- - >>> df = pd.DataFrame({'col1': [1, 2], - ... 'col2': [0.5, 0.75]}, - ... index=['row1', 'row2']) - >>> df - col1 col2 - row1 1 0.50 - row2 2 0.75 - >>> df.to_dict() - {'col1': {'row1': 1, 'row2': 2}, 'col2': {'row1': 0.5, 'row2': 0.75}} - - You can specify the return orientation. - - >>> df.to_dict('series') - {'col1': row1 1 - row2 2 - Name: col1, dtype: int64, - 'col2': row1 0.50 - row2 0.75 - Name: col2, dtype: float64} - - >>> df.to_dict('split') - {'index': ['row1', 'row2'], 'columns': ['col1', 'col2'], - 'data': [[1, 0.5], [2, 0.75]]} - - >>> df.to_dict('records') - [{'col1': 1, 'col2': 0.5}, {'col1': 2, 'col2': 0.75}] - - >>> df.to_dict('index') - {'row1': {'col1': 1, 'col2': 0.5}, 'row2': {'col1': 2, 'col2': 0.75}} - - >>> df.to_dict('tight') - {'index': ['row1', 'row2'], 'columns': ['col1', 'col2'], - 'data': [[1, 0.5], [2, 0.75]], 'index_names': [None], 'column_names': [None]} - - You can also specify the mapping type. - - >>> from collections import OrderedDict, defaultdict - >>> df.to_dict(into=OrderedDict) - OrderedDict([('col1', OrderedDict([('row1', 1), ('row2', 2)])), - ('col2', OrderedDict([('row1', 0.5), ('row2', 0.75)]))]) - - If you want a `defaultdict`, you need to initialize it: - - >>> dd = defaultdict(list) - >>> df.to_dict('records', into=dd) - [defaultdict(, {'col1': 1, 'col2': 0.5}), - defaultdict(, {'col1': 2, 'col2': 0.75})] - """ - from pandas.core.methods.to_dict import to_dict - - return to_dict(self, orient, into, index) - - def to_gbq( - self, - destination_table: str, - project_id: str | None = None, - chunksize: int | None = None, - reauth: bool = False, - if_exists: ToGbqIfexist = "fail", - auth_local_webserver: bool = True, - table_schema: list[dict[str, str]] | None = None, - location: str | None = None, - progress_bar: bool = True, - credentials=None, - ) -> None: - """ - Write a DataFrame to a Google BigQuery table. - - This function requires the `pandas-gbq package - `__. - - See the `How to authenticate with Google BigQuery - `__ - guide for authentication instructions. - - Parameters - ---------- - destination_table : str - Name of table to be written, in the form ``dataset.tablename``. - project_id : str, optional - Google BigQuery Account project ID. Optional when available from - the environment. - chunksize : int, optional - Number of rows to be inserted in each chunk from the dataframe. - Set to ``None`` to load the whole dataframe at once. - reauth : bool, default False - Force Google BigQuery to re-authenticate the user. This is useful - if multiple accounts are used. - if_exists : str, default 'fail' - Behavior when the destination table exists. Value can be one of: - - ``'fail'`` - If table exists raise pandas_gbq.gbq.TableCreationError. - ``'replace'`` - If table exists, drop it, recreate it, and insert data. - ``'append'`` - If table exists, insert data. Create if does not exist. - auth_local_webserver : bool, default True - Use the `local webserver flow`_ instead of the `console flow`_ - when getting user credentials. - - .. 
_local webserver flow: - https://google-auth-oauthlib.readthedocs.io/en/latest/reference/google_auth_oauthlib.flow.html#google_auth_oauthlib.flow.InstalledAppFlow.run_local_server - .. _console flow: - https://google-auth-oauthlib.readthedocs.io/en/latest/reference/google_auth_oauthlib.flow.html#google_auth_oauthlib.flow.InstalledAppFlow.run_console - - *New in version 0.2.0 of pandas-gbq*. - - .. versionchanged:: 1.5.0 - Default value is changed to ``True``. Google has deprecated the - ``auth_local_webserver = False`` `"out of band" (copy-paste) - flow - `_. - table_schema : list of dicts, optional - List of BigQuery table fields to which according DataFrame - columns conform to, e.g. ``[{'name': 'col1', 'type': - 'STRING'},...]``. If schema is not provided, it will be - generated according to dtypes of DataFrame columns. See - BigQuery API documentation on available names of a field. - - *New in version 0.3.1 of pandas-gbq*. - location : str, optional - Location where the load job should run. See the `BigQuery locations - documentation - `__ for a - list of available locations. The location must match that of the - target dataset. - - *New in version 0.5.0 of pandas-gbq*. - progress_bar : bool, default True - Use the library `tqdm` to show the progress bar for the upload, - chunk by chunk. - - *New in version 0.5.0 of pandas-gbq*. - credentials : google.auth.credentials.Credentials, optional - Credentials for accessing Google APIs. Use this parameter to - override default credentials, such as to use Compute Engine - :class:`google.auth.compute_engine.Credentials` or Service - Account :class:`google.oauth2.service_account.Credentials` - directly. - - *New in version 0.8.0 of pandas-gbq*. - - See Also - -------- - pandas_gbq.to_gbq : This function in the pandas-gbq library. - read_gbq : Read a DataFrame from Google BigQuery. - - Examples - -------- - Example taken from `Google BigQuery documentation - `_ - - >>> project_id = "my-project" - >>> table_id = 'my_dataset.my_table' - >>> df = pd.DataFrame({ - ... "my_string": ["a", "b", "c"], - ... "my_int64": [1, 2, 3], - ... "my_float64": [4.0, 5.0, 6.0], - ... "my_bool1": [True, False, True], - ... "my_bool2": [False, True, False], - ... "my_dates": pd.date_range("now", periods=3), - ... } - ... ) - - >>> df.to_gbq(table_id, project_id=project_id) # doctest: +SKIP - """ - from pandas.io import gbq - - gbq.to_gbq( - self, - destination_table, - project_id=project_id, - chunksize=chunksize, - reauth=reauth, - if_exists=if_exists, - auth_local_webserver=auth_local_webserver, - table_schema=table_schema, - location=location, - progress_bar=progress_bar, - credentials=credentials, - ) - - @classmethod - def from_records( - cls, - data, - index=None, - exclude=None, - columns=None, - coerce_float: bool = False, - nrows: int | None = None, - ) -> DataFrame: - """ - Convert structured or record ndarray to DataFrame. - - Creates a DataFrame object from a structured ndarray, sequence of - tuples or dicts, or DataFrame. - - Parameters - ---------- - data : structured ndarray, sequence of tuples or dicts, or DataFrame - Structured input data. - - .. deprecated:: 2.1.0 - Passing a DataFrame is deprecated. - index : str, list of fields, array-like - Field of array to use as the index, alternately a specific set of - input labels to use. - exclude : sequence, default None - Columns or fields to exclude. - columns : sequence, default None - Column names to use. 
If the passed data do not have names - associated with them, this argument provides names for the - columns. Otherwise this argument indicates the order of the columns - in the result (any names not found in the data will become all-NA - columns). - coerce_float : bool, default False - Attempt to convert values of non-string, non-numeric objects (like - decimal.Decimal) to floating point, useful for SQL result sets. - nrows : int, default None - Number of rows to read if data is an iterator. - - Returns - ------- - DataFrame - - See Also - -------- - DataFrame.from_dict : DataFrame from dict of array-like or dicts. - DataFrame : DataFrame object creation using constructor. - - Examples - -------- - Data can be provided as a structured ndarray: - - >>> data = np.array([(3, 'a'), (2, 'b'), (1, 'c'), (0, 'd')], - ... dtype=[('col_1', 'i4'), ('col_2', 'U1')]) - >>> pd.DataFrame.from_records(data) - col_1 col_2 - 0 3 a - 1 2 b - 2 1 c - 3 0 d - - Data can be provided as a list of dicts: - - >>> data = [{'col_1': 3, 'col_2': 'a'}, - ... {'col_1': 2, 'col_2': 'b'}, - ... {'col_1': 1, 'col_2': 'c'}, - ... {'col_1': 0, 'col_2': 'd'}] - >>> pd.DataFrame.from_records(data) - col_1 col_2 - 0 3 a - 1 2 b - 2 1 c - 3 0 d - - Data can be provided as a list of tuples with corresponding columns: - - >>> data = [(3, 'a'), (2, 'b'), (1, 'c'), (0, 'd')] - >>> pd.DataFrame.from_records(data, columns=['col_1', 'col_2']) - col_1 col_2 - 0 3 a - 1 2 b - 2 1 c - 3 0 d - """ - if isinstance(data, DataFrame): - warnings.warn( - "Passing a DataFrame to DataFrame.from_records is deprecated. Use " - "set_index and/or drop to modify the DataFrame instead.", - FutureWarning, - stacklevel=find_stack_level(), - ) - if columns is not None: - if is_scalar(columns): - columns = [columns] - data = data[columns] - if index is not None: - data = data.set_index(index) - if exclude is not None: - data = data.drop(columns=exclude) - return data.copy(deep=False) - - result_index = None - - # Make a copy of the input columns so we can modify it - if columns is not None: - columns = ensure_index(columns) - - def maybe_reorder( - arrays: list[ArrayLike], arr_columns: Index, columns: Index, index - ) -> tuple[list[ArrayLike], Index, Index | None]: - """ - If our desired 'columns' do not match the data's pre-existing 'arr_columns', - we re-order our arrays. This is like a pre-emptive (cheap) reindex. 
- """ - if len(arrays): - length = len(arrays[0]) - else: - length = 0 - - result_index = None - if len(arrays) == 0 and index is None and length == 0: - result_index = default_index(0) - - arrays, arr_columns = reorder_arrays(arrays, arr_columns, columns, length) - return arrays, arr_columns, result_index - - if is_iterator(data): - if nrows == 0: - return cls() - - try: - first_row = next(data) - except StopIteration: - return cls(index=index, columns=columns) - - dtype = None - if hasattr(first_row, "dtype") and first_row.dtype.names: - dtype = first_row.dtype - - values = [first_row] - - if nrows is None: - values += data - else: - values.extend(itertools.islice(data, nrows - 1)) - - if dtype is not None: - data = np.array(values, dtype=dtype) - else: - data = values - - if isinstance(data, dict): - if columns is None: - columns = arr_columns = ensure_index(sorted(data)) - arrays = [data[k] for k in columns] - else: - arrays = [] - arr_columns_list = [] - for k, v in data.items(): - if k in columns: - arr_columns_list.append(k) - arrays.append(v) - - arr_columns = Index(arr_columns_list) - arrays, arr_columns, result_index = maybe_reorder( - arrays, arr_columns, columns, index - ) - - elif isinstance(data, np.ndarray): - arrays, columns = to_arrays(data, columns) - arr_columns = columns - else: - arrays, arr_columns = to_arrays(data, columns) - if coerce_float: - for i, arr in enumerate(arrays): - if arr.dtype == object: - # error: Argument 1 to "maybe_convert_objects" has - # incompatible type "Union[ExtensionArray, ndarray]"; - # expected "ndarray" - arrays[i] = lib.maybe_convert_objects( - arr, # type: ignore[arg-type] - try_float=True, - ) - - arr_columns = ensure_index(arr_columns) - if columns is None: - columns = arr_columns - else: - arrays, arr_columns, result_index = maybe_reorder( - arrays, arr_columns, columns, index - ) - - if exclude is None: - exclude = set() - else: - exclude = set(exclude) - - if index is not None: - if isinstance(index, str) or not hasattr(index, "__iter__"): - i = columns.get_loc(index) - exclude.add(index) - if len(arrays) > 0: - result_index = Index(arrays[i], name=index) - else: - result_index = Index([], name=index) - else: - try: - index_data = [arrays[arr_columns.get_loc(field)] for field in index] - except (KeyError, TypeError): - # raised by get_loc, see GH#29258 - result_index = index - else: - result_index = ensure_index_from_sequences(index_data, names=index) - exclude.update(index) - - if any(exclude): - arr_exclude = [x for x in exclude if x in arr_columns] - to_remove = [arr_columns.get_loc(col) for col in arr_exclude] - arrays = [v for i, v in enumerate(arrays) if i not in to_remove] - - columns = columns.drop(exclude) - - manager = get_option("mode.data_manager") - mgr = arrays_to_mgr(arrays, columns, result_index, typ=manager) - - return cls(mgr) - - def to_records( - self, index: bool = True, column_dtypes=None, index_dtypes=None - ) -> np.rec.recarray: - """ - Convert DataFrame to a NumPy record array. - - Index will be included as the first field of the record array if - requested. - - Parameters - ---------- - index : bool, default True - Include index in resulting record array, stored in 'index' - field or using the index label, if set. - column_dtypes : str, type, dict, default None - If a string or type, the data type to store all columns. If - a dictionary, a mapping of column names and indices (zero-indexed) - to specific data types. 
- index_dtypes : str, type, dict, default None - If a string or type, the data type to store all index levels. If - a dictionary, a mapping of index level names and indices - (zero-indexed) to specific data types. - - This mapping is applied only if `index=True`. - - Returns - ------- - numpy.rec.recarray - NumPy ndarray with the DataFrame labels as fields and each row - of the DataFrame as entries. - - See Also - -------- - DataFrame.from_records: Convert structured or record ndarray - to DataFrame. - numpy.rec.recarray: An ndarray that allows field access using - attributes, analogous to typed columns in a - spreadsheet. - - Examples - -------- - >>> df = pd.DataFrame({'A': [1, 2], 'B': [0.5, 0.75]}, - ... index=['a', 'b']) - >>> df - A B - a 1 0.50 - b 2 0.75 - >>> df.to_records() - rec.array([('a', 1, 0.5 ), ('b', 2, 0.75)], - dtype=[('index', 'O'), ('A', '>> df.index = df.index.rename("I") - >>> df.to_records() - rec.array([('a', 1, 0.5 ), ('b', 2, 0.75)], - dtype=[('I', 'O'), ('A', '>> df.to_records(index=False) - rec.array([(1, 0.5 ), (2, 0.75)], - dtype=[('A', '>> df.to_records(column_dtypes={"A": "int32"}) - rec.array([('a', 1, 0.5 ), ('b', 2, 0.75)], - dtype=[('I', 'O'), ('A', '>> df.to_records(index_dtypes=">> index_dtypes = f">> df.to_records(index_dtypes=index_dtypes) - rec.array([(b'a', 1, 0.5 ), (b'b', 2, 0.75)], - dtype=[('I', 'S1'), ('A', ' Self: - """ - Create DataFrame from a list of arrays corresponding to the columns. - - Parameters - ---------- - arrays : list-like of arrays - Each array in the list corresponds to one column, in order. - columns : list-like, Index - The column names for the resulting DataFrame. - index : list-like, Index - The rows labels for the resulting DataFrame. - dtype : dtype, optional - Optional dtype to enforce for all arrays. - verify_integrity : bool, default True - Validate and homogenize all input. If set to False, it is assumed - that all elements of `arrays` are actual arrays how they will be - stored in a block (numpy ndarray or ExtensionArray), have the same - length as and are aligned with the index, and that `columns` and - `index` are ensured to be an Index object. - - Returns - ------- - DataFrame - """ - if dtype is not None: - dtype = pandas_dtype(dtype) - - manager = get_option("mode.data_manager") - columns = ensure_index(columns) - if len(columns) != len(arrays): - raise ValueError("len(columns) must match len(arrays)") - mgr = arrays_to_mgr( - arrays, - columns, - index, - dtype=dtype, - verify_integrity=verify_integrity, - typ=manager, - ) - return cls(mgr) - - @doc( - storage_options=_shared_docs["storage_options"], - compression_options=_shared_docs["compression_options"] % "path", - ) - def to_stata( - self, - path: FilePath | WriteBuffer[bytes], - *, - convert_dates: dict[Hashable, str] | None = None, - write_index: bool = True, - byteorder: ToStataByteorder | None = None, - time_stamp: datetime.datetime | None = None, - data_label: str | None = None, - variable_labels: dict[Hashable, str] | None = None, - version: int | None = 114, - convert_strl: Sequence[Hashable] | None = None, - compression: CompressionOptions = "infer", - storage_options: StorageOptions | None = None, - value_labels: dict[Hashable, dict[float, str]] | None = None, - ) -> None: - """ - Export DataFrame object to Stata dta format. - - Writes the DataFrame to a Stata dataset file. - "dta" files contain a Stata dataset. 
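A minimal sketch of the basic call; the file name and frame contents are illustrative and, like the example further down, it is not meant to run as a doctest.

>>> import pandas as pd
>>> df = pd.DataFrame([["falcon", 350], ["parrot", 18]],
...                   columns=["animal", "speed"])
>>> df.to_stata("animals.dta", version=118)  # doctest: +SKIP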
- - Parameters - ---------- - path : str, path object, or buffer - String, path object (implementing ``os.PathLike[str]``), or file-like - object implementing a binary ``write()`` function. - - convert_dates : dict - Dictionary mapping columns containing datetime types to stata - internal format to use when writing the dates. Options are 'tc', - 'td', 'tm', 'tw', 'th', 'tq', 'ty'. Column can be either an integer - or a name. Datetime columns that do not have a conversion type - specified will be converted to 'tc'. Raises NotImplementedError if - a datetime column has timezone information. - write_index : bool - Write the index to Stata dataset. - byteorder : str - Can be ">", "<", "little", or "big". default is `sys.byteorder`. - time_stamp : datetime - A datetime to use as file creation date. Default is the current - time. - data_label : str, optional - A label for the data set. Must be 80 characters or smaller. - variable_labels : dict - Dictionary containing columns as keys and variable labels as - values. Each label must be 80 characters or smaller. - version : {{114, 117, 118, 119, None}}, default 114 - Version to use in the output dta file. Set to None to let pandas - decide between 118 or 119 formats depending on the number of - columns in the frame. Version 114 can be read by Stata 10 and - later. Version 117 can be read by Stata 13 or later. Version 118 - is supported in Stata 14 and later. Version 119 is supported in - Stata 15 and later. Version 114 limits string variables to 244 - characters or fewer while versions 117 and later allow strings - with lengths up to 2,000,000 characters. Versions 118 and 119 - support Unicode characters, and version 119 supports more than - 32,767 variables. - - Version 119 should usually only be used when the number of - variables exceeds the capacity of dta format 118. Exporting - smaller datasets in format 119 may have unintended consequences, - and, as of November 2020, Stata SE cannot read version 119 files. - - convert_strl : list, optional - List of column names to convert to string columns to Stata StrL - format. Only available if version is 117. Storing strings in the - StrL format can produce smaller dta files if strings have more than - 8 characters and values are repeated. - {compression_options} - - .. versionchanged:: 1.4.0 Zstandard support. - - {storage_options} - - .. versionadded:: 1.2.0 - - value_labels : dict of dicts - Dictionary containing columns as keys and dictionaries of column value - to labels as values. Labels for a single variable must be 32,000 - characters or smaller. - - .. versionadded:: 1.4.0 - - Raises - ------ - NotImplementedError - * If datetimes contain timezone information - * Column dtype is not representable in Stata - ValueError - * Columns listed in convert_dates are neither datetime64[ns] - or datetime.datetime - * Column listed in convert_dates is not in DataFrame - * Categorical label contains more than 32,000 characters - - See Also - -------- - read_stata : Import Stata data files. - io.stata.StataWriter : Low-level writer for Stata data files. - io.stata.StataWriter117 : Low-level writer for version 117 files. - - Examples - -------- - >>> df = pd.DataFrame({{'animal': ['falcon', 'parrot', 'falcon', - ... 'parrot'], - ... 
'speed': [350, 18, 361, 15]}}) - >>> df.to_stata('animals.dta') # doctest: +SKIP - """ - if version not in (114, 117, 118, 119, None): - raise ValueError("Only formats 114, 117, 118 and 119 are supported.") - if version == 114: - if convert_strl is not None: - raise ValueError("strl is not supported in format 114") - from pandas.io.stata import StataWriter as statawriter - elif version == 117: - # Incompatible import of "statawriter" (imported name has type - # "Type[StataWriter117]", local name has type "Type[StataWriter]") - from pandas.io.stata import ( # type: ignore[assignment] - StataWriter117 as statawriter, - ) - else: # versions 118 and 119 - # Incompatible import of "statawriter" (imported name has type - # "Type[StataWriter117]", local name has type "Type[StataWriter]") - from pandas.io.stata import ( # type: ignore[assignment] - StataWriterUTF8 as statawriter, - ) - - kwargs: dict[str, Any] = {} - if version is None or version >= 117: - # strl conversion is only supported >= 117 - kwargs["convert_strl"] = convert_strl - if version is None or version >= 118: - # Specifying the version is only supported for UTF8 (118 or 119) - kwargs["version"] = version - - writer = statawriter( - path, - self, - convert_dates=convert_dates, - byteorder=byteorder, - time_stamp=time_stamp, - data_label=data_label, - write_index=write_index, - variable_labels=variable_labels, - compression=compression, - storage_options=storage_options, - value_labels=value_labels, - **kwargs, - ) - writer.write_file() - - def to_feather(self, path: FilePath | WriteBuffer[bytes], **kwargs) -> None: - """ - Write a DataFrame to the binary Feather format. - - Parameters - ---------- - path : str, path object, file-like object - String, path object (implementing ``os.PathLike[str]``), or file-like - object implementing a binary ``write()`` function. If a string or a path, - it will be used as Root Directory path when writing a partitioned dataset. - **kwargs : - Additional keywords passed to :func:`pyarrow.feather.write_feather`. - This includes the `compression`, `compression_level`, `chunksize` - and `version` keywords. - - Notes - ----- - This function writes the dataframe as a `feather file - `_. Requires a default - index. For saving the DataFrame with your custom index use a method that - supports custom indices e.g. `to_parquet`. - - Examples - -------- - >>> df = pd.DataFrame([[1, 2, 3], [4, 5, 6]]) - >>> df.to_feather("file.feather") # doctest: +SKIP - """ - from pandas.io.feather_format import to_feather - - to_feather(self, path, **kwargs) - - @doc( - Series.to_markdown, - klass=_shared_doc_kwargs["klass"], - storage_options=_shared_docs["storage_options"], - examples="""Examples - -------- - >>> df = pd.DataFrame( - ... data={"animal_1": ["elk", "pig"], "animal_2": ["dog", "quetzal"]} - ... ) - >>> print(df.to_markdown()) - | | animal_1 | animal_2 | - |---:|:-----------|:-----------| - | 0 | elk | dog | - | 1 | pig | quetzal | - - Output markdown with a tabulate option. 
- - >>> print(df.to_markdown(tablefmt="grid")) - +----+------------+------------+ - | | animal_1 | animal_2 | - +====+============+============+ - | 0 | elk | dog | - +----+------------+------------+ - | 1 | pig | quetzal | - +----+------------+------------+""", - ) - def to_markdown( - self, - buf: FilePath | WriteBuffer[str] | None = None, - mode: str = "wt", - index: bool = True, - storage_options: StorageOptions | None = None, - **kwargs, - ) -> str | None: - if "showindex" in kwargs: - raise ValueError("Pass 'index' instead of 'showindex") - - kwargs.setdefault("headers", "keys") - kwargs.setdefault("tablefmt", "pipe") - kwargs.setdefault("showindex", index) - tabulate = import_optional_dependency("tabulate") - result = tabulate.tabulate(self, **kwargs) - if buf is None: - return result - - with get_handle(buf, mode, storage_options=storage_options) as handles: - handles.handle.write(result) - return None - - @overload - def to_parquet( - self, - path: None = ..., - engine: Literal["auto", "pyarrow", "fastparquet"] = ..., - compression: str | None = ..., - index: bool | None = ..., - partition_cols: list[str] | None = ..., - storage_options: StorageOptions = ..., - **kwargs, - ) -> bytes: - ... - - @overload - def to_parquet( - self, - path: FilePath | WriteBuffer[bytes], - engine: Literal["auto", "pyarrow", "fastparquet"] = ..., - compression: str | None = ..., - index: bool | None = ..., - partition_cols: list[str] | None = ..., - storage_options: StorageOptions = ..., - **kwargs, - ) -> None: - ... - - @doc(storage_options=_shared_docs["storage_options"]) - def to_parquet( - self, - path: FilePath | WriteBuffer[bytes] | None = None, - engine: Literal["auto", "pyarrow", "fastparquet"] = "auto", - compression: str | None = "snappy", - index: bool | None = None, - partition_cols: list[str] | None = None, - storage_options: StorageOptions | None = None, - **kwargs, - ) -> bytes | None: - """ - Write a DataFrame to the binary parquet format. - - This function writes the dataframe as a `parquet file - `_. You can choose different parquet - backends, and have the option of compression. See - :ref:`the user guide ` for more details. - - Parameters - ---------- - path : str, path object, file-like object, or None, default None - String, path object (implementing ``os.PathLike[str]``), or file-like - object implementing a binary ``write()`` function. If None, the result is - returned as bytes. If a string or path, it will be used as Root Directory - path when writing a partitioned dataset. - - .. versionchanged:: 1.2.0 - - Previously this was "fname" - - engine : {{'auto', 'pyarrow', 'fastparquet'}}, default 'auto' - Parquet library to use. If 'auto', then the option - ``io.parquet.engine`` is used. The default ``io.parquet.engine`` - behavior is to try 'pyarrow', falling back to 'fastparquet' if - 'pyarrow' is unavailable. - compression : str or None, default 'snappy' - Name of the compression to use. Use ``None`` for no compression. - Supported options: 'snappy', 'gzip', 'brotli', 'lz4', 'zstd'. - index : bool, default None - If ``True``, include the dataframe's index(es) in the file output. - If ``False``, they will not be written to the file. - If ``None``, similar to ``True`` the dataframe's index(es) - will be saved. However, instead of being saved as values, - the RangeIndex will be stored as a range in the metadata so it - doesn't require much space and is faster. Other indexes will - be included as columns in the file output. 
- partition_cols : list, optional, default None - Column names by which to partition the dataset. - Columns are partitioned in the order they are given. - Must be None if path is not a string. - {storage_options} - - .. versionadded:: 1.2.0 - - **kwargs - Additional arguments passed to the parquet library. See - :ref:`pandas io ` for more details. - - Returns - ------- - bytes if no path argument is provided else None - - See Also - -------- - read_parquet : Read a parquet file. - DataFrame.to_orc : Write an orc file. - DataFrame.to_csv : Write a csv file. - DataFrame.to_sql : Write to a sql table. - DataFrame.to_hdf : Write to hdf. - - Notes - ----- - This function requires either the `fastparquet - `_ or `pyarrow - `_ library. - - Examples - -------- - >>> df = pd.DataFrame(data={{'col1': [1, 2], 'col2': [3, 4]}}) - >>> df.to_parquet('df.parquet.gzip', - ... compression='gzip') # doctest: +SKIP - >>> pd.read_parquet('df.parquet.gzip') # doctest: +SKIP - col1 col2 - 0 1 3 - 1 2 4 - - If you want to get a buffer to the parquet content you can use a io.BytesIO - object, as long as you don't use partition_cols, which creates multiple files. - - >>> import io - >>> f = io.BytesIO() - >>> df.to_parquet(f) - >>> f.seek(0) - 0 - >>> content = f.read() - """ - from pandas.io.parquet import to_parquet - - return to_parquet( - self, - path, - engine, - compression=compression, - index=index, - partition_cols=partition_cols, - storage_options=storage_options, - **kwargs, - ) - - def to_orc( - self, - path: FilePath | WriteBuffer[bytes] | None = None, - *, - engine: Literal["pyarrow"] = "pyarrow", - index: bool | None = None, - engine_kwargs: dict[str, Any] | None = None, - ) -> bytes | None: - """ - Write a DataFrame to the ORC format. - - .. versionadded:: 1.5.0 - - Parameters - ---------- - path : str, file-like object or None, default None - If a string, it will be used as Root Directory path - when writing a partitioned dataset. By file-like object, - we refer to objects with a write() method, such as a file handle - (e.g. via builtin open function). If path is None, - a bytes object is returned. - engine : {'pyarrow'}, default 'pyarrow' - ORC library to use. Pyarrow must be >= 7.0.0. - index : bool, optional - If ``True``, include the dataframe's index(es) in the file output. - If ``False``, they will not be written to the file. - If ``None``, similar to ``infer`` the dataframe's index(es) - will be saved. However, instead of being saved as values, - the RangeIndex will be stored as a range in the metadata so it - doesn't require much space and is faster. Other indexes will - be included as columns in the file output. - engine_kwargs : dict[str, Any] or None, default None - Additional keyword arguments passed to :func:`pyarrow.orc.write_table`. - - Returns - ------- - bytes if no path argument is provided else None - - Raises - ------ - NotImplementedError - Dtype of one or more columns is category, unsigned integers, interval, - period or sparse. - ValueError - engine is not pyarrow. - - See Also - -------- - read_orc : Read a ORC file. - DataFrame.to_parquet : Write a parquet file. - DataFrame.to_csv : Write a csv file. - DataFrame.to_sql : Write to a sql table. - DataFrame.to_hdf : Write to hdf. - - Notes - ----- - * Before using this function you should read the :ref:`user guide about - ORC ` and :ref:`install optional dependencies `. - * This function requires `pyarrow `_ - library. - * For supported dtypes please refer to `supported ORC features in Arrow - `__. 
- * Currently timezones in datetime columns are not preserved when a - dataframe is converted into ORC files. - - Examples - -------- - >>> df = pd.DataFrame(data={'col1': [1, 2], 'col2': [4, 3]}) - >>> df.to_orc('df.orc') # doctest: +SKIP - >>> pd.read_orc('df.orc') # doctest: +SKIP - col1 col2 - 0 1 4 - 1 2 3 - - If you want to get a buffer to the orc content you can write it to io.BytesIO - - >>> import io - >>> b = io.BytesIO(df.to_orc()) # doctest: +SKIP - >>> b.seek(0) # doctest: +SKIP - 0 - >>> content = b.read() # doctest: +SKIP - """ - from pandas.io.orc import to_orc - - return to_orc( - self, path, engine=engine, index=index, engine_kwargs=engine_kwargs - ) - - @overload - def to_html( - self, - buf: FilePath | WriteBuffer[str], - columns: Axes | None = ..., - col_space: ColspaceArgType | None = ..., - header: bool = ..., - index: bool = ..., - na_rep: str = ..., - formatters: FormattersType | None = ..., - float_format: FloatFormatType | None = ..., - sparsify: bool | None = ..., - index_names: bool = ..., - justify: str | None = ..., - max_rows: int | None = ..., - max_cols: int | None = ..., - show_dimensions: bool | str = ..., - decimal: str = ..., - bold_rows: bool = ..., - classes: str | list | tuple | None = ..., - escape: bool = ..., - notebook: bool = ..., - border: int | bool | None = ..., - table_id: str | None = ..., - render_links: bool = ..., - encoding: str | None = ..., - ) -> None: - ... - - @overload - def to_html( - self, - buf: None = ..., - columns: Axes | None = ..., - col_space: ColspaceArgType | None = ..., - header: bool = ..., - index: bool = ..., - na_rep: str = ..., - formatters: FormattersType | None = ..., - float_format: FloatFormatType | None = ..., - sparsify: bool | None = ..., - index_names: bool = ..., - justify: str | None = ..., - max_rows: int | None = ..., - max_cols: int | None = ..., - show_dimensions: bool | str = ..., - decimal: str = ..., - bold_rows: bool = ..., - classes: str | list | tuple | None = ..., - escape: bool = ..., - notebook: bool = ..., - border: int | bool | None = ..., - table_id: str | None = ..., - render_links: bool = ..., - encoding: str | None = ..., - ) -> str: - ... - - @Substitution( - header_type="bool", - header="Whether to print column labels, default True", - col_space_type="str or int, list or dict of int or str", - col_space="The minimum width of each column in CSS length " - "units. An int is assumed to be px units.", - ) - @Substitution(shared_params=fmt.common_docstring, returns=fmt.return_docstring) - def to_html( - self, - buf: FilePath | WriteBuffer[str] | None = None, - columns: Axes | None = None, - col_space: ColspaceArgType | None = None, - header: bool = True, - index: bool = True, - na_rep: str = "NaN", - formatters: FormattersType | None = None, - float_format: FloatFormatType | None = None, - sparsify: bool | None = None, - index_names: bool = True, - justify: str | None = None, - max_rows: int | None = None, - max_cols: int | None = None, - show_dimensions: bool | str = False, - decimal: str = ".", - bold_rows: bool = True, - classes: str | list | tuple | None = None, - escape: bool = True, - notebook: bool = False, - border: int | bool | None = None, - table_id: str | None = None, - render_links: bool = False, - encoding: str | None = None, - ) -> str | None: - """ - Render a DataFrame as an HTML table. - %(shared_params)s - bold_rows : bool, default True - Make the row labels bold in the output. 
- classes : str or list or tuple, default None
- CSS class(es) to apply to the resulting html table.
- escape : bool, default True
- Convert the characters <, >, and & to HTML-safe sequences.
- notebook : {True, False}, default False
- Whether the generated HTML is for IPython Notebook.
- border : int
- A ``border=border`` attribute is included in the opening
- `<table>` tag. Default ``pd.options.display.html.border``.
- table_id : str, optional
- A css id is included in the opening `<table>` tag if specified.
- render_links : bool, default False
- Convert URLs to HTML links.
- encoding : str, default "utf-8"
- Set character encoding.
- %(returns)s
- See Also
- --------
- to_string : Convert DataFrame to a string.
-
- Examples
- --------
- >>> df = pd.DataFrame(data={'col1': [1, 2], 'col2': [4, 3]})
- >>> html_string = '''<table border="1" class="dataframe">
- ...   <thead>
- ...     <tr style="text-align: right;">
- ...       <th></th>
- ...       <th>col1</th>
- ...       <th>col2</th>
- ...     </tr>
- ...   </thead>
- ...   <tbody>
- ...     <tr>
- ...       <th>0</th>
- ...       <td>1</td>
- ...       <td>4</td>
- ...     </tr>
- ...     <tr>
- ...       <th>1</th>
- ...       <td>2</td>
- ...       <td>3</td>
- ...     </tr>
- ...   </tbody>
- ... </table>
''' - >>> assert html_string == df.to_html() - """ - if justify is not None and justify not in fmt._VALID_JUSTIFY_PARAMETERS: - raise ValueError("Invalid value for justify parameter") - - formatter = fmt.DataFrameFormatter( - self, - columns=columns, - col_space=col_space, - na_rep=na_rep, - header=header, - index=index, - formatters=formatters, - float_format=float_format, - bold_rows=bold_rows, - sparsify=sparsify, - justify=justify, - index_names=index_names, - escape=escape, - decimal=decimal, - max_rows=max_rows, - max_cols=max_cols, - show_dimensions=show_dimensions, - ) - # TODO: a generic formatter wld b in DataFrameFormatter - return fmt.DataFrameRenderer(formatter).to_html( - buf=buf, - classes=classes, - notebook=notebook, - border=border, - encoding=encoding, - table_id=table_id, - render_links=render_links, - ) - - @doc( - storage_options=_shared_docs["storage_options"], - compression_options=_shared_docs["compression_options"] % "path_or_buffer", - ) - def to_xml( - self, - path_or_buffer: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None, - index: bool = True, - root_name: str | None = "data", - row_name: str | None = "row", - na_rep: str | None = None, - attr_cols: list[str] | None = None, - elem_cols: list[str] | None = None, - namespaces: dict[str | None, str] | None = None, - prefix: str | None = None, - encoding: str = "utf-8", - xml_declaration: bool | None = True, - pretty_print: bool | None = True, - parser: XMLParsers | None = "lxml", - stylesheet: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None, - compression: CompressionOptions = "infer", - storage_options: StorageOptions | None = None, - ) -> str | None: - """ - Render a DataFrame to an XML document. - - .. versionadded:: 1.3.0 - - Parameters - ---------- - path_or_buffer : str, path object, file-like object, or None, default None - String, path object (implementing ``os.PathLike[str]``), or file-like - object implementing a ``write()`` function. If None, the result is returned - as a string. - index : bool, default True - Whether to include index in XML document. - root_name : str, default 'data' - The name of root element in XML document. - row_name : str, default 'row' - The name of row element in XML document. - na_rep : str, optional - Missing data representation. - attr_cols : list-like, optional - List of columns to write as attributes in row element. - Hierarchical columns will be flattened with underscore - delimiting the different levels. - elem_cols : list-like, optional - List of columns to write as children in row element. By default, - all columns output as children of row element. Hierarchical - columns will be flattened with underscore delimiting the - different levels. - namespaces : dict, optional - All namespaces to be defined in root element. Keys of dict - should be prefix names and values of dict corresponding URIs. - Default namespaces should be given empty string key. For - example, :: - - namespaces = {{"": "https://example.com"}} - - prefix : str, optional - Namespace prefix to be used for every element and/or attribute - in document. This should be one of the keys in ``namespaces`` - dict. - encoding : str, default 'utf-8' - Encoding of the resulting document. - xml_declaration : bool, default True - Whether to include the XML declaration at start of document. - pretty_print : bool, default True - Whether output should be pretty printed with indentation and - line breaks. - parser : {{'lxml','etree'}}, default 'lxml' - Parser module to use for building of tree. 
Only 'lxml' and - 'etree' are supported. With 'lxml', the ability to use XSLT - stylesheet is supported. - stylesheet : str, path object or file-like object, optional - A URL, file-like object, or a raw string containing an XSLT - script used to transform the raw XML output. Script should use - layout of elements and attributes from original output. This - argument requires ``lxml`` to be installed. Only XSLT 1.0 - scripts and not later versions is currently supported. - {compression_options} - - .. versionchanged:: 1.4.0 Zstandard support. - - {storage_options} - - Returns - ------- - None or str - If ``io`` is None, returns the resulting XML format as a - string. Otherwise returns None. - - See Also - -------- - to_json : Convert the pandas object to a JSON string. - to_html : Convert DataFrame to a html. - - Examples - -------- - >>> df = pd.DataFrame({{'shape': ['square', 'circle', 'triangle'], - ... 'degrees': [360, 360, 180], - ... 'sides': [4, np.nan, 3]}}) - - >>> df.to_xml() # doctest: +SKIP - - - - 0 - square - 360 - 4.0 - - - 1 - circle - 360 - - - - 2 - triangle - 180 - 3.0 - - - - >>> df.to_xml(attr_cols=[ - ... 'index', 'shape', 'degrees', 'sides' - ... ]) # doctest: +SKIP - - - - - - - - >>> df.to_xml(namespaces={{"doc": "https://example.com"}}, - ... prefix="doc") # doctest: +SKIP - - - - 0 - square - 360 - 4.0 - - - 1 - circle - 360 - - - - 2 - triangle - 180 - 3.0 - - - """ - - from pandas.io.formats.xml import ( - EtreeXMLFormatter, - LxmlXMLFormatter, - ) - - lxml = import_optional_dependency("lxml.etree", errors="ignore") - - TreeBuilder: type[EtreeXMLFormatter] | type[LxmlXMLFormatter] - - if parser == "lxml": - if lxml is not None: - TreeBuilder = LxmlXMLFormatter - else: - raise ImportError( - "lxml not found, please install or use the etree parser." - ) - - elif parser == "etree": - TreeBuilder = EtreeXMLFormatter - - else: - raise ValueError("Values for parser can only be lxml or etree.") - - xml_formatter = TreeBuilder( - self, - path_or_buffer=path_or_buffer, - index=index, - root_name=root_name, - row_name=row_name, - na_rep=na_rep, - attr_cols=attr_cols, - elem_cols=elem_cols, - namespaces=namespaces, - prefix=prefix, - encoding=encoding, - xml_declaration=xml_declaration, - pretty_print=pretty_print, - stylesheet=stylesheet, - compression=compression, - storage_options=storage_options, - ) - - return xml_formatter.write_output() - - # ---------------------------------------------------------------------- - @doc(INFO_DOCSTRING, **frame_sub_kwargs) - def info( - self, - verbose: bool | None = None, - buf: WriteBuffer[str] | None = None, - max_cols: int | None = None, - memory_usage: bool | str | None = None, - show_counts: bool | None = None, - ) -> None: - info = DataFrameInfo( - data=self, - memory_usage=memory_usage, - ) - info.render( - buf=buf, - max_cols=max_cols, - verbose=verbose, - show_counts=show_counts, - ) - - def memory_usage(self, index: bool = True, deep: bool = False) -> Series: - """ - Return the memory usage of each column in bytes. - - The memory usage can optionally include the contribution of - the index and elements of `object` dtype. - - This value is displayed in `DataFrame.info` by default. This can be - suppressed by setting ``pandas.options.display.memory_usage`` to False. - - Parameters - ---------- - index : bool, default True - Specifies whether to include the memory usage of the DataFrame's - index in returned Series. If ``index=True``, the memory usage of - the index is the first item in the output. 
- deep : bool, default False - If True, introspect the data deeply by interrogating - `object` dtypes for system-level memory consumption, and include - it in the returned values. - - Returns - ------- - Series - A Series whose index is the original column names and whose values - is the memory usage of each column in bytes. - - See Also - -------- - numpy.ndarray.nbytes : Total bytes consumed by the elements of an - ndarray. - Series.memory_usage : Bytes consumed by a Series. - Categorical : Memory-efficient array for string values with - many repeated values. - DataFrame.info : Concise summary of a DataFrame. - - Notes - ----- - See the :ref:`Frequently Asked Questions ` for more - details. - - Examples - -------- - >>> dtypes = ['int64', 'float64', 'complex128', 'object', 'bool'] - >>> data = dict([(t, np.ones(shape=5000, dtype=int).astype(t)) - ... for t in dtypes]) - >>> df = pd.DataFrame(data) - >>> df.head() - int64 float64 complex128 object bool - 0 1 1.0 1.0+0.0j 1 True - 1 1 1.0 1.0+0.0j 1 True - 2 1 1.0 1.0+0.0j 1 True - 3 1 1.0 1.0+0.0j 1 True - 4 1 1.0 1.0+0.0j 1 True - - >>> df.memory_usage() - Index 128 - int64 40000 - float64 40000 - complex128 80000 - object 40000 - bool 5000 - dtype: int64 - - >>> df.memory_usage(index=False) - int64 40000 - float64 40000 - complex128 80000 - object 40000 - bool 5000 - dtype: int64 - - The memory footprint of `object` dtype columns is ignored by default: - - >>> df.memory_usage(deep=True) - Index 128 - int64 40000 - float64 40000 - complex128 80000 - object 180000 - bool 5000 - dtype: int64 - - Use a Categorical for efficient storage of an object-dtype column with - many repeated values. - - >>> df['object'].astype('category').memory_usage(deep=True) - 5244 - """ - result = self._constructor_sliced( - [c.memory_usage(index=False, deep=deep) for col, c in self.items()], - index=self.columns, - dtype=np.intp, - ) - if index: - index_memory_usage = self._constructor_sliced( - self.index.memory_usage(deep=deep), index=["Index"] - ) - result = index_memory_usage._append(result) - return result - - def transpose(self, *args, copy: bool = False) -> DataFrame: - """ - Transpose index and columns. - - Reflect the DataFrame over its main diagonal by writing rows as columns - and vice-versa. The property :attr:`.T` is an accessor to the method - :meth:`transpose`. - - Parameters - ---------- - *args : tuple, optional - Accepted for compatibility with NumPy. - copy : bool, default False - Whether to copy the data after transposing, even for DataFrames - with a single dtype. - - Note that a copy is always required for mixed dtype DataFrames, - or for DataFrames with any extension types. - - Returns - ------- - DataFrame - The transposed DataFrame. - - See Also - -------- - numpy.transpose : Permute the dimensions of a given array. - - Notes - ----- - Transposing a DataFrame with mixed dtypes will result in a homogeneous - DataFrame with the `object` dtype. In such a case, a copy of the data - is always made. 
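A small sketch of the note above on mixed dtypes (illustrative only, not part of the original source): transposing a mixed-dtype frame upcasts everything to `object`, and transposing back does not restore the original dtypes.

```python
import pandas as pd

df = pd.DataFrame({"num": [1, 2], "txt": ["x", "y"]})
print(df.dtypes)      # num: int64, txt: object

t = df.T              # mixed dtypes -> every value becomes object
print(t.dtypes)       # both transposed columns are object

print(df.T.T.dtypes)  # still object: the original dtypes are not recovered
```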
- - Examples - -------- - **Square DataFrame with homogeneous dtype** - - >>> d1 = {'col1': [1, 2], 'col2': [3, 4]} - >>> df1 = pd.DataFrame(data=d1) - >>> df1 - col1 col2 - 0 1 3 - 1 2 4 - - >>> df1_transposed = df1.T # or df1.transpose() - >>> df1_transposed - 0 1 - col1 1 2 - col2 3 4 - - When the dtype is homogeneous in the original DataFrame, we get a - transposed DataFrame with the same dtype: - - >>> df1.dtypes - col1 int64 - col2 int64 - dtype: object - >>> df1_transposed.dtypes - 0 int64 - 1 int64 - dtype: object - - **Non-square DataFrame with mixed dtypes** - - >>> d2 = {'name': ['Alice', 'Bob'], - ... 'score': [9.5, 8], - ... 'employed': [False, True], - ... 'kids': [0, 0]} - >>> df2 = pd.DataFrame(data=d2) - >>> df2 - name score employed kids - 0 Alice 9.5 False 0 - 1 Bob 8.0 True 0 - - >>> df2_transposed = df2.T # or df2.transpose() - >>> df2_transposed - 0 1 - name Alice Bob - score 9.5 8.0 - employed False True - kids 0 0 - - When the DataFrame has mixed dtypes, we get a transposed DataFrame with - the `object` dtype: - - >>> df2.dtypes - name object - score float64 - employed bool - kids int64 - dtype: object - >>> df2_transposed.dtypes - 0 object - 1 object - dtype: object - """ - nv.validate_transpose(args, {}) - # construct the args - - dtypes = list(self.dtypes) - - if self._can_fast_transpose: - # Note: tests pass without this, but this improves perf quite a bit. - new_vals = self._values.T - if copy and not using_copy_on_write(): - new_vals = new_vals.copy() - - result = self._constructor( - new_vals, - index=self.columns, - columns=self.index, - copy=False, - dtype=new_vals.dtype, - ) - if using_copy_on_write() and len(self) > 0: - result._mgr.add_references(self._mgr) # type: ignore[arg-type] - - elif ( - self._is_homogeneous_type - and dtypes - and isinstance(dtypes[0], ExtensionDtype) - ): - new_values: list - if isinstance(dtypes[0], BaseMaskedDtype): - # We have masked arrays with the same dtype. We can transpose faster. - from pandas.core.arrays.masked import ( - transpose_homogeneous_masked_arrays, - ) - - new_values = transpose_homogeneous_masked_arrays( - cast(Sequence[BaseMaskedArray], self._iter_column_arrays()) - ) - elif isinstance(dtypes[0], ArrowDtype): - # We have arrow EAs with the same dtype. We can transpose faster. - from pandas.core.arrays.arrow.array import ( - ArrowExtensionArray, - transpose_homogeneous_pyarrow, - ) - - new_values = transpose_homogeneous_pyarrow( - cast(Sequence[ArrowExtensionArray], self._iter_column_arrays()) - ) - else: - # We have other EAs with the same dtype. We preserve dtype in transpose. - dtyp = dtypes[0] - arr_typ = dtyp.construct_array_type() - values = self.values - new_values = [arr_typ._from_sequence(row, dtype=dtyp) for row in values] - - result = type(self)._from_arrays( - new_values, - index=self.columns, - columns=self.index, - verify_integrity=False, - ) - - else: - new_arr = self.values.T - if copy and not using_copy_on_write(): - new_arr = new_arr.copy() - result = self._constructor( - new_arr, - index=self.columns, - columns=self.index, - dtype=new_arr.dtype, - # We already made a copy (more than one block) - copy=False, - ) - - return result.__finalize__(self, method="transpose") - - @property - def T(self) -> DataFrame: - """ - The transpose of the DataFrame. - - Returns - ------- - DataFrame - The transposed DataFrame. - - See Also - -------- - DataFrame.transpose : Transpose index and columns. 
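Beyond the example that follows, one common use of the `.T` accessor is flipping a summary table so that statistics become columns; a hedged sketch with made-up data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.arange(5), "b": np.linspace(0.0, 1.0, 5)})

# describe() returns statistics as rows; .T puts one row per original column,
# which is often easier to scan for wide frames.
summary = df.describe().T
print(summary[["mean", "std", "min", "max"]])
```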
- - Examples - -------- - >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]}) - >>> df - col1 col2 - 0 1 3 - 1 2 4 - - >>> df.T - 0 1 - col1 1 2 - col2 3 4 - """ - return self.transpose() - - # ---------------------------------------------------------------------- - # Indexing Methods - - def _ixs(self, i: int, axis: AxisInt = 0) -> Series: - """ - Parameters - ---------- - i : int - axis : int - - Returns - ------- - Series - """ - # irow - if axis == 0: - new_mgr = self._mgr.fast_xs(i) - - # if we are a copy, mark as such - copy = isinstance(new_mgr.array, np.ndarray) and new_mgr.array.base is None - result = self._constructor_sliced_from_mgr(new_mgr, axes=new_mgr.axes) - result._name = self.index[i] - result = result.__finalize__(self) - result._set_is_copy(self, copy=copy) - return result - - # icol - else: - label = self.columns[i] - - col_mgr = self._mgr.iget(i) - result = self._box_col_values(col_mgr, i) - - # this is a cached value, mark it so - result._set_as_cached(label, self) - return result - - def _get_column_array(self, i: int) -> ArrayLike: - """ - Get the values of the i'th column (ndarray or ExtensionArray, as stored - in the Block) - - Warning! The returned array is a view but doesn't handle Copy-on-Write, - so this should be used with caution (for read-only purposes). - """ - return self._mgr.iget_values(i) - - def _iter_column_arrays(self) -> Iterator[ArrayLike]: - """ - Iterate over the arrays of all columns in order. - This returns the values as stored in the Block (ndarray or ExtensionArray). - - Warning! The returned array is a view but doesn't handle Copy-on-Write, - so this should be used with caution (for read-only purposes). - """ - if isinstance(self._mgr, ArrayManager): - yield from self._mgr.arrays - else: - for i in range(len(self.columns)): - yield self._get_column_array(i) - - def _getitem_nocopy(self, key: list): - """ - Behaves like __getitem__, but returns a view in cases where __getitem__ - would make a copy. - """ - # TODO(CoW): can be removed if/when we are always Copy-on-Write - indexer = self.columns._get_indexer_strict(key, "columns")[1] - new_axis = self.columns[indexer] - - new_mgr = self._mgr.reindex_indexer( - new_axis, - indexer, - axis=0, - allow_dups=True, - copy=False, - only_slice=True, - ) - return self._constructor_from_mgr(new_mgr, axes=new_mgr.axes) - - def __getitem__(self, key): - check_dict_or_set_indexers(key) - key = lib.item_from_zerodim(key) - key = com.apply_if_callable(key, self) - - if is_hashable(key) and not is_iterator(key): - # is_iterator to exclude generator e.g. test_getitem_listlike - # shortcut if the key is in columns - is_mi = isinstance(self.columns, MultiIndex) - # GH#45316 Return view if key is not duplicated - # Only use drop_duplicates with duplicates for performance - if not is_mi and ( - self.columns.is_unique - and key in self.columns - or key in self.columns.drop_duplicates(keep=False) - ): - return self._get_item_cache(key) - - elif is_mi and self.columns.is_unique and key in self.columns: - return self._getitem_multilevel(key) - - # Do we have a slicer (on rows)? - if isinstance(key, slice): - return self._getitem_slice(key) - - # Do we have a (boolean) DataFrame? - if isinstance(key, DataFrame): - return self.where(key) - - # Do we have a (boolean) 1d indexer? 
- if com.is_bool_indexer(key): - return self._getitem_bool_array(key) - - # We are left with two options: a single key, and a collection of keys, - # We interpret tuples as collections only for non-MultiIndex - is_single_key = isinstance(key, tuple) or not is_list_like(key) - - if is_single_key: - if self.columns.nlevels > 1: - return self._getitem_multilevel(key) - indexer = self.columns.get_loc(key) - if is_integer(indexer): - indexer = [indexer] - else: - if is_iterator(key): - key = list(key) - indexer = self.columns._get_indexer_strict(key, "columns")[1] - - # take() does not accept boolean indexers - if getattr(indexer, "dtype", None) == bool: - indexer = np.where(indexer)[0] - - if isinstance(indexer, slice): - return self._slice(indexer, axis=1) - - data = self._take_with_is_copy(indexer, axis=1) - - if is_single_key: - # What does looking for a single key in a non-unique index return? - # The behavior is inconsistent. It returns a Series, except when - # - the key itself is repeated (test on data.shape, #9519), or - # - we have a MultiIndex on columns (test on self.columns, #21309) - if data.shape[1] == 1 and not isinstance(self.columns, MultiIndex): - # GH#26490 using data[key] can cause RecursionError - return data._get_item_cache(key) - - return data - - def _getitem_bool_array(self, key): - # also raises Exception if object array with NA values - # warning here just in case -- previously __setitem__ was - # reindexing but __getitem__ was not; it seems more reasonable to - # go with the __setitem__ behavior since that is more consistent - # with all other indexing behavior - if isinstance(key, Series) and not key.index.equals(self.index): - warnings.warn( - "Boolean Series key will be reindexed to match DataFrame index.", - UserWarning, - stacklevel=find_stack_level(), - ) - elif len(key) != len(self.index): - raise ValueError( - f"Item wrong length {len(key)} instead of {len(self.index)}." - ) - - # check_bool_indexer will throw exception if Series key cannot - # be reindexed to match DataFrame rows - key = check_bool_indexer(self.index, key) - - if key.all(): - return self.copy(deep=None) - - indexer = key.nonzero()[0] - return self._take_with_is_copy(indexer, axis=0) - - def _getitem_multilevel(self, key): - # self.columns is a MultiIndex - loc = self.columns.get_loc(key) - if isinstance(loc, (slice, np.ndarray)): - new_columns = self.columns[loc] - result_columns = maybe_droplevels(new_columns, key) - result = self.iloc[:, loc] - result.columns = result_columns - - # If there is only one column being returned, and its name is - # either an empty string, or a tuple with an empty string as its - # first element, then treat the empty string as a placeholder - # and return the column as if the user had provided that empty - # string in the key. If the result is a Series, exclude the - # implied empty string from its name. - if len(result.columns) == 1: - # e.g. test_frame_getitem_multicolumn_empty_level, - # test_frame_mixed_depth_get, test_loc_setitem_single_column_slice - top = result.columns[0] - if isinstance(top, tuple): - top = top[0] - if top == "": - result = result[""] - if isinstance(result, Series): - result = self._constructor_sliced( - result, index=self.index, name=key - ) - - result._set_is_copy(self) - return result - else: - # loc is neither a slice nor ndarray, so must be an int - return self._ixs(loc, axis=1) - - def _get_value(self, index, col, takeable: bool = False) -> Scalar: - """ - Quickly retrieve single value at passed column and index. 
- - Parameters - ---------- - index : row label - col : column label - takeable : interpret the index/col as indexers, default False - - Returns - ------- - scalar - - Notes - ----- - Assumes that both `self.index._index_as_unique` and - `self.columns._index_as_unique`; Caller is responsible for checking. - """ - if takeable: - series = self._ixs(col, axis=1) - return series._values[index] - - series = self._get_item_cache(col) - engine = self.index._engine - - if not isinstance(self.index, MultiIndex): - # CategoricalIndex: Trying to use the engine fastpath may give incorrect - # results if our categories are integers that dont match our codes - # IntervalIndex: IntervalTree has no get_loc - row = self.index.get_loc(index) - return series._values[row] - - # For MultiIndex going through engine effectively restricts us to - # same-length tuples; see test_get_set_value_no_partial_indexing - loc = engine.get_loc(index) - return series._values[loc] - - def isetitem(self, loc, value) -> None: - """ - Set the given value in the column with position `loc`. - - This is a positional analogue to ``__setitem__``. - - Parameters - ---------- - loc : int or sequence of ints - Index position for the column. - value : scalar or arraylike - Value(s) for the column. - - Notes - ----- - ``frame.isetitem(loc, value)`` is an in-place method as it will - modify the DataFrame in place (not returning a new object). In contrast to - ``frame.iloc[:, i] = value`` which will try to update the existing values in - place, ``frame.isetitem(loc, value)`` will not update the values of the column - itself in place, it will instead insert a new array. - - In cases where ``frame.columns`` is unique, this is equivalent to - ``frame[frame.columns[i]] = value``. - """ - if isinstance(value, DataFrame): - if is_integer(loc): - loc = [loc] - - if len(loc) != len(value.columns): - raise ValueError( - f"Got {len(loc)} positions but value has {len(value.columns)} " - f"columns." 
- ) - - for i, idx in enumerate(loc): - arraylike, refs = self._sanitize_column(value.iloc[:, i]) - self._iset_item_mgr(idx, arraylike, inplace=False, refs=refs) - return - - arraylike, refs = self._sanitize_column(value) - self._iset_item_mgr(loc, arraylike, inplace=False, refs=refs) - - def __setitem__(self, key, value) -> None: - if not PYPY and using_copy_on_write(): - if sys.getrefcount(self) <= 3: - warnings.warn( - _chained_assignment_msg, ChainedAssignmentError, stacklevel=2 - ) - - key = com.apply_if_callable(key, self) - - # see if we can slice the rows - if isinstance(key, slice): - slc = self.index._convert_slice_indexer(key, kind="getitem") - return self._setitem_slice(slc, value) - - if isinstance(key, DataFrame) or getattr(key, "ndim", None) == 2: - self._setitem_frame(key, value) - elif isinstance(key, (Series, np.ndarray, list, Index)): - self._setitem_array(key, value) - elif isinstance(value, DataFrame): - self._set_item_frame_value(key, value) - elif ( - is_list_like(value) - and not self.columns.is_unique - and 1 < len(self.columns.get_indexer_for([key])) == len(value) - ): - # Column to set is duplicated - self._setitem_array([key], value) - else: - # set column - self._set_item(key, value) - - def _setitem_slice(self, key: slice, value) -> None: - # NB: we can't just use self.loc[key] = value because that - # operates on labels and we need to operate positional for - # backwards-compat, xref GH#31469 - self._check_setitem_copy() - self.iloc[key] = value - - def _setitem_array(self, key, value): - # also raises Exception if object array with NA values - if com.is_bool_indexer(key): - # bool indexer is indexing along rows - if len(key) != len(self.index): - raise ValueError( - f"Item wrong length {len(key)} instead of {len(self.index)}!" - ) - key = check_bool_indexer(self.index, key) - indexer = key.nonzero()[0] - self._check_setitem_copy() - if isinstance(value, DataFrame): - # GH#39931 reindex since iloc does not align - value = value.reindex(self.index.take(indexer)) - self.iloc[indexer] = value - - else: - # Note: unlike self.iloc[:, indexer] = value, this will - # never try to overwrite values inplace - - if isinstance(value, DataFrame): - check_key_length(self.columns, key, value) - for k1, k2 in zip(key, value.columns): - self[k1] = value[k2] - - elif not is_list_like(value): - for col in key: - self[col] = value - - elif isinstance(value, np.ndarray) and value.ndim == 2: - self._iset_not_inplace(key, value) - - elif np.ndim(value) > 1: - # list of lists - value = DataFrame(value).values - return self._setitem_array(key, value) - - else: - self._iset_not_inplace(key, value) - - def _iset_not_inplace(self, key, value): - # GH#39510 when setting with df[key] = obj with a list-like key and - # list-like value, we iterate over those listlikes and set columns - # one at a time. This is different from dispatching to - # `self.loc[:, key]= value` because loc.__setitem__ may overwrite - # data inplace, whereas this will insert new arrays. 
- - def igetitem(obj, i: int): - # Note: we catch DataFrame obj before getting here, but - # hypothetically would return obj.iloc[:, i] - if isinstance(obj, np.ndarray): - return obj[..., i] - else: - return obj[i] - - if self.columns.is_unique: - if np.shape(value)[-1] != len(key): - raise ValueError("Columns must be same length as key") - - for i, col in enumerate(key): - self[col] = igetitem(value, i) - - else: - ilocs = self.columns.get_indexer_non_unique(key)[0] - if (ilocs < 0).any(): - # key entries not in self.columns - raise NotImplementedError - - if np.shape(value)[-1] != len(ilocs): - raise ValueError("Columns must be same length as key") - - assert np.ndim(value) <= 2 - - orig_columns = self.columns - - # Using self.iloc[:, i] = ... may set values inplace, which - # by convention we do not do in __setitem__ - try: - self.columns = Index(range(len(self.columns))) - for i, iloc in enumerate(ilocs): - self[iloc] = igetitem(value, i) - finally: - self.columns = orig_columns - - def _setitem_frame(self, key, value): - # support boolean setting with DataFrame input, e.g. - # df[df > df2] = 0 - if isinstance(key, np.ndarray): - if key.shape != self.shape: - raise ValueError("Array conditional must be same shape as self") - key = self._constructor(key, **self._construct_axes_dict(), copy=False) - - if key.size and not all(is_bool_dtype(dtype) for dtype in key.dtypes): - raise TypeError( - "Must pass DataFrame or 2-d ndarray with boolean values only" - ) - - self._check_setitem_copy() - self._where(-key, value, inplace=True) - - def _set_item_frame_value(self, key, value: DataFrame) -> None: - self._ensure_valid_index(value) - - # align columns - if key in self.columns: - loc = self.columns.get_loc(key) - cols = self.columns[loc] - len_cols = 1 if is_scalar(cols) or isinstance(cols, tuple) else len(cols) - if len_cols != len(value.columns): - raise ValueError("Columns must be same length as key") - - # align right-hand-side columns if self.columns - # is multi-index and self[key] is a sub-frame - if isinstance(self.columns, MultiIndex) and isinstance( - loc, (slice, Series, np.ndarray, Index) - ): - cols_droplevel = maybe_droplevels(cols, key) - if len(cols_droplevel) and not cols_droplevel.equals(value.columns): - value = value.reindex(cols_droplevel, axis=1) - - for col, col_droplevel in zip(cols, cols_droplevel): - self[col] = value[col_droplevel] - return - - if is_scalar(cols): - self[cols] = value[value.columns[0]] - return - - locs: np.ndarray | list - if isinstance(loc, slice): - locs = np.arange(loc.start, loc.stop, loc.step) - elif is_scalar(loc): - locs = [loc] - else: - locs = loc.nonzero()[0] - - return self.isetitem(locs, value) - - if len(value.columns) != 1: - raise ValueError( - "Cannot set a DataFrame with multiple columns to the single " - f"column {key}" - ) - - self[key] = value[value.columns[0]] - - def _iset_item_mgr( - self, - loc: int | slice | np.ndarray, - value, - inplace: bool = False, - refs: BlockValuesRefs | None = None, - ) -> None: - # when called from _set_item_mgr loc can be anything returned from get_loc - self._mgr.iset(loc, value, inplace=inplace, refs=refs) - self._clear_item_cache() - - def _set_item_mgr( - self, key, value: ArrayLike, refs: BlockValuesRefs | None = None - ) -> None: - try: - loc = self._info_axis.get_loc(key) - except KeyError: - # This item wasn't present, just insert at end - self._mgr.insert(len(self._info_axis), key, value, refs) - else: - self._iset_item_mgr(loc, value, refs=refs) - - # check if we are modifying a copy - # 
try to set first as we want an invalid - # value exception to occur first - if len(self): - self._check_setitem_copy() - - def _iset_item(self, loc: int, value: Series, inplace: bool = True) -> None: - # We are only called from _replace_columnwise which guarantees that - # no reindex is necessary - if using_copy_on_write(): - self._iset_item_mgr( - loc, value._values, inplace=inplace, refs=value._references - ) - else: - self._iset_item_mgr(loc, value._values.copy(), inplace=True) - - # check if we are modifying a copy - # try to set first as we want an invalid - # value exception to occur first - if len(self): - self._check_setitem_copy() - - def _set_item(self, key, value) -> None: - """ - Add series to DataFrame in specified column. - - If series is a numpy-array (not a Series/TimeSeries), it must be the - same length as the DataFrames index or an error will be thrown. - - Series/TimeSeries will be conformed to the DataFrames index to - ensure homogeneity. - """ - value, refs = self._sanitize_column(value) - - if ( - key in self.columns - and value.ndim == 1 - and not isinstance(value.dtype, ExtensionDtype) - ): - # broadcast across multiple columns if necessary - if not self.columns.is_unique or isinstance(self.columns, MultiIndex): - existing_piece = self[key] - if isinstance(existing_piece, DataFrame): - value = np.tile(value, (len(existing_piece.columns), 1)).T - refs = None - - self._set_item_mgr(key, value, refs) - - def _set_value( - self, index: IndexLabel, col, value: Scalar, takeable: bool = False - ) -> None: - """ - Put single value at passed column and index. - - Parameters - ---------- - index : Label - row label - col : Label - column label - value : scalar - takeable : bool, default False - Sets whether or not index/col interpreted as indexers - """ - try: - if takeable: - icol = col - iindex = cast(int, index) - else: - icol = self.columns.get_loc(col) - iindex = self.index.get_loc(index) - self._mgr.column_setitem(icol, iindex, value, inplace_only=True) - self._clear_item_cache() - - except (KeyError, TypeError, ValueError, LossySetitemError): - # get_loc might raise a KeyError for missing labels (falling back - # to (i)loc will do expansion of the index) - # column_setitem will do validation that may raise TypeError, - # ValueError, or LossySetitemError - # set using a non-recursive method & reset the cache - if takeable: - self.iloc[index, col] = value - else: - self.loc[index, col] = value - self._item_cache.pop(col, None) - - except InvalidIndexError as ii_err: - # GH48729: Seems like you are trying to assign a value to a - # row when only scalar options are permitted - raise InvalidIndexError( - f"You can only assign a scalar value not a {type(value)}" - ) from ii_err - - def _ensure_valid_index(self, value) -> None: - """ - Ensure that if we don't have an index, that we can create one from the - passed value. 
- """ - # GH5632, make sure that we are a Series convertible - if not len(self.index) and is_list_like(value) and len(value): - if not isinstance(value, DataFrame): - try: - value = Series(value) - except (ValueError, NotImplementedError, TypeError) as err: - raise ValueError( - "Cannot set a frame with no defined index " - "and a value that cannot be converted to a Series" - ) from err - - # GH31368 preserve name of index - index_copy = value.index.copy() - if self.index.name is not None: - index_copy.name = self.index.name - - self._mgr = self._mgr.reindex_axis(index_copy, axis=1, fill_value=np.nan) - - def _box_col_values(self, values: SingleDataManager, loc: int) -> Series: - """ - Provide boxed values for a column. - """ - # Lookup in columns so that if e.g. a str datetime was passed - # we attach the Timestamp object as the name. - name = self.columns[loc] - # We get index=self.index bc values is a SingleDataManager - obj = self._constructor_sliced_from_mgr(values, axes=values.axes) - obj._name = name - return obj.__finalize__(self) - - # ---------------------------------------------------------------------- - # Lookup Caching - - def _clear_item_cache(self) -> None: - self._item_cache.clear() - - def _get_item_cache(self, item: Hashable) -> Series: - """Return the cached item, item represents a label indexer.""" - if using_copy_on_write(): - loc = self.columns.get_loc(item) - return self._ixs(loc, axis=1) - - cache = self._item_cache - res = cache.get(item) - if res is None: - # All places that call _get_item_cache have unique columns, - # pending resolution of GH#33047 - - loc = self.columns.get_loc(item) - res = self._ixs(loc, axis=1) - - cache[item] = res - - # for a chain - res._is_copy = self._is_copy - return res - - def _reset_cacher(self) -> None: - # no-op for DataFrame - pass - - def _maybe_cache_changed(self, item, value: Series, inplace: bool) -> None: - """ - The object has called back to us saying maybe it has changed. - """ - loc = self._info_axis.get_loc(item) - arraylike = value._values - - old = self._ixs(loc, axis=1) - if old._values is value._values and inplace: - # GH#46149 avoid making unnecessary copies/block-splitting - return - - self._mgr.iset(loc, arraylike, inplace=inplace) - - # ---------------------------------------------------------------------- - # Unsorted - - @overload - def query(self, expr: str, *, inplace: Literal[False] = ..., **kwargs) -> DataFrame: - ... - - @overload - def query(self, expr: str, *, inplace: Literal[True], **kwargs) -> None: - ... - - @overload - def query(self, expr: str, *, inplace: bool = ..., **kwargs) -> DataFrame | None: - ... - - def query(self, expr: str, *, inplace: bool = False, **kwargs) -> DataFrame | None: - """ - Query the columns of a DataFrame with a boolean expression. - - Parameters - ---------- - expr : str - The query string to evaluate. - - You can refer to variables - in the environment by prefixing them with an '@' character like - ``@a + b``. - - You can refer to column names that are not valid Python variable names - by surrounding them in backticks. Thus, column names containing spaces - or punctuations (besides underscores) or starting with digits must be - surrounded by backticks. (For example, a column named "Area (cm^2)" would - be referenced as ```Area (cm^2)```). Column names which are Python keywords - (like "list", "for", "import", etc) cannot be used. - - For example, if one of your columns is called ``a a`` and you want - to sum it with ``b``, your query should be ```a a` + b``. 
- - inplace : bool - Whether to modify the DataFrame rather than creating a new one. - **kwargs - See the documentation for :func:`eval` for complete details - on the keyword arguments accepted by :meth:`DataFrame.query`. - - Returns - ------- - DataFrame or None - DataFrame resulting from the provided query expression or - None if ``inplace=True``. - - See Also - -------- - eval : Evaluate a string describing operations on - DataFrame columns. - DataFrame.eval : Evaluate a string describing operations on - DataFrame columns. - - Notes - ----- - The result of the evaluation of this expression is first passed to - :attr:`DataFrame.loc` and if that fails because of a - multidimensional key (e.g., a DataFrame) then the result will be passed - to :meth:`DataFrame.__getitem__`. - - This method uses the top-level :func:`eval` function to - evaluate the passed query. - - The :meth:`~pandas.DataFrame.query` method uses a slightly - modified Python syntax by default. For example, the ``&`` and ``|`` - (bitwise) operators have the precedence of their boolean cousins, - :keyword:`and` and :keyword:`or`. This *is* syntactically valid Python, - however the semantics are different. - - You can change the semantics of the expression by passing the keyword - argument ``parser='python'``. This enforces the same semantics as - evaluation in Python space. Likewise, you can pass ``engine='python'`` - to evaluate an expression using Python itself as a backend. This is not - recommended as it is inefficient compared to using ``numexpr`` as the - engine. - - The :attr:`DataFrame.index` and - :attr:`DataFrame.columns` attributes of the - :class:`~pandas.DataFrame` instance are placed in the query namespace - by default, which allows you to treat both the index and columns of the - frame as a column in the frame. - The identifier ``index`` is used for the frame index; you can also - use the name of the index to identify it in a query. Please note that - Python keywords may not be used as identifiers. - - For further details and examples see the ``query`` documentation in - :ref:`indexing `. - - *Backtick quoted variables* - - Backtick quoted variables are parsed as literal Python code and - are converted internally to a Python valid identifier. - This can lead to the following problems. - - During parsing a number of disallowed characters inside the backtick - quoted string are replaced by strings that are allowed as a Python identifier. - These characters include all operators in Python, the space character, the - question mark, the exclamation mark, the dollar sign, and the euro sign. - For other characters that fall outside the ASCII range (U+0001..U+007F) - and those that are not further specified in PEP 3131, - the query parser will raise an error. - This excludes whitespace different than the space character, - but also the hashtag (as it is used for comments) and the backtick - itself (backtick can also not be escaped). - - In a special case, quotes that make a pair around a backtick can - confuse the parser. - For example, ```it's` > `that's``` will raise an error, - as it forms a quoted string (``'s > `that'``) with a backtick inside. - - See also the Python documentation about lexical analysis - (https://docs.python.org/3/reference/lexical_analysis.html) - in combination with the source code in :mod:`pandas.core.computation.parsing`. - - Examples - -------- - >>> df = pd.DataFrame({'A': range(1, 6), - ... 'B': range(10, 0, -2), - ... 
'C C': range(10, 5, -1)}) - >>> df - A B C C - 0 1 10 10 - 1 2 8 9 - 2 3 6 8 - 3 4 4 7 - 4 5 2 6 - >>> df.query('A > B') - A B C C - 4 5 2 6 - - The previous expression is equivalent to - - >>> df[df.A > df.B] - A B C C - 4 5 2 6 - - For columns with spaces in their name, you can use backtick quoting. - - >>> df.query('B == `C C`') - A B C C - 0 1 10 10 - - The previous expression is equivalent to - - >>> df[df.B == df['C C']] - A B C C - 0 1 10 10 - """ - inplace = validate_bool_kwarg(inplace, "inplace") - if not isinstance(expr, str): - msg = f"expr must be a string to be evaluated, {type(expr)} given" - raise ValueError(msg) - kwargs["level"] = kwargs.pop("level", 0) + 1 - kwargs["target"] = None - res = self.eval(expr, **kwargs) - - try: - result = self.loc[res] - except ValueError: - # when res is multi-dimensional loc raises, but this is sometimes a - # valid query - result = self[res] - - if inplace: - self._update_inplace(result) - return None - else: - return result - - @overload - def eval(self, expr: str, *, inplace: Literal[False] = ..., **kwargs) -> Any: - ... - - @overload - def eval(self, expr: str, *, inplace: Literal[True], **kwargs) -> None: - ... - - def eval(self, expr: str, *, inplace: bool = False, **kwargs) -> Any | None: - """ - Evaluate a string describing operations on DataFrame columns. - - Operates on columns only, not specific rows or elements. This allows - `eval` to run arbitrary code, which can make you vulnerable to code - injection if you pass user input to this function. - - Parameters - ---------- - expr : str - The expression string to evaluate. - inplace : bool, default False - If the expression contains an assignment, whether to perform the - operation inplace and mutate the existing DataFrame. Otherwise, - a new DataFrame is returned. - **kwargs - See the documentation for :func:`eval` for complete details - on the keyword arguments accepted by - :meth:`~pandas.DataFrame.query`. - - Returns - ------- - ndarray, scalar, pandas object, or None - The result of the evaluation or None if ``inplace=True``. - - See Also - -------- - DataFrame.query : Evaluates a boolean expression to query the columns - of a frame. - DataFrame.assign : Can evaluate an expression or function to create new - values for a column. - eval : Evaluate a Python expression as a string using various - backends. - - Notes - ----- - For more details see the API documentation for :func:`~eval`. - For detailed examples see :ref:`enhancing performance with eval - `. - - Examples - -------- - >>> df = pd.DataFrame({'A': range(1, 6), 'B': range(10, 0, -2)}) - >>> df - A B - 0 1 10 - 1 2 8 - 2 3 6 - 3 4 4 - 4 5 2 - >>> df.eval('A + B') - 0 11 - 1 10 - 2 9 - 3 8 - 4 7 - dtype: int64 - - Assignment is allowed though by default the original DataFrame is not - modified. - - >>> df.eval('C = A + B') - A B C - 0 1 10 11 - 1 2 8 10 - 2 3 6 9 - 3 4 4 8 - 4 5 2 7 - >>> df - A B - 0 1 10 - 1 2 8 - 2 3 6 - 3 4 4 - 4 5 2 - - Multiple columns can be assigned to using multi-line expressions: - - >>> df.eval( - ... ''' - ... C = A + B - ... D = A - B - ... ''' - ... 
) - A B C D - 0 1 10 11 -9 - 1 2 8 10 -6 - 2 3 6 9 -3 - 3 4 4 8 0 - 4 5 2 7 3 - """ - from pandas.core.computation.eval import eval as _eval - - inplace = validate_bool_kwarg(inplace, "inplace") - kwargs["level"] = kwargs.pop("level", 0) + 1 - index_resolvers = self._get_index_resolvers() - column_resolvers = self._get_cleaned_column_resolvers() - resolvers = column_resolvers, index_resolvers - if "target" not in kwargs: - kwargs["target"] = self - kwargs["resolvers"] = tuple(kwargs.get("resolvers", ())) + resolvers - - return _eval(expr, inplace=inplace, **kwargs) - - def select_dtypes(self, include=None, exclude=None) -> Self: - """ - Return a subset of the DataFrame's columns based on the column dtypes. - - Parameters - ---------- - include, exclude : scalar or list-like - A selection of dtypes or strings to be included/excluded. At least - one of these parameters must be supplied. - - Returns - ------- - DataFrame - The subset of the frame including the dtypes in ``include`` and - excluding the dtypes in ``exclude``. - - Raises - ------ - ValueError - * If both of ``include`` and ``exclude`` are empty - * If ``include`` and ``exclude`` have overlapping elements - * If any kind of string dtype is passed in. - - See Also - -------- - DataFrame.dtypes: Return Series with the data type of each column. - - Notes - ----- - * To select all *numeric* types, use ``np.number`` or ``'number'`` - * To select strings you must use the ``object`` dtype, but note that - this will return *all* object dtype columns - * See the `numpy dtype hierarchy - `__ - * To select datetimes, use ``np.datetime64``, ``'datetime'`` or - ``'datetime64'`` - * To select timedeltas, use ``np.timedelta64``, ``'timedelta'`` or - ``'timedelta64'`` - * To select Pandas categorical dtypes, use ``'category'`` - * To select Pandas datetimetz dtypes, use ``'datetimetz'`` - or ``'datetime64[ns, tz]'`` - - Examples - -------- - >>> df = pd.DataFrame({'a': [1, 2] * 3, - ... 'b': [True, False] * 3, - ... 
'c': [1.0, 2.0] * 3}) - >>> df - a b c - 0 1 True 1.0 - 1 2 False 2.0 - 2 1 True 1.0 - 3 2 False 2.0 - 4 1 True 1.0 - 5 2 False 2.0 - - >>> df.select_dtypes(include='bool') - b - 0 True - 1 False - 2 True - 3 False - 4 True - 5 False - - >>> df.select_dtypes(include=['float64']) - c - 0 1.0 - 1 2.0 - 2 1.0 - 3 2.0 - 4 1.0 - 5 2.0 - - >>> df.select_dtypes(exclude=['int64']) - b c - 0 True 1.0 - 1 False 2.0 - 2 True 1.0 - 3 False 2.0 - 4 True 1.0 - 5 False 2.0 - """ - if not is_list_like(include): - include = (include,) if include is not None else () - if not is_list_like(exclude): - exclude = (exclude,) if exclude is not None else () - - selection = (frozenset(include), frozenset(exclude)) - - if not any(selection): - raise ValueError("at least one of include or exclude must be nonempty") - - # convert the myriad valid dtypes object to a single representation - def check_int_infer_dtype(dtypes): - converted_dtypes: list[type] = [] - for dtype in dtypes: - # Numpy maps int to different types (int32, in64) on Windows and Linux - # see https://github.com/numpy/numpy/issues/9464 - if (isinstance(dtype, str) and dtype == "int") or (dtype is int): - converted_dtypes.append(np.int32) - converted_dtypes.append(np.int64) - elif dtype == "float" or dtype is float: - # GH#42452 : np.dtype("float") coerces to np.float64 from Numpy 1.20 - converted_dtypes.extend([np.float64, np.float32]) - else: - converted_dtypes.append(infer_dtype_from_object(dtype)) - return frozenset(converted_dtypes) - - include = check_int_infer_dtype(include) - exclude = check_int_infer_dtype(exclude) - - for dtypes in (include, exclude): - invalidate_string_dtypes(dtypes) - - # can't both include AND exclude! - if not include.isdisjoint(exclude): - raise ValueError(f"include and exclude overlap on {(include & exclude)}") - - def dtype_predicate(dtype: DtypeObj, dtypes_set) -> bool: - # GH 46870: BooleanDtype._is_numeric == True but should be excluded - dtype = dtype if not isinstance(dtype, ArrowDtype) else dtype.numpy_dtype - return issubclass(dtype.type, tuple(dtypes_set)) or ( - np.number in dtypes_set - and getattr(dtype, "_is_numeric", False) - and not is_bool_dtype(dtype) - ) - - def predicate(arr: ArrayLike) -> bool: - dtype = arr.dtype - if include: - if not dtype_predicate(dtype, include): - return False - - if exclude: - if dtype_predicate(dtype, exclude): - return False - - return True - - mgr = self._mgr._get_data_subset(predicate).copy(deep=None) - return self._constructor_from_mgr(mgr, axes=mgr.axes).__finalize__(self) - - def insert( - self, - loc: int, - column: Hashable, - value: Scalar | AnyArrayLike, - allow_duplicates: bool | lib.NoDefault = lib.no_default, - ) -> None: - """ - Insert column into DataFrame at specified location. - - Raises a ValueError if `column` is already contained in the DataFrame, - unless `allow_duplicates` is set to True. - - Parameters - ---------- - loc : int - Insertion index. Must verify 0 <= loc <= len(columns). - column : str, number, or hashable object - Label of the inserted column. - value : Scalar, Series, or array-like - allow_duplicates : bool, optional, default lib.no_default - - See Also - -------- - Index.insert : Insert new item by index. 
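An editorial aside on the ``select_dtypes`` logic above: the generic ``int`` and ``float`` requests are expanded into concrete NumPy dtypes, and boolean columns are deliberately kept out of ``np.number`` by the dtype predicate. A minimal sketch with invented column names and data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "ints": np.array([1, 2, 3], dtype="int32"),
        "floats": [0.1, 0.2, 0.3],
        "flags": [True, False, True],
        "labels": ["a", "b", "c"],
    }
)

# 'int' is expanded to both int32 and int64 internally, so the int32 column
# is matched regardless of the platform's default integer width.
print(df.select_dtypes(include="int").columns.tolist())      # ['ints']

# np.number matches integer and float columns; bools are excluded on purpose.
print(df.select_dtypes(include=np.number).columns.tolist())  # ['ints', 'floats']

# include and exclude can be combined as long as they do not overlap.
print(df.select_dtypes(include=np.number, exclude="float").columns.tolist())  # ['ints']
```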
- - Examples - -------- - >>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]}) - >>> df - col1 col2 - 0 1 3 - 1 2 4 - >>> df.insert(1, "newcol", [99, 99]) - >>> df - col1 newcol col2 - 0 1 99 3 - 1 2 99 4 - >>> df.insert(0, "col1", [100, 100], allow_duplicates=True) - >>> df - col1 col1 newcol col2 - 0 100 1 99 3 - 1 100 2 99 4 - - Notice that pandas uses index alignment in case of `value` from type `Series`: - - >>> df.insert(0, "col0", pd.Series([5, 6], index=[1, 2])) - >>> df - col0 col1 col1 newcol col2 - 0 NaN 100 1 99 3 - 1 5.0 100 2 99 4 - """ - if allow_duplicates is lib.no_default: - allow_duplicates = False - if allow_duplicates and not self.flags.allows_duplicate_labels: - raise ValueError( - "Cannot specify 'allow_duplicates=True' when " - "'self.flags.allows_duplicate_labels' is False." - ) - if not allow_duplicates and column in self.columns: - # Should this be a different kind of error?? - raise ValueError(f"cannot insert {column}, already exists") - if not is_integer(loc): - raise TypeError("loc must be int") - # convert non stdlib ints to satisfy typing checks - loc = int(loc) - if isinstance(value, DataFrame) and len(value.columns) > 1: - raise ValueError( - f"Expected a one-dimensional object, got a DataFrame with " - f"{len(value.columns)} columns instead." - ) - elif isinstance(value, DataFrame): - value = value.iloc[:, 0] - - value, refs = self._sanitize_column(value) - self._mgr.insert(loc, column, value, refs=refs) - - def assign(self, **kwargs) -> DataFrame: - r""" - Assign new columns to a DataFrame. - - Returns a new object with all original columns in addition to new ones. - Existing columns that are re-assigned will be overwritten. - - Parameters - ---------- - **kwargs : dict of {str: callable or Series} - The column names are keywords. If the values are - callable, they are computed on the DataFrame and - assigned to the new columns. The callable must not - change input DataFrame (though pandas doesn't check it). - If the values are not callable, (e.g. a Series, scalar, or array), - they are simply assigned. - - Returns - ------- - DataFrame - A new DataFrame with the new columns in addition to - all the existing columns. - - Notes - ----- - Assigning multiple columns within the same ``assign`` is possible. - Later items in '\*\*kwargs' may refer to newly created or modified - columns in 'df'; items are computed and assigned into 'df' in order. - - Examples - -------- - >>> df = pd.DataFrame({'temp_c': [17.0, 25.0]}, - ... index=['Portland', 'Berkeley']) - >>> df - temp_c - Portland 17.0 - Berkeley 25.0 - - Where the value is a callable, evaluated on `df`: - - >>> df.assign(temp_f=lambda x: x.temp_c * 9 / 5 + 32) - temp_c temp_f - Portland 17.0 62.6 - Berkeley 25.0 77.0 - - Alternatively, the same behavior can be achieved by directly - referencing an existing Series or sequence: - - >>> df.assign(temp_f=df['temp_c'] * 9 / 5 + 32) - temp_c temp_f - Portland 17.0 62.6 - Berkeley 25.0 77.0 - - You can create multiple columns within the same assign where one - of the columns depends on another one defined within the same assign: - - >>> df.assign(temp_f=lambda x: x['temp_c'] * 9 / 5 + 32, - ... 
temp_k=lambda x: (x['temp_f'] + 459.67) * 5 / 9) - temp_c temp_f temp_k - Portland 17.0 62.6 290.15 - Berkeley 25.0 77.0 298.15 - """ - data = self.copy(deep=None) - - for k, v in kwargs.items(): - data[k] = com.apply_if_callable(v, data) - return data - - def _sanitize_column(self, value) -> tuple[ArrayLike, BlockValuesRefs | None]: - """ - Ensures new columns (which go into the BlockManager as new blocks) are - always copied (or a reference is being tracked to them under CoW) - and converted into an array. - - Parameters - ---------- - value : scalar, Series, or array-like - - Returns - ------- - tuple of numpy.ndarray or ExtensionArray and optional BlockValuesRefs - """ - self._ensure_valid_index(value) - - # Using a DataFrame would mean coercing values to one dtype - assert not isinstance(value, DataFrame) - if is_dict_like(value): - if not isinstance(value, Series): - value = Series(value) - return _reindex_for_setitem(value, self.index) - - if is_list_like(value): - com.require_length_match(value, self.index) - return sanitize_array(value, self.index, copy=True, allow_2d=True), None - - @property - def _series(self): - return { - item: Series( - self._mgr.iget(idx), index=self.index, name=item, fastpath=True - ) - for idx, item in enumerate(self.columns) - } - - # ---------------------------------------------------------------------- - # Reindexing and alignment - - def _reindex_multi( - self, axes: dict[str, Index], copy: bool, fill_value - ) -> DataFrame: - """ - We are guaranteed non-Nones in the axes. - """ - - new_index, row_indexer = self.index.reindex(axes["index"]) - new_columns, col_indexer = self.columns.reindex(axes["columns"]) - - if row_indexer is not None and col_indexer is not None: - # Fastpath. By doing two 'take's at once we avoid making an - # unnecessary copy. - # We only get here with `self._can_fast_transpose`, which (almost) - # ensures that self.values is cheap. It may be worth making this - # condition more specific. - indexer = row_indexer, col_indexer - new_values = take_2d_multi(self.values, indexer, fill_value=fill_value) - return self._constructor( - new_values, index=new_index, columns=new_columns, copy=False - ) - else: - return self._reindex_with_indexers( - {0: [new_index, row_indexer], 1: [new_columns, col_indexer]}, - copy=copy, - fill_value=fill_value, - ) - - @Appender( - """ - Examples - -------- - >>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) - - Change the row labels. - - >>> df.set_axis(['a', 'b', 'c'], axis='index') - A B - a 1 4 - b 2 5 - c 3 6 - - Change the column labels. 
- - >>> df.set_axis(['I', 'II'], axis='columns') - I II - 0 1 4 - 1 2 5 - 2 3 6 - """ - ) - @Substitution( - klass=_shared_doc_kwargs["klass"], - axes_single_arg=_shared_doc_kwargs["axes_single_arg"], - extended_summary_sub=" column or", - axis_description_sub=", and 1 identifies the columns", - see_also_sub=" or columns", - ) - @Appender(NDFrame.set_axis.__doc__) - def set_axis( - self, - labels, - *, - axis: Axis = 0, - copy: bool | None = None, - ) -> DataFrame: - return super().set_axis(labels, axis=axis, copy=copy) - - @doc( - NDFrame.reindex, - klass=_shared_doc_kwargs["klass"], - optional_reindex=_shared_doc_kwargs["optional_reindex"], - ) - def reindex( - self, - labels=None, - *, - index=None, - columns=None, - axis: Axis | None = None, - method: ReindexMethod | None = None, - copy: bool | None = None, - level: Level | None = None, - fill_value: Scalar | None = np.nan, - limit: int | None = None, - tolerance=None, - ) -> DataFrame: - return super().reindex( - labels=labels, - index=index, - columns=columns, - axis=axis, - method=method, - copy=copy, - level=level, - fill_value=fill_value, - limit=limit, - tolerance=tolerance, - ) - - @overload - def drop( - self, - labels: IndexLabel = ..., - *, - axis: Axis = ..., - index: IndexLabel = ..., - columns: IndexLabel = ..., - level: Level = ..., - inplace: Literal[True], - errors: IgnoreRaise = ..., - ) -> None: - ... - - @overload - def drop( - self, - labels: IndexLabel = ..., - *, - axis: Axis = ..., - index: IndexLabel = ..., - columns: IndexLabel = ..., - level: Level = ..., - inplace: Literal[False] = ..., - errors: IgnoreRaise = ..., - ) -> DataFrame: - ... - - @overload - def drop( - self, - labels: IndexLabel = ..., - *, - axis: Axis = ..., - index: IndexLabel = ..., - columns: IndexLabel = ..., - level: Level = ..., - inplace: bool = ..., - errors: IgnoreRaise = ..., - ) -> DataFrame | None: - ... - - def drop( - self, - labels: IndexLabel | None = None, - *, - axis: Axis = 0, - index: IndexLabel | None = None, - columns: IndexLabel | None = None, - level: Level | None = None, - inplace: bool = False, - errors: IgnoreRaise = "raise", - ) -> DataFrame | None: - """ - Drop specified labels from rows or columns. - - Remove rows or columns by specifying label names and corresponding - axis, or by directly specifying index or column names. When using a - multi-index, labels on different levels can be removed by specifying - the level. See the :ref:`user guide ` - for more information about the now unused levels. - - Parameters - ---------- - labels : single label or list-like - Index or column labels to drop. A tuple will be used as a single - label and not treated as a list-like. - axis : {0 or 'index', 1 or 'columns'}, default 0 - Whether to drop labels from the index (0 or 'index') or - columns (1 or 'columns'). - index : single label or list-like - Alternative to specifying axis (``labels, axis=0`` - is equivalent to ``index=labels``). - columns : single label or list-like - Alternative to specifying axis (``labels, axis=1`` - is equivalent to ``columns=labels``). - level : int or level name, optional - For MultiIndex, level from which the labels will be removed. - inplace : bool, default False - If False, return a copy. Otherwise, do operation - in place and return None. - errors : {'ignore', 'raise'}, default 'raise' - If 'ignore', suppress error and only existing labels are - dropped. 
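``reindex`` above forwards to the shared ``NDFrame`` implementation; as a quick illustration (the frame and labels below are invented), the ``index`` and ``columns`` keywords can be passed directly, and ``fill_value`` controls what newly introduced labels receive:

```python
import pandas as pd

df = pd.DataFrame({"price": [10.0, 11.5]}, index=["2024-01-01", "2024-01-02"])

# Rows not present in the new index are filled with fill_value instead of NaN.
out = df.reindex(index=["2024-01-01", "2024-01-02", "2024-01-03"], fill_value=0.0)
print(out)

# Columns can be reindexed in the same call; the new column is NaN here
# because no fill_value is supplied.
out2 = df.reindex(columns=["price", "volume"])
print(out2)
```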
- - Returns - ------- - DataFrame or None - Returns DataFrame or None DataFrame with the specified - index or column labels removed or None if inplace=True. - - Raises - ------ - KeyError - If any of the labels is not found in the selected axis. - - See Also - -------- - DataFrame.loc : Label-location based indexer for selection by label. - DataFrame.dropna : Return DataFrame with labels on given axis omitted - where (all or any) data are missing. - DataFrame.drop_duplicates : Return DataFrame with duplicate rows - removed, optionally only considering certain columns. - Series.drop : Return Series with specified index labels removed. - - Examples - -------- - >>> df = pd.DataFrame(np.arange(12).reshape(3, 4), - ... columns=['A', 'B', 'C', 'D']) - >>> df - A B C D - 0 0 1 2 3 - 1 4 5 6 7 - 2 8 9 10 11 - - Drop columns - - >>> df.drop(['B', 'C'], axis=1) - A D - 0 0 3 - 1 4 7 - 2 8 11 - - >>> df.drop(columns=['B', 'C']) - A D - 0 0 3 - 1 4 7 - 2 8 11 - - Drop a row by index - - >>> df.drop([0, 1]) - A B C D - 2 8 9 10 11 - - Drop columns and/or rows of MultiIndex DataFrame - - >>> midx = pd.MultiIndex(levels=[['llama', 'cow', 'falcon'], - ... ['speed', 'weight', 'length']], - ... codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2], - ... [0, 1, 2, 0, 1, 2, 0, 1, 2]]) - >>> df = pd.DataFrame(index=midx, columns=['big', 'small'], - ... data=[[45, 30], [200, 100], [1.5, 1], [30, 20], - ... [250, 150], [1.5, 0.8], [320, 250], - ... [1, 0.8], [0.3, 0.2]]) - >>> df - big small - llama speed 45.0 30.0 - weight 200.0 100.0 - length 1.5 1.0 - cow speed 30.0 20.0 - weight 250.0 150.0 - length 1.5 0.8 - falcon speed 320.0 250.0 - weight 1.0 0.8 - length 0.3 0.2 - - Drop a specific index combination from the MultiIndex - DataFrame, i.e., drop the combination ``'falcon'`` and - ``'weight'``, which deletes only the corresponding row - - >>> df.drop(index=('falcon', 'weight')) - big small - llama speed 45.0 30.0 - weight 200.0 100.0 - length 1.5 1.0 - cow speed 30.0 20.0 - weight 250.0 150.0 - length 1.5 0.8 - falcon speed 320.0 250.0 - length 0.3 0.2 - - >>> df.drop(index='cow', columns='small') - big - llama speed 45.0 - weight 200.0 - length 1.5 - falcon speed 320.0 - weight 1.0 - length 0.3 - - >>> df.drop(index='length', level=1) - big small - llama speed 45.0 30.0 - weight 200.0 100.0 - cow speed 30.0 20.0 - weight 250.0 150.0 - falcon speed 320.0 250.0 - weight 1.0 0.8 - """ - return super().drop( - labels=labels, - axis=axis, - index=index, - columns=columns, - level=level, - inplace=inplace, - errors=errors, - ) - - @overload - def rename( - self, - mapper: Renamer | None = ..., - *, - index: Renamer | None = ..., - columns: Renamer | None = ..., - axis: Axis | None = ..., - copy: bool | None = ..., - inplace: Literal[True], - level: Level = ..., - errors: IgnoreRaise = ..., - ) -> None: - ... - - @overload - def rename( - self, - mapper: Renamer | None = ..., - *, - index: Renamer | None = ..., - columns: Renamer | None = ..., - axis: Axis | None = ..., - copy: bool | None = ..., - inplace: Literal[False] = ..., - level: Level = ..., - errors: IgnoreRaise = ..., - ) -> DataFrame: - ... - - @overload - def rename( - self, - mapper: Renamer | None = ..., - *, - index: Renamer | None = ..., - columns: Renamer | None = ..., - axis: Axis | None = ..., - copy: bool | None = ..., - inplace: bool = ..., - level: Level = ..., - errors: IgnoreRaise = ..., - ) -> DataFrame | None: - ... 
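A small invented example of the two equivalent ``drop`` calling styles documented above, plus the ``errors='ignore'`` escape hatch:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [3, 4], "C": [5, 6]})

# labels + axis=1 and the columns= keyword are two spellings of the same drop.
assert df.drop(["B"], axis=1).equals(df.drop(columns=["B"]))

# With the default errors='raise' a missing label is a KeyError;
# errors='ignore' silently drops only the labels that exist.
trimmed = df.drop(columns=["B", "Z"], errors="ignore")
print(trimmed.columns.tolist())  # ['A', 'C']
```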
- - def rename( - self, - mapper: Renamer | None = None, - *, - index: Renamer | None = None, - columns: Renamer | None = None, - axis: Axis | None = None, - copy: bool | None = None, - inplace: bool = False, - level: Level | None = None, - errors: IgnoreRaise = "ignore", - ) -> DataFrame | None: - """ - Rename columns or index labels. - - Function / dict values must be unique (1-to-1). Labels not contained in - a dict / Series will be left as-is. Extra labels listed don't throw an - error. - - See the :ref:`user guide ` for more. - - Parameters - ---------- - mapper : dict-like or function - Dict-like or function transformations to apply to - that axis' values. Use either ``mapper`` and ``axis`` to - specify the axis to target with ``mapper``, or ``index`` and - ``columns``. - index : dict-like or function - Alternative to specifying axis (``mapper, axis=0`` - is equivalent to ``index=mapper``). - columns : dict-like or function - Alternative to specifying axis (``mapper, axis=1`` - is equivalent to ``columns=mapper``). - axis : {0 or 'index', 1 or 'columns'}, default 0 - Axis to target with ``mapper``. Can be either the axis name - ('index', 'columns') or number (0, 1). The default is 'index'. - copy : bool, default True - Also copy underlying data. - inplace : bool, default False - Whether to modify the DataFrame rather than creating a new one. - If True then value of copy is ignored. - level : int or level name, default None - In case of a MultiIndex, only rename labels in the specified - level. - errors : {'ignore', 'raise'}, default 'ignore' - If 'raise', raise a `KeyError` when a dict-like `mapper`, `index`, - or `columns` contains labels that are not present in the Index - being transformed. - If 'ignore', existing keys will be renamed and extra keys will be - ignored. - - Returns - ------- - DataFrame or None - DataFrame with the renamed axis labels or None if ``inplace=True``. - - Raises - ------ - KeyError - If any of the labels is not found in the selected axis and - "errors='raise'". - - See Also - -------- - DataFrame.rename_axis : Set the name of the axis. - - Examples - -------- - ``DataFrame.rename`` supports two calling conventions - - * ``(index=index_mapper, columns=columns_mapper, ...)`` - * ``(mapper, axis={'index', 'columns'}, ...)`` - - We *highly* recommend using keyword arguments to clarify your - intent. - - Rename columns using a mapping: - - >>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) - >>> df.rename(columns={"A": "a", "B": "c"}) - a c - 0 1 4 - 1 2 5 - 2 3 6 - - Rename index using a mapping: - - >>> df.rename(index={0: "x", 1: "y", 2: "z"}) - A B - x 1 4 - y 2 5 - z 3 6 - - Cast index labels to a different type: - - >>> df.index - RangeIndex(start=0, stop=3, step=1) - >>> df.rename(index=str).index - Index(['0', '1', '2'], dtype='object') - - >>> df.rename(columns={"A": "a", "B": "b", "C": "c"}, errors="raise") - Traceback (most recent call last): - KeyError: ['C'] not found in axis - - Using axis-style parameters: - - >>> df.rename(str.lower, axis='columns') - a b - 0 1 4 - 1 2 5 - 2 3 6 - - >>> df.rename({1: 2, 2: 4}, axis='index') - A B - 0 1 4 - 2 2 5 - 4 3 6 - """ - return super()._rename( - mapper=mapper, - index=index, - columns=columns, - axis=axis, - copy=copy, - inplace=inplace, - level=level, - errors=errors, - ) - - def pop(self, item: Hashable) -> Series: - """ - Return item and drop from frame. Raise KeyError if not found. - - Parameters - ---------- - item : label - Label of column to be popped. 
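To make the ``rename`` semantics above concrete, a short sketch with made-up labels showing the ``columns=`` mapping, the axis-style callable form, and ``errors='raise'``:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})

# Dict mapper via the columns= keyword; unknown keys are ignored by default.
print(df.rename(columns={"A": "a", "Z": "z"}).columns.tolist())  # ['a', 'B']

# Callable mapper with the axis-style form.
print(df.rename(str.lower, axis="columns").columns.tolist())     # ['a', 'b']

# errors='raise' turns a silently-ignored missing key into a KeyError.
try:
    df.rename(columns={"Z": "z"}, errors="raise")
except KeyError as exc:
    print("raised:", exc)
```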
- - Returns - ------- - Series - - Examples - -------- - >>> df = pd.DataFrame([('falcon', 'bird', 389.0), - ... ('parrot', 'bird', 24.0), - ... ('lion', 'mammal', 80.5), - ... ('monkey', 'mammal', np.nan)], - ... columns=('name', 'class', 'max_speed')) - >>> df - name class max_speed - 0 falcon bird 389.0 - 1 parrot bird 24.0 - 2 lion mammal 80.5 - 3 monkey mammal NaN - - >>> df.pop('class') - 0 bird - 1 bird - 2 mammal - 3 mammal - Name: class, dtype: object - - >>> df - name max_speed - 0 falcon 389.0 - 1 parrot 24.0 - 2 lion 80.5 - 3 monkey NaN - """ - return super().pop(item=item) - - def _replace_columnwise( - self, mapping: dict[Hashable, tuple[Any, Any]], inplace: bool, regex - ): - """ - Dispatch to Series.replace column-wise. - - Parameters - ---------- - mapping : dict - of the form {col: (target, value)} - inplace : bool - regex : bool or same types as `to_replace` in DataFrame.replace - - Returns - ------- - DataFrame or None - """ - # Operate column-wise - res = self if inplace else self.copy(deep=None) - ax = self.columns - - for i, ax_value in enumerate(ax): - if ax_value in mapping: - ser = self.iloc[:, i] - - target, value = mapping[ax_value] - newobj = ser.replace(target, value, regex=regex) - - res._iset_item(i, newobj, inplace=inplace) - - if inplace: - return - return res.__finalize__(self) - - @doc(NDFrame.shift, klass=_shared_doc_kwargs["klass"]) - def shift( - self, - periods: int | Sequence[int] = 1, - freq: Frequency | None = None, - axis: Axis = 0, - fill_value: Hashable = lib.no_default, - suffix: str | None = None, - ) -> DataFrame: - if freq is not None and fill_value is not lib.no_default: - # GH#53832 - warnings.warn( - "Passing a 'freq' together with a 'fill_value' silently ignores " - "the fill_value and is deprecated. This will raise in a future " - "version.", - FutureWarning, - stacklevel=find_stack_level(), - ) - fill_value = lib.no_default - - axis = self._get_axis_number(axis) - - if is_list_like(periods): - periods = cast(Sequence, periods) - if axis == 1: - raise ValueError( - "If `periods` contains multiple shifts, `axis` cannot be 1." - ) - if len(periods) == 0: - raise ValueError("If `periods` is an iterable, it cannot be empty.") - from pandas.core.reshape.concat import concat - - shifted_dataframes = [] - for period in periods: - if not is_integer(period): - raise TypeError( - f"Periods must be integer, but {period} is {type(period)}." 
- ) - period = cast(int, period) - shifted_dataframes.append( - super() - .shift(periods=period, freq=freq, axis=axis, fill_value=fill_value) - .add_suffix(f"{suffix}_{period}" if suffix else f"_{period}") - ) - return concat(shifted_dataframes, axis=1) - elif suffix: - raise ValueError("Cannot specify `suffix` if `periods` is an int.") - periods = cast(int, periods) - - ncols = len(self.columns) - arrays = self._mgr.arrays - if axis == 1 and periods != 0 and ncols > 0 and freq is None: - if fill_value is lib.no_default: - # We will infer fill_value to match the closest column - - # Use a column that we know is valid for our column's dtype GH#38434 - label = self.columns[0] - - if periods > 0: - result = self.iloc[:, :-periods] - for col in range(min(ncols, abs(periods))): - # TODO(EA2D): doing this in a loop unnecessary with 2D EAs - # Define filler inside loop so we get a copy - filler = self.iloc[:, 0].shift(len(self)) - result.insert(0, label, filler, allow_duplicates=True) - else: - result = self.iloc[:, -periods:] - for col in range(min(ncols, abs(periods))): - # Define filler inside loop so we get a copy - filler = self.iloc[:, -1].shift(len(self)) - result.insert( - len(result.columns), label, filler, allow_duplicates=True - ) - - result.columns = self.columns.copy() - return result - elif len(arrays) > 1 or ( - # If we only have one block and we know that we can't - # keep the same dtype (i.e. the _can_hold_element check) - # then we can go through the reindex_indexer path - # (and avoid casting logic in the Block method). - not can_hold_element(arrays[0], fill_value) - ): - # GH#35488 we need to watch out for multi-block cases - # We only get here with fill_value not-lib.no_default - nper = abs(periods) - nper = min(nper, ncols) - if periods > 0: - indexer = np.array( - [-1] * nper + list(range(ncols - periods)), dtype=np.intp - ) - else: - indexer = np.array( - list(range(nper, ncols)) + [-1] * nper, dtype=np.intp - ) - mgr = self._mgr.reindex_indexer( - self.columns, - indexer, - axis=0, - fill_value=fill_value, - allow_dups=True, - ) - res_df = self._constructor_from_mgr(mgr, axes=mgr.axes) - return res_df.__finalize__(self, method="shift") - else: - return self.T.shift(periods=periods, fill_value=fill_value).T - - return super().shift( - periods=periods, freq=freq, axis=axis, fill_value=fill_value - ) - - @overload - def set_index( - self, - keys, - *, - drop: bool = ..., - append: bool = ..., - inplace: Literal[False] = ..., - verify_integrity: bool = ..., - ) -> DataFrame: - ... - - @overload - def set_index( - self, - keys, - *, - drop: bool = ..., - append: bool = ..., - inplace: Literal[True], - verify_integrity: bool = ..., - ) -> None: - ... - - def set_index( - self, - keys, - *, - drop: bool = True, - append: bool = False, - inplace: bool = False, - verify_integrity: bool = False, - ) -> DataFrame | None: - """ - Set the DataFrame index using existing columns. - - Set the DataFrame index (row labels) using one or more existing - columns or arrays (of the correct length). The index can replace the - existing index or expand on it. - - Parameters - ---------- - keys : label or array-like or list of labels/arrays - This parameter can be either a single column key, a single array of - the same length as the calling DataFrame, or a list containing an - arbitrary combination of column keys and arrays. Here, "array" - encompasses :class:`Series`, :class:`Index`, ``np.ndarray``, and - instances of :class:`~collections.abc.Iterator`. 
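The ``shift`` implementation above special-cases a list-like ``periods``: one shifted copy of the frame is built per period and the copies are concatenated column-wise. A rough sketch, assuming a pandas version that accepts list-valued ``periods`` (the data is invented):

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4]})

# One shifted copy of the frame per period, concatenated along axis=1;
# each column name gets "_<period>" appended (or "<suffix>_<period>"
# when suffix= is passed).
out = df.shift(periods=[1, 2])
print(out.columns.tolist())  # ['x_1', 'x_2']
print(out)
```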
- drop : bool, default True - Delete columns to be used as the new index. - append : bool, default False - Whether to append columns to existing index. - inplace : bool, default False - Whether to modify the DataFrame rather than creating a new one. - verify_integrity : bool, default False - Check the new index for duplicates. Otherwise defer the check until - necessary. Setting to False will improve the performance of this - method. - - Returns - ------- - DataFrame or None - Changed row labels or None if ``inplace=True``. - - See Also - -------- - DataFrame.reset_index : Opposite of set_index. - DataFrame.reindex : Change to new indices or expand indices. - DataFrame.reindex_like : Change to same indices as other DataFrame. - - Examples - -------- - >>> df = pd.DataFrame({'month': [1, 4, 7, 10], - ... 'year': [2012, 2014, 2013, 2014], - ... 'sale': [55, 40, 84, 31]}) - >>> df - month year sale - 0 1 2012 55 - 1 4 2014 40 - 2 7 2013 84 - 3 10 2014 31 - - Set the index to become the 'month' column: - - >>> df.set_index('month') - year sale - month - 1 2012 55 - 4 2014 40 - 7 2013 84 - 10 2014 31 - - Create a MultiIndex using columns 'year' and 'month': - - >>> df.set_index(['year', 'month']) - sale - year month - 2012 1 55 - 2014 4 40 - 2013 7 84 - 2014 10 31 - - Create a MultiIndex using an Index and a column: - - >>> df.set_index([pd.Index([1, 2, 3, 4]), 'year']) - month sale - year - 1 2012 1 55 - 2 2014 4 40 - 3 2013 7 84 - 4 2014 10 31 - - Create a MultiIndex using two Series: - - >>> s = pd.Series([1, 2, 3, 4]) - >>> df.set_index([s, s**2]) - month year sale - 1 1 1 2012 55 - 2 4 4 2014 40 - 3 9 7 2013 84 - 4 16 10 2014 31 - """ - inplace = validate_bool_kwarg(inplace, "inplace") - self._check_inplace_and_allows_duplicate_labels(inplace) - if not isinstance(keys, list): - keys = [keys] - - err_msg = ( - 'The parameter "keys" may be a column key, one-dimensional ' - "array, or a list containing only valid column keys and " - "one-dimensional arrays." - ) - - missing: list[Hashable] = [] - for col in keys: - if isinstance(col, (Index, Series, np.ndarray, list, abc.Iterator)): - # arrays are fine as long as they are one-dimensional - # iterators get converted to list below - if getattr(col, "ndim", 1) != 1: - raise ValueError(err_msg) - else: - # everything else gets tried as a key; see GH 24969 - try: - found = col in self.columns - except TypeError as err: - raise TypeError( - f"{err_msg}. 
Received column of type {type(col)}" - ) from err - else: - if not found: - missing.append(col) - - if missing: - raise KeyError(f"None of {missing} are in the columns") - - if inplace: - frame = self - else: - # GH 49473 Use "lazy copy" with Copy-on-Write - frame = self.copy(deep=None) - - arrays: list[Index] = [] - names: list[Hashable] = [] - if append: - names = list(self.index.names) - if isinstance(self.index, MultiIndex): - arrays.extend( - self.index._get_level_values(i) for i in range(self.index.nlevels) - ) - else: - arrays.append(self.index) - - to_remove: list[Hashable] = [] - for col in keys: - if isinstance(col, MultiIndex): - arrays.extend(col._get_level_values(n) for n in range(col.nlevels)) - names.extend(col.names) - elif isinstance(col, (Index, Series)): - # if Index then not MultiIndex (treated above) - - # error: Argument 1 to "append" of "list" has incompatible type - # "Union[Index, Series]"; expected "Index" - arrays.append(col) # type: ignore[arg-type] - names.append(col.name) - elif isinstance(col, (list, np.ndarray)): - # error: Argument 1 to "append" of "list" has incompatible type - # "Union[List[Any], ndarray]"; expected "Index" - arrays.append(col) # type: ignore[arg-type] - names.append(None) - elif isinstance(col, abc.Iterator): - # error: Argument 1 to "append" of "list" has incompatible type - # "List[Any]"; expected "Index" - arrays.append(list(col)) # type: ignore[arg-type] - names.append(None) - # from here, col can only be a column label - else: - arrays.append(frame[col]) - names.append(col) - if drop: - to_remove.append(col) - - if len(arrays[-1]) != len(self): - # check newest element against length of calling frame, since - # ensure_index_from_sequences would not raise for append=False. - raise ValueError( - f"Length mismatch: Expected {len(self)} rows, " - f"received array of length {len(arrays[-1])}" - ) - - index = ensure_index_from_sequences(arrays, names) - - if verify_integrity and not index.is_unique: - duplicates = index[index.duplicated()].unique() - raise ValueError(f"Index has duplicate keys: {duplicates}") - - # use set to handle duplicate column names gracefully in case of drop - for c in set(to_remove): - del frame[c] - - # clear up memory usage - index._cleanup() - - frame.index = index - - if not inplace: - return frame - return None - - @overload - def reset_index( - self, - level: IndexLabel = ..., - *, - drop: bool = ..., - inplace: Literal[False] = ..., - col_level: Hashable = ..., - col_fill: Hashable = ..., - allow_duplicates: bool | lib.NoDefault = ..., - names: Hashable | Sequence[Hashable] | None = None, - ) -> DataFrame: - ... - - @overload - def reset_index( - self, - level: IndexLabel = ..., - *, - drop: bool = ..., - inplace: Literal[True], - col_level: Hashable = ..., - col_fill: Hashable = ..., - allow_duplicates: bool | lib.NoDefault = ..., - names: Hashable | Sequence[Hashable] | None = None, - ) -> None: - ... - - @overload - def reset_index( - self, - level: IndexLabel = ..., - *, - drop: bool = ..., - inplace: bool = ..., - col_level: Hashable = ..., - col_fill: Hashable = ..., - allow_duplicates: bool | lib.NoDefault = ..., - names: Hashable | Sequence[Hashable] | None = None, - ) -> DataFrame | None: - ... 
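A brief invented example of the ``set_index`` key validation shown above: column keys and one-dimensional arrays can be mixed, and an array whose length does not match the frame is rejected:

```python
import pandas as pd

df = pd.DataFrame({"year": [2012, 2013], "sale": [55, 84]})

# Mix a column key with an external Index; drop=False keeps 'year'
# as a regular column as well as an index level.
out = df.set_index(["year", pd.Index(["Q1", "Q3"], name="quarter")], drop=False)
print(out.index.names)       # ['year', 'quarter']
print(out.columns.tolist())  # ['year', 'sale']

# An array of the wrong length triggers the length-mismatch ValueError
# raised in the implementation above.
try:
    df.set_index([pd.Index(["only-one"])])
except ValueError as exc:
    print("raised:", exc)
```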
- - def reset_index( - self, - level: IndexLabel | None = None, - *, - drop: bool = False, - inplace: bool = False, - col_level: Hashable = 0, - col_fill: Hashable = "", - allow_duplicates: bool | lib.NoDefault = lib.no_default, - names: Hashable | Sequence[Hashable] | None = None, - ) -> DataFrame | None: - """ - Reset the index, or a level of it. - - Reset the index of the DataFrame, and use the default one instead. - If the DataFrame has a MultiIndex, this method can remove one or more - levels. - - Parameters - ---------- - level : int, str, tuple, or list, default None - Only remove the given levels from the index. Removes all levels by - default. - drop : bool, default False - Do not try to insert index into dataframe columns. This resets - the index to the default integer index. - inplace : bool, default False - Whether to modify the DataFrame rather than creating a new one. - col_level : int or str, default 0 - If the columns have multiple levels, determines which level the - labels are inserted into. By default it is inserted into the first - level. - col_fill : object, default '' - If the columns have multiple levels, determines how the other - levels are named. If None then the index name is repeated. - allow_duplicates : bool, optional, default lib.no_default - Allow duplicate column labels to be created. - - .. versionadded:: 1.5.0 - - names : int, str or 1-dimensional list, default None - Using the given string, rename the DataFrame column which contains the - index data. If the DataFrame has a MultiIndex, this has to be a list or - tuple with length equal to the number of levels. - - .. versionadded:: 1.5.0 - - Returns - ------- - DataFrame or None - DataFrame with the new index or None if ``inplace=True``. - - See Also - -------- - DataFrame.set_index : Opposite of reset_index. - DataFrame.reindex : Change to new indices or expand indices. - DataFrame.reindex_like : Change to same indices as other DataFrame. - - Examples - -------- - >>> df = pd.DataFrame([('bird', 389.0), - ... ('bird', 24.0), - ... ('mammal', 80.5), - ... ('mammal', np.nan)], - ... index=['falcon', 'parrot', 'lion', 'monkey'], - ... columns=('class', 'max_speed')) - >>> df - class max_speed - falcon bird 389.0 - parrot bird 24.0 - lion mammal 80.5 - monkey mammal NaN - - When we reset the index, the old index is added as a column, and a - new sequential index is used: - - >>> df.reset_index() - index class max_speed - 0 falcon bird 389.0 - 1 parrot bird 24.0 - 2 lion mammal 80.5 - 3 monkey mammal NaN - - We can use the `drop` parameter to avoid the old index being added as - a column: - - >>> df.reset_index(drop=True) - class max_speed - 0 bird 389.0 - 1 bird 24.0 - 2 mammal 80.5 - 3 mammal NaN - - You can also use `reset_index` with `MultiIndex`. - - >>> index = pd.MultiIndex.from_tuples([('bird', 'falcon'), - ... ('bird', 'parrot'), - ... ('mammal', 'lion'), - ... ('mammal', 'monkey')], - ... names=['class', 'name']) - >>> columns = pd.MultiIndex.from_tuples([('speed', 'max'), - ... ('species', 'type')]) - >>> df = pd.DataFrame([(389.0, 'fly'), - ... (24.0, 'fly'), - ... (80.5, 'run'), - ... (np.nan, 'jump')], - ... index=index, - ... 
columns=columns) - >>> df - speed species - max type - class name - bird falcon 389.0 fly - parrot 24.0 fly - mammal lion 80.5 run - monkey NaN jump - - Using the `names` parameter, choose a name for the index column: - - >>> df.reset_index(names=['classes', 'names']) - classes names speed species - max type - 0 bird falcon 389.0 fly - 1 bird parrot 24.0 fly - 2 mammal lion 80.5 run - 3 mammal monkey NaN jump - - If the index has multiple levels, we can reset a subset of them: - - >>> df.reset_index(level='class') - class speed species - max type - name - falcon bird 389.0 fly - parrot bird 24.0 fly - lion mammal 80.5 run - monkey mammal NaN jump - - If we are not dropping the index, by default, it is placed in the top - level. We can place it in another level: - - >>> df.reset_index(level='class', col_level=1) - speed species - class max type - name - falcon bird 389.0 fly - parrot bird 24.0 fly - lion mammal 80.5 run - monkey mammal NaN jump - - When the index is inserted under another level, we can specify under - which one with the parameter `col_fill`: - - >>> df.reset_index(level='class', col_level=1, col_fill='species') - species speed species - class max type - name - falcon bird 389.0 fly - parrot bird 24.0 fly - lion mammal 80.5 run - monkey mammal NaN jump - - If we specify a nonexistent level for `col_fill`, it is created: - - >>> df.reset_index(level='class', col_level=1, col_fill='genus') - genus speed species - class max type - name - falcon bird 389.0 fly - parrot bird 24.0 fly - lion mammal 80.5 run - monkey mammal NaN jump - """ - inplace = validate_bool_kwarg(inplace, "inplace") - self._check_inplace_and_allows_duplicate_labels(inplace) - if inplace: - new_obj = self - else: - new_obj = self.copy(deep=None) - if allow_duplicates is not lib.no_default: - allow_duplicates = validate_bool_kwarg(allow_duplicates, "allow_duplicates") - - new_index = default_index(len(new_obj)) - if level is not None: - if not isinstance(level, (tuple, list)): - level = [level] - level = [self.index._get_level_number(lev) for lev in level] - if len(level) < self.index.nlevels: - new_index = self.index.droplevel(level) - - if not drop: - to_insert: Iterable[tuple[Any, Any | None]] - - default = "index" if "index" not in self else "level_0" - names = self.index._get_default_index_names(names, default) - - if isinstance(self.index, MultiIndex): - to_insert = zip(self.index.levels, self.index.codes) - else: - to_insert = ((self.index, None),) - - multi_col = isinstance(self.columns, MultiIndex) - for i, (lev, lab) in reversed(list(enumerate(to_insert))): - if level is not None and i not in level: - continue - name = names[i] - if multi_col: - col_name = list(name) if isinstance(name, tuple) else [name] - if col_fill is None: - if len(col_name) not in (1, self.columns.nlevels): - raise ValueError( - "col_fill=None is incompatible " - f"with incomplete column name {name}" - ) - col_fill = col_name[0] - - lev_num = self.columns._get_level_number(col_level) - name_lst = [col_fill] * lev_num + col_name - missing = self.columns.nlevels - len(name_lst) - name_lst += [col_fill] * missing - name = tuple(name_lst) - - # to ndarray and maybe infer different dtype - level_values = lev._values - if level_values.dtype == np.object_: - level_values = lib.maybe_convert_objects(level_values) - - if lab is not None: - # if we have the codes, extract the values with a mask - level_values = algorithms.take( - level_values, lab, allow_fill=True, fill_value=lev._na_value - ) - - new_obj.insert( - 0, - name, - 
level_values, - allow_duplicates=allow_duplicates, - ) - - new_obj.index = new_index - if not inplace: - return new_obj - - return None - - # ---------------------------------------------------------------------- - # Reindex-based selection methods - - @doc(NDFrame.isna, klass=_shared_doc_kwargs["klass"]) - def isna(self) -> DataFrame: - res_mgr = self._mgr.isna(func=isna) - result = self._constructor_from_mgr(res_mgr, axes=res_mgr.axes) - return result.__finalize__(self, method="isna") - - @doc(NDFrame.isna, klass=_shared_doc_kwargs["klass"]) - def isnull(self) -> DataFrame: - """ - DataFrame.isnull is an alias for DataFrame.isna. - """ - return self.isna() - - @doc(NDFrame.notna, klass=_shared_doc_kwargs["klass"]) - def notna(self) -> DataFrame: - return ~self.isna() - - @doc(NDFrame.notna, klass=_shared_doc_kwargs["klass"]) - def notnull(self) -> DataFrame: - """ - DataFrame.notnull is an alias for DataFrame.notna. - """ - return ~self.isna() - - @overload - def dropna( - self, - *, - axis: Axis = ..., - how: AnyAll | lib.NoDefault = ..., - thresh: int | lib.NoDefault = ..., - subset: IndexLabel = ..., - inplace: Literal[False] = ..., - ignore_index: bool = ..., - ) -> DataFrame: - ... - - @overload - def dropna( - self, - *, - axis: Axis = ..., - how: AnyAll | lib.NoDefault = ..., - thresh: int | lib.NoDefault = ..., - subset: IndexLabel = ..., - inplace: Literal[True], - ignore_index: bool = ..., - ) -> None: - ... - - def dropna( - self, - *, - axis: Axis = 0, - how: AnyAll | lib.NoDefault = lib.no_default, - thresh: int | lib.NoDefault = lib.no_default, - subset: IndexLabel | None = None, - inplace: bool = False, - ignore_index: bool = False, - ) -> DataFrame | None: - """ - Remove missing values. - - See the :ref:`User Guide ` for more on which values are - considered missing, and how to work with missing data. - - Parameters - ---------- - axis : {0 or 'index', 1 or 'columns'}, default 0 - Determine if rows or columns which contain missing values are - removed. - - * 0, or 'index' : Drop rows which contain missing values. - * 1, or 'columns' : Drop columns which contain missing value. - - Only a single axis is allowed. - - how : {'any', 'all'}, default 'any' - Determine if row or column is removed from DataFrame, when we have - at least one NA or all NA. - - * 'any' : If any NA values are present, drop that row or column. - * 'all' : If all values are NA, drop that row or column. - - thresh : int, optional - Require that many non-NA values. Cannot be combined with how. - subset : column label or sequence of labels, optional - Labels along other axis to consider, e.g. if you are dropping rows - these would be a list of columns to include. - inplace : bool, default False - Whether to modify the DataFrame rather than creating a new one. - ignore_index : bool, default ``False`` - If ``True``, the resulting axis will be labeled 0, 1, …, n - 1. - - .. versionadded:: 2.0.0 - - Returns - ------- - DataFrame or None - DataFrame with NA entries dropped from it or None if ``inplace=True``. - - See Also - -------- - DataFrame.isna: Indicate missing values. - DataFrame.notna : Indicate existing (non-missing) values. - DataFrame.fillna : Replace missing values. - Series.dropna : Drop missing values. - Index.dropna : Drop missing indices. - - Examples - -------- - >>> df = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'], - ... "toy": [np.nan, 'Batmobile', 'Bullwhip'], - ... "born": [pd.NaT, pd.Timestamp("1940-04-25"), - ... 
pd.NaT]}) - >>> df - name toy born - 0 Alfred NaN NaT - 1 Batman Batmobile 1940-04-25 - 2 Catwoman Bullwhip NaT - - Drop the rows where at least one element is missing. - - >>> df.dropna() - name toy born - 1 Batman Batmobile 1940-04-25 - - Drop the columns where at least one element is missing. - - >>> df.dropna(axis='columns') - name - 0 Alfred - 1 Batman - 2 Catwoman - - Drop the rows where all elements are missing. - - >>> df.dropna(how='all') - name toy born - 0 Alfred NaN NaT - 1 Batman Batmobile 1940-04-25 - 2 Catwoman Bullwhip NaT - - Keep only the rows with at least 2 non-NA values. - - >>> df.dropna(thresh=2) - name toy born - 1 Batman Batmobile 1940-04-25 - 2 Catwoman Bullwhip NaT - - Define in which columns to look for missing values. - - >>> df.dropna(subset=['name', 'toy']) - name toy born - 1 Batman Batmobile 1940-04-25 - 2 Catwoman Bullwhip NaT - """ - if (how is not lib.no_default) and (thresh is not lib.no_default): - raise TypeError( - "You cannot set both the how and thresh arguments at the same time." - ) - - if how is lib.no_default: - how = "any" - - inplace = validate_bool_kwarg(inplace, "inplace") - if isinstance(axis, (tuple, list)): - # GH20987 - raise TypeError("supplying multiple axes to axis is no longer supported.") - - axis = self._get_axis_number(axis) - agg_axis = 1 - axis - - agg_obj = self - if subset is not None: - # subset needs to be list - if not is_list_like(subset): - subset = [subset] - ax = self._get_axis(agg_axis) - indices = ax.get_indexer_for(subset) - check = indices == -1 - if check.any(): - raise KeyError(np.array(subset)[check].tolist()) - agg_obj = self.take(indices, axis=agg_axis) - - if thresh is not lib.no_default: - count = agg_obj.count(axis=agg_axis) - mask = count >= thresh - elif how == "any": - # faster equivalent to 'agg_obj.count(agg_axis) == self.shape[agg_axis]' - mask = notna(agg_obj).all(axis=agg_axis, bool_only=False) - elif how == "all": - # faster equivalent to 'agg_obj.count(agg_axis) > 0' - mask = notna(agg_obj).any(axis=agg_axis, bool_only=False) - else: - raise ValueError(f"invalid how option: {how}") - - if np.all(mask): - result = self.copy(deep=None) - else: - result = self.loc(axis=axis)[mask] - - if ignore_index: - result.index = default_index(len(result)) - - if not inplace: - return result - self._update_inplace(result) - return None - - @overload - def drop_duplicates( - self, - subset: Hashable | Sequence[Hashable] | None = ..., - *, - keep: DropKeep = ..., - inplace: Literal[True], - ignore_index: bool = ..., - ) -> None: - ... - - @overload - def drop_duplicates( - self, - subset: Hashable | Sequence[Hashable] | None = ..., - *, - keep: DropKeep = ..., - inplace: Literal[False] = ..., - ignore_index: bool = ..., - ) -> DataFrame: - ... - - @overload - def drop_duplicates( - self, - subset: Hashable | Sequence[Hashable] | None = ..., - *, - keep: DropKeep = ..., - inplace: bool = ..., - ignore_index: bool = ..., - ) -> DataFrame | None: - ... - - def drop_duplicates( - self, - subset: Hashable | Sequence[Hashable] | None = None, - *, - keep: DropKeep = "first", - inplace: bool = False, - ignore_index: bool = False, - ) -> DataFrame | None: - """ - Return DataFrame with duplicate rows removed. - - Considering certain columns is optional. Indexes, including time indexes - are ignored. - - Parameters - ---------- - subset : column label or sequence of labels, optional - Only consider certain columns for identifying duplicates, by - default use all of the columns. 
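As an illustration of the ``dropna`` mask construction above (data invented): ``thresh`` counts non-NA values per row and cannot be combined with ``how``, while ``subset`` restricts which columns are inspected:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "name": ["Alfred", "Batman", None],
        "toy": [np.nan, "Batmobile", "Bullwhip"],
        "born": [pd.NaT, pd.Timestamp("1940-04-25"), pd.NaT],
    }
)

# thresh=2 keeps rows with at least two non-NA values; combining it with
# how= raises TypeError in the implementation above.
print(df.dropna(thresh=2))

# subset limits the columns inspected for missing values; here only
# 'name' and 'toy' are checked.
print(df.dropna(subset=["name", "toy"]))

# ignore_index=True relabels the surviving rows 0, 1, ..., n - 1.
print(df.dropna(subset=["toy"], ignore_index=True).index.tolist())  # [0, 1]
```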
- keep : {'first', 'last', ``False``}, default 'first' - Determines which duplicates (if any) to keep. - - - 'first' : Drop duplicates except for the first occurrence. - - 'last' : Drop duplicates except for the last occurrence. - - ``False`` : Drop all duplicates. - - inplace : bool, default ``False`` - Whether to modify the DataFrame rather than creating a new one. - ignore_index : bool, default ``False`` - If ``True``, the resulting axis will be labeled 0, 1, …, n - 1. - - Returns - ------- - DataFrame or None - DataFrame with duplicates removed or None if ``inplace=True``. - - See Also - -------- - DataFrame.value_counts: Count unique combinations of columns. - - Examples - -------- - Consider dataset containing ramen rating. - - >>> df = pd.DataFrame({ - ... 'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'], - ... 'style': ['cup', 'cup', 'cup', 'pack', 'pack'], - ... 'rating': [4, 4, 3.5, 15, 5] - ... }) - >>> df - brand style rating - 0 Yum Yum cup 4.0 - 1 Yum Yum cup 4.0 - 2 Indomie cup 3.5 - 3 Indomie pack 15.0 - 4 Indomie pack 5.0 - - By default, it removes duplicate rows based on all columns. - - >>> df.drop_duplicates() - brand style rating - 0 Yum Yum cup 4.0 - 2 Indomie cup 3.5 - 3 Indomie pack 15.0 - 4 Indomie pack 5.0 - - To remove duplicates on specific column(s), use ``subset``. - - >>> df.drop_duplicates(subset=['brand']) - brand style rating - 0 Yum Yum cup 4.0 - 2 Indomie cup 3.5 - - To remove duplicates and keep last occurrences, use ``keep``. - - >>> df.drop_duplicates(subset=['brand', 'style'], keep='last') - brand style rating - 1 Yum Yum cup 4.0 - 2 Indomie cup 3.5 - 4 Indomie pack 5.0 - """ - if self.empty: - return self.copy(deep=None) - - inplace = validate_bool_kwarg(inplace, "inplace") - ignore_index = validate_bool_kwarg(ignore_index, "ignore_index") - - result = self[-self.duplicated(subset, keep=keep)] - if ignore_index: - result.index = default_index(len(result)) - - if inplace: - self._update_inplace(result) - return None - else: - return result - - def duplicated( - self, - subset: Hashable | Sequence[Hashable] | None = None, - keep: DropKeep = "first", - ) -> Series: - """ - Return boolean Series denoting duplicate rows. - - Considering certain columns is optional. - - Parameters - ---------- - subset : column label or sequence of labels, optional - Only consider certain columns for identifying duplicates, by - default use all of the columns. - keep : {'first', 'last', False}, default 'first' - Determines which duplicates (if any) to mark. - - - ``first`` : Mark duplicates as ``True`` except for the first occurrence. - - ``last`` : Mark duplicates as ``True`` except for the last occurrence. - - False : Mark all duplicates as ``True``. - - Returns - ------- - Series - Boolean series for each duplicated rows. - - See Also - -------- - Index.duplicated : Equivalent method on index. - Series.duplicated : Equivalent method on Series. - Series.drop_duplicates : Remove duplicate values from Series. - DataFrame.drop_duplicates : Remove duplicate values from DataFrame. - - Examples - -------- - Consider dataset containing ramen rating. - - >>> df = pd.DataFrame({ - ... 'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'], - ... 'style': ['cup', 'cup', 'cup', 'pack', 'pack'], - ... 'rating': [4, 4, 3.5, 15, 5] - ... 
}) - >>> df - brand style rating - 0 Yum Yum cup 4.0 - 1 Yum Yum cup 4.0 - 2 Indomie cup 3.5 - 3 Indomie pack 15.0 - 4 Indomie pack 5.0 - - By default, for each set of duplicated values, the first occurrence - is set on False and all others on True. - - >>> df.duplicated() - 0 False - 1 True - 2 False - 3 False - 4 False - dtype: bool - - By using 'last', the last occurrence of each set of duplicated values - is set on False and all others on True. - - >>> df.duplicated(keep='last') - 0 True - 1 False - 2 False - 3 False - 4 False - dtype: bool - - By setting ``keep`` on False, all duplicates are True. - - >>> df.duplicated(keep=False) - 0 True - 1 True - 2 False - 3 False - 4 False - dtype: bool - - To find duplicates on specific column(s), use ``subset``. - - >>> df.duplicated(subset=['brand']) - 0 False - 1 True - 2 False - 3 True - 4 True - dtype: bool - """ - - if self.empty: - return self._constructor_sliced(dtype=bool) - - def f(vals) -> tuple[np.ndarray, int]: - labels, shape = algorithms.factorize(vals, size_hint=len(self)) - return labels.astype("i8", copy=False), len(shape) - - if subset is None: - # https://github.com/pandas-dev/pandas/issues/28770 - # Incompatible types in assignment (expression has type "Index", variable - # has type "Sequence[Any]") - subset = self.columns # type: ignore[assignment] - elif ( - not np.iterable(subset) - or isinstance(subset, str) - or isinstance(subset, tuple) - and subset in self.columns - ): - subset = (subset,) - - # needed for mypy since can't narrow types using np.iterable - subset = cast(Sequence, subset) - - # Verify all columns in subset exist in the queried dataframe - # Otherwise, raise a KeyError, same as if you try to __getitem__ with a - # key that doesn't exist. - diff = set(subset) - set(self.columns) - if diff: - raise KeyError(Index(diff)) - - if len(subset) == 1 and self.columns.is_unique: - # GH#45236 This is faster than get_group_index below - result = self[subset[0]].duplicated(keep) - result.name = None - else: - vals = (col.values for name, col in self.items() if name in subset) - labels, shape = map(list, zip(*map(f, vals))) - - ids = get_group_index( - labels, - # error: Argument 1 to "tuple" has incompatible type "List[_T]"; - # expected "Iterable[int]" - tuple(shape), # type: ignore[arg-type] - sort=False, - xnull=False, - ) - result = self._constructor_sliced(duplicated(ids, keep), index=self.index) - return result.__finalize__(self, method="duplicated") - - # ---------------------------------------------------------------------- - # Sorting - # error: Signature of "sort_values" incompatible with supertype "NDFrame" - @overload # type: ignore[override] - def sort_values( - self, - by: IndexLabel, - *, - axis: Axis = ..., - ascending=..., - inplace: Literal[False] = ..., - kind: SortKind = ..., - na_position: NaPosition = ..., - ignore_index: bool = ..., - key: ValueKeyFunc = ..., - ) -> DataFrame: - ... - - @overload - def sort_values( - self, - by: IndexLabel, - *, - axis: Axis = ..., - ascending=..., - inplace: Literal[True], - kind: SortKind = ..., - na_position: str = ..., - ignore_index: bool = ..., - key: ValueKeyFunc = ..., - ) -> None: - ... - - def sort_values( - self, - by: IndexLabel, - *, - axis: Axis = 0, - ascending: bool | list[bool] | tuple[bool, ...] = True, - inplace: bool = False, - kind: SortKind = "quicksort", - na_position: str = "last", - ignore_index: bool = False, - key: ValueKeyFunc | None = None, - ) -> DataFrame | None: - """ - Sort by the values along either axis. 
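A short sketch of the ``duplicated`` paths described above: a single, uniquely named subset column takes the ``Series.duplicated`` fast path, while multiple columns are factorized and grouped. The frame below is illustrative only:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "brand": ["Yum Yum", "Yum Yum", "Indomie", "Indomie"],
        "style": ["cup", "cup", "cup", "pack"],
    }
)

# Single-column subset: delegates to Series.duplicated (fast path).
print(df.duplicated(subset=["brand"]).tolist())           # [False, True, False, True]

# Multi-column subset: rows are grouped on the combination of values.
print(df.duplicated(subset=["brand", "style"]).tolist())  # [False, True, False, False]

# keep=False marks every member of a duplicated group.
print(df.duplicated(subset=["brand", "style"], keep=False).tolist())  # [True, True, False, False]

# A subset label that is not a column raises KeyError, mirroring __getitem__.
try:
    df.duplicated(subset=["rating"])
except KeyError as exc:
    print("raised:", exc)
```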
- - Parameters - ---------- - by : str or list of str - Name or list of names to sort by. - - - if `axis` is 0 or `'index'` then `by` may contain index - levels and/or column labels. - - if `axis` is 1 or `'columns'` then `by` may contain column - levels and/or index labels. - axis : "{0 or 'index', 1 or 'columns'}", default 0 - Axis to be sorted. - ascending : bool or list of bool, default True - Sort ascending vs. descending. Specify list for multiple sort - orders. If this is a list of bools, must match the length of - the by. - inplace : bool, default False - If True, perform operation in-place. - kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, default 'quicksort' - Choice of sorting algorithm. See also :func:`numpy.sort` for more - information. `mergesort` and `stable` are the only stable algorithms. For - DataFrames, this option is only applied when sorting on a single - column or label. - na_position : {'first', 'last'}, default 'last' - Puts NaNs at the beginning if `first`; `last` puts NaNs at the - end. - ignore_index : bool, default False - If True, the resulting axis will be labeled 0, 1, …, n - 1. - key : callable, optional - Apply the key function to the values - before sorting. This is similar to the `key` argument in the - builtin :meth:`sorted` function, with the notable difference that - this `key` function should be *vectorized*. It should expect a - ``Series`` and return a Series with the same shape as the input. - It will be applied to each column in `by` independently. - - Returns - ------- - DataFrame or None - DataFrame with sorted values or None if ``inplace=True``. - - See Also - -------- - DataFrame.sort_index : Sort a DataFrame by the index. - Series.sort_values : Similar method for a Series. - - Examples - -------- - >>> df = pd.DataFrame({ - ... 'col1': ['A', 'A', 'B', np.nan, 'D', 'C'], - ... 'col2': [2, 1, 9, 8, 7, 4], - ... 'col3': [0, 1, 9, 4, 2, 3], - ... 'col4': ['a', 'B', 'c', 'D', 'e', 'F'] - ... }) - >>> df - col1 col2 col3 col4 - 0 A 2 0 a - 1 A 1 1 B - 2 B 9 9 c - 3 NaN 8 4 D - 4 D 7 2 e - 5 C 4 3 F - - Sort by col1 - - >>> df.sort_values(by=['col1']) - col1 col2 col3 col4 - 0 A 2 0 a - 1 A 1 1 B - 2 B 9 9 c - 5 C 4 3 F - 4 D 7 2 e - 3 NaN 8 4 D - - Sort by multiple columns - - >>> df.sort_values(by=['col1', 'col2']) - col1 col2 col3 col4 - 1 A 1 1 B - 0 A 2 0 a - 2 B 9 9 c - 5 C 4 3 F - 4 D 7 2 e - 3 NaN 8 4 D - - Sort Descending - - >>> df.sort_values(by='col1', ascending=False) - col1 col2 col3 col4 - 4 D 7 2 e - 5 C 4 3 F - 2 B 9 9 c - 0 A 2 0 a - 1 A 1 1 B - 3 NaN 8 4 D - - Putting NAs first - - >>> df.sort_values(by='col1', ascending=False, na_position='first') - col1 col2 col3 col4 - 3 NaN 8 4 D - 4 D 7 2 e - 5 C 4 3 F - 2 B 9 9 c - 0 A 2 0 a - 1 A 1 1 B - - Sorting with a key function - - >>> df.sort_values(by='col4', key=lambda col: col.str.lower()) - col1 col2 col3 col4 - 0 A 2 0 a - 1 A 1 1 B - 2 B 9 9 c - 3 NaN 8 4 D - 4 D 7 2 e - 5 C 4 3 F - - Natural sort with the key argument, - using the `natsort ` package. - - >>> df = pd.DataFrame({ - ... "time": ['0hr', '128hr', '72hr', '48hr', '96hr'], - ... "value": [10, 20, 30, 40, 50] - ... }) - >>> df - time value - 0 0hr 10 - 1 128hr 20 - 2 72hr 30 - 3 48hr 40 - 4 96hr 50 - >>> from natsort import index_natsorted - >>> df.sort_values( - ... by="time", - ... key=lambda x: np.argsort(index_natsorted(df["time"])) - ... 
) - time value - 0 0hr 10 - 3 48hr 40 - 2 72hr 30 - 4 96hr 50 - 1 128hr 20 - """ - inplace = validate_bool_kwarg(inplace, "inplace") - axis = self._get_axis_number(axis) - ascending = validate_ascending(ascending) - if not isinstance(by, list): - by = [by] - # error: Argument 1 to "len" has incompatible type "Union[bool, List[bool]]"; - # expected "Sized" - if is_sequence(ascending) and ( - len(by) != len(ascending) # type: ignore[arg-type] - ): - # error: Argument 1 to "len" has incompatible type "Union[bool, - # List[bool]]"; expected "Sized" - raise ValueError( - f"Length of ascending ({len(ascending)})" # type: ignore[arg-type] - f" != length of by ({len(by)})" - ) - if len(by) > 1: - keys = [self._get_label_or_level_values(x, axis=axis) for x in by] - - # need to rewrap columns in Series to apply key function - if key is not None: - # error: List comprehension has incompatible type List[Series]; - # expected List[ndarray] - keys = [ - Series(k, name=name) # type: ignore[misc] - for (k, name) in zip(keys, by) - ] - - indexer = lexsort_indexer( - keys, orders=ascending, na_position=na_position, key=key - ) - elif len(by): - # len(by) == 1 - - k = self._get_label_or_level_values(by[0], axis=axis) - - # need to rewrap column in Series to apply key function - if key is not None: - # error: Incompatible types in assignment (expression has type - # "Series", variable has type "ndarray") - k = Series(k, name=by[0]) # type: ignore[assignment] - - if isinstance(ascending, (tuple, list)): - ascending = ascending[0] - - indexer = nargsort( - k, kind=kind, ascending=ascending, na_position=na_position, key=key - ) - else: - if inplace: - return self._update_inplace(self) - else: - return self.copy(deep=None) - - if is_range_indexer(indexer, len(indexer)): - result = self.copy(deep=(not inplace and not using_copy_on_write())) - if ignore_index: - result.index = default_index(len(result)) - - if inplace: - return self._update_inplace(result) - else: - return result - - new_data = self._mgr.take( - indexer, axis=self._get_block_manager_axis(axis), verify=False - ) - - if ignore_index: - new_data.set_axis( - self._get_block_manager_axis(axis), default_index(len(indexer)) - ) - - result = self._constructor_from_mgr(new_data, axes=new_data.axes) - if inplace: - return self._update_inplace(result) - else: - return result.__finalize__(self, method="sort_values") - - @overload - def sort_index( - self, - *, - axis: Axis = ..., - level: IndexLabel = ..., - ascending: bool | Sequence[bool] = ..., - inplace: Literal[True], - kind: SortKind = ..., - na_position: NaPosition = ..., - sort_remaining: bool = ..., - ignore_index: bool = ..., - key: IndexKeyFunc = ..., - ) -> None: - ... - - @overload - def sort_index( - self, - *, - axis: Axis = ..., - level: IndexLabel = ..., - ascending: bool | Sequence[bool] = ..., - inplace: Literal[False] = ..., - kind: SortKind = ..., - na_position: NaPosition = ..., - sort_remaining: bool = ..., - ignore_index: bool = ..., - key: IndexKeyFunc = ..., - ) -> DataFrame: - ... - - @overload - def sort_index( - self, - *, - axis: Axis = ..., - level: IndexLabel = ..., - ascending: bool | Sequence[bool] = ..., - inplace: bool = ..., - kind: SortKind = ..., - na_position: NaPosition = ..., - sort_remaining: bool = ..., - ignore_index: bool = ..., - key: IndexKeyFunc = ..., - ) -> DataFrame | None: - ... 
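As a small illustration of the multi-key path above (one direction per entry in `ascending`, resolved through a lexicographic sort of the keys), here is the documented behaviour with mixed sort directions:

import pandas as pd

df = pd.DataFrame({"grp": ["b", "a", "a", "b"], "val": [1, 3, 2, 4]})
print(df.sort_values(by=["grp", "val"], ascending=[True, False]))
#   grp  val
# 1   a    3
# 2   a    2
# 3   b    4
# 0   b    1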
- - def sort_index( - self, - *, - axis: Axis = 0, - level: IndexLabel | None = None, - ascending: bool | Sequence[bool] = True, - inplace: bool = False, - kind: SortKind = "quicksort", - na_position: NaPosition = "last", - sort_remaining: bool = True, - ignore_index: bool = False, - key: IndexKeyFunc | None = None, - ) -> DataFrame | None: - """ - Sort object by labels (along an axis). - - Returns a new DataFrame sorted by label if `inplace` argument is - ``False``, otherwise updates the original DataFrame and returns None. - - Parameters - ---------- - axis : {0 or 'index', 1 or 'columns'}, default 0 - The axis along which to sort. The value 0 identifies the rows, - and 1 identifies the columns. - level : int or level name or list of ints or list of level names - If not None, sort on values in specified index level(s). - ascending : bool or list-like of bools, default True - Sort ascending vs. descending. When the index is a MultiIndex the - sort direction can be controlled for each level individually. - inplace : bool, default False - Whether to modify the DataFrame rather than creating a new one. - kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, default 'quicksort' - Choice of sorting algorithm. See also :func:`numpy.sort` for more - information. `mergesort` and `stable` are the only stable algorithms. For - DataFrames, this option is only applied when sorting on a single - column or label. - na_position : {'first', 'last'}, default 'last' - Puts NaNs at the beginning if `first`; `last` puts NaNs at the end. - Not implemented for MultiIndex. - sort_remaining : bool, default True - If True and sorting by level and index is multilevel, sort by other - levels too (in order) after sorting by specified level. - ignore_index : bool, default False - If True, the resulting axis will be labeled 0, 1, …, n - 1. - key : callable, optional - If not None, apply the key function to the index values - before sorting. This is similar to the `key` argument in the - builtin :meth:`sorted` function, with the notable difference that - this `key` function should be *vectorized*. It should expect an - ``Index`` and return an ``Index`` of the same shape. For MultiIndex - inputs, the key is applied *per level*. - - Returns - ------- - DataFrame or None - The original DataFrame sorted by the labels or None if ``inplace=True``. - - See Also - -------- - Series.sort_index : Sort Series by the index. - DataFrame.sort_values : Sort DataFrame by the value. - Series.sort_values : Sort Series by the value. - - Examples - -------- - >>> df = pd.DataFrame([1, 2, 3, 4, 5], index=[100, 29, 234, 1, 150], - ... columns=['A']) - >>> df.sort_index() - A - 1 4 - 29 2 - 100 1 - 150 5 - 234 3 - - By default, it sorts in ascending order, to sort in descending order, - use ``ascending=False`` - - >>> df.sort_index(ascending=False) - A - 234 3 - 150 5 - 100 1 - 29 2 - 1 4 - - A key function can be specified which is applied to the index before - sorting. For a ``MultiIndex`` this is applied to each level separately. 
- - >>> df = pd.DataFrame({"a": [1, 2, 3, 4]}, index=['A', 'b', 'C', 'd']) - >>> df.sort_index(key=lambda x: x.str.lower()) - a - A 1 - b 2 - C 3 - d 4 - """ - return super().sort_index( - axis=axis, - level=level, - ascending=ascending, - inplace=inplace, - kind=kind, - na_position=na_position, - sort_remaining=sort_remaining, - ignore_index=ignore_index, - key=key, - ) - - def value_counts( - self, - subset: IndexLabel | None = None, - normalize: bool = False, - sort: bool = True, - ascending: bool = False, - dropna: bool = True, - ) -> Series: - """ - Return a Series containing the frequency of each distinct row in the Dataframe. - - Parameters - ---------- - subset : label or list of labels, optional - Columns to use when counting unique combinations. - normalize : bool, default False - Return proportions rather than frequencies. - sort : bool, default True - Sort by frequencies when True. Sort by DataFrame column values when False. - ascending : bool, default False - Sort in ascending order. - dropna : bool, default True - Don't include counts of rows that contain NA values. - - .. versionadded:: 1.3.0 - - Returns - ------- - Series - - See Also - -------- - Series.value_counts: Equivalent method on Series. - - Notes - ----- - The returned Series will have a MultiIndex with one level per input - column but an Index (non-multi) for a single label. By default, rows - that contain any NA values are omitted from the result. By default, - the resulting Series will be in descending order so that the first - element is the most frequently-occurring row. - - Examples - -------- - >>> df = pd.DataFrame({'num_legs': [2, 4, 4, 6], - ... 'num_wings': [2, 0, 0, 0]}, - ... index=['falcon', 'dog', 'cat', 'ant']) - >>> df - num_legs num_wings - falcon 2 2 - dog 4 0 - cat 4 0 - ant 6 0 - - >>> df.value_counts() - num_legs num_wings - 4 0 2 - 2 2 1 - 6 0 1 - Name: count, dtype: int64 - - >>> df.value_counts(sort=False) - num_legs num_wings - 2 2 1 - 4 0 2 - 6 0 1 - Name: count, dtype: int64 - - >>> df.value_counts(ascending=True) - num_legs num_wings - 2 2 1 - 6 0 1 - 4 0 2 - Name: count, dtype: int64 - - >>> df.value_counts(normalize=True) - num_legs num_wings - 4 0 0.50 - 2 2 0.25 - 6 0 0.25 - Name: proportion, dtype: float64 - - With `dropna` set to `False` we can also count rows with NA values. - - >>> df = pd.DataFrame({'first_name': ['John', 'Anne', 'John', 'Beth'], - ... 
'middle_name': ['Smith', pd.NA, pd.NA, 'Louise']}) - >>> df - first_name middle_name - 0 John Smith - 1 Anne - 2 John - 3 Beth Louise - - >>> df.value_counts() - first_name middle_name - Beth Louise 1 - John Smith 1 - Name: count, dtype: int64 - - >>> df.value_counts(dropna=False) - first_name middle_name - Anne NaN 1 - Beth Louise 1 - John Smith 1 - NaN 1 - Name: count, dtype: int64 - - >>> df.value_counts("first_name") - first_name - John 2 - Anne 1 - Beth 1 - Name: count, dtype: int64 - """ - if subset is None: - subset = self.columns.tolist() - - name = "proportion" if normalize else "count" - counts = self.groupby(subset, dropna=dropna, observed=False).grouper.size() - counts.name = name - - if sort: - counts = counts.sort_values(ascending=ascending) - if normalize: - counts /= counts.sum() - - # Force MultiIndex for a list_like subset with a single column - if is_list_like(subset) and len(subset) == 1: # type: ignore[arg-type] - counts.index = MultiIndex.from_arrays( - [counts.index], names=[counts.index.name] - ) - - return counts - - def nlargest( - self, n: int, columns: IndexLabel, keep: NsmallestNlargestKeep = "first" - ) -> DataFrame: - """ - Return the first `n` rows ordered by `columns` in descending order. - - Return the first `n` rows with the largest values in `columns`, in - descending order. The columns that are not specified are returned as - well, but not used for ordering. - - This method is equivalent to - ``df.sort_values(columns, ascending=False).head(n)``, but more - performant. - - Parameters - ---------- - n : int - Number of rows to return. - columns : label or list of labels - Column label(s) to order by. - keep : {'first', 'last', 'all'}, default 'first' - Where there are duplicate values: - - - ``first`` : prioritize the first occurrence(s) - - ``last`` : prioritize the last occurrence(s) - - ``all`` : do not drop any duplicates, even it means - selecting more than `n` items. - - Returns - ------- - DataFrame - The first `n` rows ordered by the given columns in descending - order. - - See Also - -------- - DataFrame.nsmallest : Return the first `n` rows ordered by `columns` in - ascending order. - DataFrame.sort_values : Sort DataFrame by the values. - DataFrame.head : Return the first `n` rows without re-ordering. - - Notes - ----- - This function cannot be used with all column types. For example, when - specifying columns with `object` or `category` dtypes, ``TypeError`` is - raised. - - Examples - -------- - >>> df = pd.DataFrame({'population': [59000000, 65000000, 434000, - ... 434000, 434000, 337000, 11300, - ... 11300, 11300], - ... 'GDP': [1937894, 2583560 , 12011, 4520, 12128, - ... 17036, 182, 38, 311], - ... 'alpha-2': ["IT", "FR", "MT", "MV", "BN", - ... "IS", "NR", "TV", "AI"]}, - ... index=["Italy", "France", "Malta", - ... "Maldives", "Brunei", "Iceland", - ... "Nauru", "Tuvalu", "Anguilla"]) - >>> df - population GDP alpha-2 - Italy 59000000 1937894 IT - France 65000000 2583560 FR - Malta 434000 12011 MT - Maldives 434000 4520 MV - Brunei 434000 12128 BN - Iceland 337000 17036 IS - Nauru 11300 182 NR - Tuvalu 11300 38 TV - Anguilla 11300 311 AI - - In the following example, we will use ``nlargest`` to select the three - rows having the largest values in column "population". 
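Before the `nlargest` examples continue below, a brief aside on the `value_counts` body shown above: it is essentially a group-by-size. A hedged way to reproduce its output with public API (up to the result name and the ordering of tied counts):

import pandas as pd

df = pd.DataFrame({"num_legs": [2, 4, 4, 6], "num_wings": [2, 0, 0, 0]})

via_groupby = (
    df.groupby(["num_legs", "num_wings"], dropna=True)  # dropna mirrors the default
      .size()
      .sort_values(ascending=False)
)
print(via_groupby)
# (4, 0) -> 2, (2, 2) -> 1, (6, 0) -> 1; tie order may vary. Compare df.value_counts().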
- - >>> df.nlargest(3, 'population') - population GDP alpha-2 - France 65000000 2583560 FR - Italy 59000000 1937894 IT - Malta 434000 12011 MT - - When using ``keep='last'``, ties are resolved in reverse order: - - >>> df.nlargest(3, 'population', keep='last') - population GDP alpha-2 - France 65000000 2583560 FR - Italy 59000000 1937894 IT - Brunei 434000 12128 BN - - When using ``keep='all'``, all duplicate items are maintained: - - >>> df.nlargest(3, 'population', keep='all') - population GDP alpha-2 - France 65000000 2583560 FR - Italy 59000000 1937894 IT - Malta 434000 12011 MT - Maldives 434000 4520 MV - Brunei 434000 12128 BN - - To order by the largest values in column "population" and then "GDP", - we can specify multiple columns like in the next example. - - >>> df.nlargest(3, ['population', 'GDP']) - population GDP alpha-2 - France 65000000 2583560 FR - Italy 59000000 1937894 IT - Brunei 434000 12128 BN - """ - return selectn.SelectNFrame(self, n=n, keep=keep, columns=columns).nlargest() - - def nsmallest( - self, n: int, columns: IndexLabel, keep: NsmallestNlargestKeep = "first" - ) -> DataFrame: - """ - Return the first `n` rows ordered by `columns` in ascending order. - - Return the first `n` rows with the smallest values in `columns`, in - ascending order. The columns that are not specified are returned as - well, but not used for ordering. - - This method is equivalent to - ``df.sort_values(columns, ascending=True).head(n)``, but more - performant. - - Parameters - ---------- - n : int - Number of items to retrieve. - columns : list or str - Column name or names to order by. - keep : {'first', 'last', 'all'}, default 'first' - Where there are duplicate values: - - - ``first`` : take the first occurrence. - - ``last`` : take the last occurrence. - - ``all`` : do not drop any duplicates, even it means - selecting more than `n` items. - - Returns - ------- - DataFrame - - See Also - -------- - DataFrame.nlargest : Return the first `n` rows ordered by `columns` in - descending order. - DataFrame.sort_values : Sort DataFrame by the values. - DataFrame.head : Return the first `n` rows without re-ordering. - - Examples - -------- - >>> df = pd.DataFrame({'population': [59000000, 65000000, 434000, - ... 434000, 434000, 337000, 337000, - ... 11300, 11300], - ... 'GDP': [1937894, 2583560 , 12011, 4520, 12128, - ... 17036, 182, 38, 311], - ... 'alpha-2': ["IT", "FR", "MT", "MV", "BN", - ... "IS", "NR", "TV", "AI"]}, - ... index=["Italy", "France", "Malta", - ... "Maldives", "Brunei", "Iceland", - ... "Nauru", "Tuvalu", "Anguilla"]) - >>> df - population GDP alpha-2 - Italy 59000000 1937894 IT - France 65000000 2583560 FR - Malta 434000 12011 MT - Maldives 434000 4520 MV - Brunei 434000 12128 BN - Iceland 337000 17036 IS - Nauru 337000 182 NR - Tuvalu 11300 38 TV - Anguilla 11300 311 AI - - In the following example, we will use ``nsmallest`` to select the - three rows having the smallest values in column "population". 
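Before that example, a quick hedged check of the `sort_values` equivalence stated for both selection methods (ignoring the tie-breaking behaviour that `keep` controls):

import pandas as pd

df = pd.DataFrame({"population": [10, 30, 20, 40]}, index=list("abcd"))
assert df.nsmallest(2, "population").equals(
    df.sort_values("population", ascending=True).head(2)
)
assert df.nlargest(2, "population").equals(
    df.sort_values("population", ascending=False).head(2)
)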
- - >>> df.nsmallest(3, 'population') - population GDP alpha-2 - Tuvalu 11300 38 TV - Anguilla 11300 311 AI - Iceland 337000 17036 IS - - When using ``keep='last'``, ties are resolved in reverse order: - - >>> df.nsmallest(3, 'population', keep='last') - population GDP alpha-2 - Anguilla 11300 311 AI - Tuvalu 11300 38 TV - Nauru 337000 182 NR - - When using ``keep='all'``, all duplicate items are maintained: - - >>> df.nsmallest(3, 'population', keep='all') - population GDP alpha-2 - Tuvalu 11300 38 TV - Anguilla 11300 311 AI - Iceland 337000 17036 IS - Nauru 337000 182 NR - - To order by the smallest values in column "population" and then "GDP", we can - specify multiple columns like in the next example. - - >>> df.nsmallest(3, ['population', 'GDP']) - population GDP alpha-2 - Tuvalu 11300 38 TV - Anguilla 11300 311 AI - Nauru 337000 182 NR - """ - return selectn.SelectNFrame(self, n=n, keep=keep, columns=columns).nsmallest() - - @doc( - Series.swaplevel, - klass=_shared_doc_kwargs["klass"], - extra_params=dedent( - """axis : {0 or 'index', 1 or 'columns'}, default 0 - The axis to swap levels on. 0 or 'index' for row-wise, 1 or - 'columns' for column-wise.""" - ), - examples=dedent( - """\ - Examples - -------- - >>> df = pd.DataFrame( - ... {"Grade": ["A", "B", "A", "C"]}, - ... index=[ - ... ["Final exam", "Final exam", "Coursework", "Coursework"], - ... ["History", "Geography", "History", "Geography"], - ... ["January", "February", "March", "April"], - ... ], - ... ) - >>> df - Grade - Final exam History January A - Geography February B - Coursework History March A - Geography April C - - In the following example, we will swap the levels of the indices. - Here, we will swap the levels column-wise, but levels can be swapped row-wise - in a similar manner. Note that column-wise is the default behaviour. - By not supplying any arguments for i and j, we swap the last and second to - last indices. - - >>> df.swaplevel() - Grade - Final exam January History A - February Geography B - Coursework March History A - April Geography C - - By supplying one argument, we can choose which index to swap the last - index with. We can for example swap the first index with the last one as - follows. - - >>> df.swaplevel(0) - Grade - January History Final exam A - February Geography Final exam B - March History Coursework A - April Geography Coursework C - - We can also define explicitly which indices we want to swap by supplying values - for both i and j. Here, we for example swap the first and second indices. - - >>> df.swaplevel(0, 1) - Grade - History Final exam January A - Geography Final exam February B - History Coursework March A - Geography Coursework April C""" - ), - ) - def swaplevel(self, i: Axis = -2, j: Axis = -1, axis: Axis = 0) -> DataFrame: - result = self.copy(deep=None) - - axis = self._get_axis_number(axis) - - if not isinstance(result._get_axis(axis), MultiIndex): # pragma: no cover - raise TypeError("Can only swap levels on a hierarchical axis.") - - if axis == 0: - assert isinstance(result.index, MultiIndex) - result.index = result.index.swaplevel(i, j) - else: - assert isinstance(result.columns, MultiIndex) - result.columns = result.columns.swaplevel(i, j) - return result - - def reorder_levels(self, order: Sequence[int | str], axis: Axis = 0) -> DataFrame: - """ - Rearrange index levels using input order. May not drop or duplicate levels. - - Parameters - ---------- - order : list of int or list of str - List representing new level order. 
Reference level by number - (position) or by key (label). - axis : {0 or 'index', 1 or 'columns'}, default 0 - Where to reorder levels. - - Returns - ------- - DataFrame - - Examples - -------- - >>> data = { - ... "class": ["Mammals", "Mammals", "Reptiles"], - ... "diet": ["Omnivore", "Carnivore", "Carnivore"], - ... "species": ["Humans", "Dogs", "Snakes"], - ... } - >>> df = pd.DataFrame(data, columns=["class", "diet", "species"]) - >>> df = df.set_index(["class", "diet"]) - >>> df - species - class diet - Mammals Omnivore Humans - Carnivore Dogs - Reptiles Carnivore Snakes - - Let's reorder the levels of the index: - - >>> df.reorder_levels(["diet", "class"]) - species - diet class - Omnivore Mammals Humans - Carnivore Mammals Dogs - Reptiles Snakes - """ - axis = self._get_axis_number(axis) - if not isinstance(self._get_axis(axis), MultiIndex): # pragma: no cover - raise TypeError("Can only reorder levels on a hierarchical axis.") - - result = self.copy(deep=None) - - if axis == 0: - assert isinstance(result.index, MultiIndex) - result.index = result.index.reorder_levels(order) - else: - assert isinstance(result.columns, MultiIndex) - result.columns = result.columns.reorder_levels(order) - return result - - # ---------------------------------------------------------------------- - # Arithmetic Methods - - def _cmp_method(self, other, op): - axis: Literal[1] = 1 # only relevant for Series other case - - self, other = self._align_for_op(other, axis, flex=False, level=None) - - # See GH#4537 for discussion of scalar op behavior - new_data = self._dispatch_frame_op(other, op, axis=axis) - return self._construct_result(new_data) - - def _arith_method(self, other, op): - if self._should_reindex_frame_op(other, op, 1, None, None): - return self._arith_method_with_reindex(other, op) - - axis: Literal[1] = 1 # only relevant for Series other case - other = ops.maybe_prepare_scalar_for_op(other, (self.shape[axis],)) - - self, other = self._align_for_op(other, axis, flex=True, level=None) - - with np.errstate(all="ignore"): - new_data = self._dispatch_frame_op(other, op, axis=axis) - return self._construct_result(new_data) - - _logical_method = _arith_method - - def _dispatch_frame_op( - self, right, func: Callable, axis: AxisInt | None = None - ) -> DataFrame: - """ - Evaluate the frame operation func(left, right) by evaluating - column-by-column, dispatching to the Series implementation. - - Parameters - ---------- - right : scalar, Series, or DataFrame - func : arithmetic or comparison operator - axis : {None, 0, 1} - - Returns - ------- - DataFrame - - Notes - ----- - Caller is responsible for setting np.errstate where relevant. - """ - # Get the appropriate array-op to apply to each column/block's values. - array_op = ops.get_array_op(func) - - right = lib.item_from_zerodim(right) - if not is_list_like(right): - # i.e. 
scalar, faster than checking np.ndim(right) == 0 - bm = self._mgr.apply(array_op, right=right) - return self._constructor_from_mgr(bm, axes=bm.axes) - - elif isinstance(right, DataFrame): - assert self.index.equals(right.index) - assert self.columns.equals(right.columns) - # TODO: The previous assertion `assert right._indexed_same(self)` - # fails in cases with empty columns reached via - # _frame_arith_method_with_reindex - - # TODO operate_blockwise expects a manager of the same type - bm = self._mgr.operate_blockwise( - # error: Argument 1 to "operate_blockwise" of "ArrayManager" has - # incompatible type "Union[ArrayManager, BlockManager]"; expected - # "ArrayManager" - # error: Argument 1 to "operate_blockwise" of "BlockManager" has - # incompatible type "Union[ArrayManager, BlockManager]"; expected - # "BlockManager" - right._mgr, # type: ignore[arg-type] - array_op, - ) - return self._constructor_from_mgr(bm, axes=bm.axes) - - elif isinstance(right, Series) and axis == 1: - # axis=1 means we want to operate row-by-row - assert right.index.equals(self.columns) - - right = right._values - # maybe_align_as_frame ensures we do not have an ndarray here - assert not isinstance(right, np.ndarray) - - arrays = [ - array_op(_left, _right) - for _left, _right in zip(self._iter_column_arrays(), right) - ] - - elif isinstance(right, Series): - assert right.index.equals(self.index) - right = right._values - - arrays = [array_op(left, right) for left in self._iter_column_arrays()] - - else: - raise NotImplementedError(right) - - return type(self)._from_arrays( - arrays, self.columns, self.index, verify_integrity=False - ) - - def _combine_frame(self, other: DataFrame, func, fill_value=None): - # at this point we have `self._indexed_same(other)` - - if fill_value is None: - # since _arith_op may be called in a loop, avoid function call - # overhead if possible by doing this check once - _arith_op = func - - else: - - def _arith_op(left, right): - # for the mixed_type case where we iterate over columns, - # _arith_op(left, right) is equivalent to - # left._binop(right, func, fill_value=fill_value) - left, right = ops.fill_binop(left, right, fill_value) - return func(left, right) - - new_data = self._dispatch_frame_op(other, _arith_op) - return new_data - - def _arith_method_with_reindex(self, right: DataFrame, op) -> DataFrame: - """ - For DataFrame-with-DataFrame operations that require reindexing, - operate only on shared columns, then reindex. - - Parameters - ---------- - right : DataFrame - op : binary operator - - Returns - ------- - DataFrame - """ - left = self - - # GH#31623, only operate on shared columns - cols, lcols, rcols = left.columns.join( - right.columns, how="inner", level=None, return_indexers=True - ) - - new_left = left.iloc[:, lcols] - new_right = right.iloc[:, rcols] - result = op(new_left, new_right) - - # Do the join on the columns instead of using left._align_for_op - # to avoid constructing two potentially large/sparse DataFrames - join_columns, _, _ = left.columns.join( - right.columns, how="outer", level=None, return_indexers=True - ) - - if result.columns.has_duplicates: - # Avoid reindexing with a duplicate axis. 
- # https://github.com/pandas-dev/pandas/issues/35194 - indexer, _ = result.columns.get_indexer_non_unique(join_columns) - indexer = algorithms.unique1d(indexer) - result = result._reindex_with_indexers( - {1: [join_columns, indexer]}, allow_dups=True - ) - else: - result = result.reindex(join_columns, axis=1) - - return result - - def _should_reindex_frame_op(self, right, op, axis: int, fill_value, level) -> bool: - """ - Check if this is an operation between DataFrames that will need to reindex. - """ - if op is operator.pow or op is roperator.rpow: - # GH#32685 pow has special semantics for operating with null values - return False - - if not isinstance(right, DataFrame): - return False - - if fill_value is None and level is None and axis == 1: - # TODO: any other cases we should handle here? - - # Intersection is always unique so we have to check the unique columns - left_uniques = self.columns.unique() - right_uniques = right.columns.unique() - cols = left_uniques.intersection(right_uniques) - if len(cols) and not ( - len(cols) == len(left_uniques) and len(cols) == len(right_uniques) - ): - # TODO: is there a shortcut available when len(cols) == 0? - return True - - return False - - def _align_for_op( - self, - other, - axis: AxisInt, - flex: bool | None = False, - level: Level | None = None, - ): - """ - Convert rhs to meet lhs dims if input is list, tuple or np.ndarray. - - Parameters - ---------- - left : DataFrame - right : Any - axis : int - flex : bool or None, default False - Whether this is a flex op, in which case we reindex. - None indicates not to check for alignment. - level : int or level name, default None - - Returns - ------- - left : DataFrame - right : Any - """ - left, right = self, other - - def to_series(right): - msg = ( - "Unable to coerce to Series, " - "length must be {req_len}: given {given_len}" - ) - - # pass dtype to avoid doing inference, which would break consistency - # with Index/Series ops - dtype = None - if getattr(right, "dtype", None) == object: - # can't pass right.dtype unconditionally as that would break on e.g. - # datetime64[h] ndarray - dtype = object - - if axis == 0: - if len(left.index) != len(right): - raise ValueError( - msg.format(req_len=len(left.index), given_len=len(right)) - ) - right = left._constructor_sliced(right, index=left.index, dtype=dtype) - else: - if len(left.columns) != len(right): - raise ValueError( - msg.format(req_len=len(left.columns), given_len=len(right)) - ) - right = left._constructor_sliced(right, index=left.columns, dtype=dtype) - return right - - if isinstance(right, np.ndarray): - if right.ndim == 1: - right = to_series(right) - - elif right.ndim == 2: - # We need to pass dtype=right.dtype to retain object dtype - # otherwise we lose consistency with Index and array ops - dtype = None - if right.dtype == object: - # can't pass right.dtype unconditionally as that would break on e.g. 
- # datetime64[h] ndarray - dtype = object - - if right.shape == left.shape: - right = left._constructor( - right, index=left.index, columns=left.columns, dtype=dtype - ) - - elif right.shape[0] == left.shape[0] and right.shape[1] == 1: - # Broadcast across columns - right = np.broadcast_to(right, left.shape) - right = left._constructor( - right, index=left.index, columns=left.columns, dtype=dtype - ) - - elif right.shape[1] == left.shape[1] and right.shape[0] == 1: - # Broadcast along rows - right = to_series(right[0, :]) - - else: - raise ValueError( - "Unable to coerce to DataFrame, shape " - f"must be {left.shape}: given {right.shape}" - ) - - elif right.ndim > 2: - raise ValueError( - "Unable to coerce to Series/DataFrame, " - f"dimension must be <= 2: {right.shape}" - ) - - elif is_list_like(right) and not isinstance(right, (Series, DataFrame)): - # GH#36702. Raise when attempting arithmetic with list of array-like. - if any(is_array_like(el) for el in right): - raise ValueError( - f"Unable to coerce list of {type(right[0])} to Series/DataFrame" - ) - # GH#17901 - right = to_series(right) - - if flex is not None and isinstance(right, DataFrame): - if not left._indexed_same(right): - if flex: - left, right = left.align( - right, join="outer", level=level, copy=False - ) - else: - raise ValueError( - "Can only compare identically-labeled (both index and columns) " - "DataFrame objects" - ) - elif isinstance(right, Series): - # axis=1 is default for DataFrame-with-Series op - axis = axis if axis is not None else 1 - if not flex: - if not left.axes[axis].equals(right.index): - raise ValueError( - "Operands are not aligned. Do " - "`left, right = left.align(right, axis=1, copy=False)` " - "before operating." - ) - - left, right = left.align( - right, - join="outer", - axis=axis, - level=level, - copy=False, - ) - right = left._maybe_align_series_as_frame(right, axis) - - return left, right - - def _maybe_align_series_as_frame(self, series: Series, axis: AxisInt): - """ - If the Series operand is not EA-dtype, we can broadcast to 2D and operate - blockwise. 
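A small example of the default DataFrame/Series broadcasting this helper supports: the Series aligns on the columns (axis=1) and is broadcast down the rows, which is roughly what allows the operation to run blockwise for non-extension dtypes.

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [10, 20]})
s = pd.Series({"a": 100, "b": 1000})
print(df + s)
#      a     b
# 0  101  1010
# 1  102  1020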
- """ - rvalues = series._values - if not isinstance(rvalues, np.ndarray): - # TODO(EA2D): no need to special-case with 2D EAs - if rvalues.dtype in ("datetime64[ns]", "timedelta64[ns]"): - # We can losslessly+cheaply cast to ndarray - rvalues = np.asarray(rvalues) - else: - return series - - if axis == 0: - rvalues = rvalues.reshape(-1, 1) - else: - rvalues = rvalues.reshape(1, -1) - - rvalues = np.broadcast_to(rvalues, self.shape) - # pass dtype to avoid doing inference - return self._constructor( - rvalues, - index=self.index, - columns=self.columns, - dtype=rvalues.dtype, - ) - - def _flex_arith_method( - self, other, op, *, axis: Axis = "columns", level=None, fill_value=None - ): - axis = self._get_axis_number(axis) if axis is not None else 1 - - if self._should_reindex_frame_op(other, op, axis, fill_value, level): - return self._arith_method_with_reindex(other, op) - - if isinstance(other, Series) and fill_value is not None: - # TODO: We could allow this in cases where we end up going - # through the DataFrame path - raise NotImplementedError(f"fill_value {fill_value} not supported.") - - other = ops.maybe_prepare_scalar_for_op(other, self.shape) - self, other = self._align_for_op(other, axis, flex=True, level=level) - - with np.errstate(all="ignore"): - if isinstance(other, DataFrame): - # Another DataFrame - new_data = self._combine_frame(other, op, fill_value) - - elif isinstance(other, Series): - new_data = self._dispatch_frame_op(other, op, axis=axis) - else: - # in this case we always have `np.ndim(other) == 0` - if fill_value is not None: - self = self.fillna(fill_value) - - new_data = self._dispatch_frame_op(other, op) - - return self._construct_result(new_data) - - def _construct_result(self, result) -> DataFrame: - """ - Wrap the result of an arithmetic, comparison, or logical operation. 
- - Parameters - ---------- - result : DataFrame - - Returns - ------- - DataFrame - """ - out = self._constructor(result, copy=False).__finalize__(self) - # Pin columns instead of passing to constructor for compat with - # non-unique columns case - out.columns = self.columns - out.index = self.index - return out - - def __divmod__(self, other) -> tuple[DataFrame, DataFrame]: - # Naive implementation, room for optimization - div = self // other - mod = self - div * other - return div, mod - - def __rdivmod__(self, other) -> tuple[DataFrame, DataFrame]: - # Naive implementation, room for optimization - div = other // self - mod = other - div * self - return div, mod - - def _flex_cmp_method(self, other, op, *, axis: Axis = "columns", level=None): - axis = self._get_axis_number(axis) if axis is not None else 1 - - self, other = self._align_for_op(other, axis, flex=True, level=level) - - new_data = self._dispatch_frame_op(other, op, axis=axis) - return self._construct_result(new_data) - - @Appender(ops.make_flex_doc("eq", "dataframe")) - def eq(self, other, axis: Axis = "columns", level=None): - return self._flex_cmp_method(other, operator.eq, axis=axis, level=level) - - @Appender(ops.make_flex_doc("ne", "dataframe")) - def ne(self, other, axis: Axis = "columns", level=None): - return self._flex_cmp_method(other, operator.ne, axis=axis, level=level) - - @Appender(ops.make_flex_doc("le", "dataframe")) - def le(self, other, axis: Axis = "columns", level=None): - return self._flex_cmp_method(other, operator.le, axis=axis, level=level) - - @Appender(ops.make_flex_doc("lt", "dataframe")) - def lt(self, other, axis: Axis = "columns", level=None): - return self._flex_cmp_method(other, operator.lt, axis=axis, level=level) - - @Appender(ops.make_flex_doc("ge", "dataframe")) - def ge(self, other, axis: Axis = "columns", level=None): - return self._flex_cmp_method(other, operator.ge, axis=axis, level=level) - - @Appender(ops.make_flex_doc("gt", "dataframe")) - def gt(self, other, axis: Axis = "columns", level=None): - return self._flex_cmp_method(other, operator.gt, axis=axis, level=level) - - @Appender(ops.make_flex_doc("add", "dataframe")) - def add(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, operator.add, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("radd", "dataframe")) - def radd(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, roperator.radd, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("sub", "dataframe")) - def sub(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, operator.sub, level=level, fill_value=fill_value, axis=axis - ) - - subtract = sub - - @Appender(ops.make_flex_doc("rsub", "dataframe")) - def rsub(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, roperator.rsub, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("mul", "dataframe")) - def mul(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, operator.mul, level=level, fill_value=fill_value, axis=axis - ) - - multiply = mul - - @Appender(ops.make_flex_doc("rmul", "dataframe")) - def rmul(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, roperator.rmul, level=level, 
fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("truediv", "dataframe")) - def truediv(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, operator.truediv, level=level, fill_value=fill_value, axis=axis - ) - - div = truediv - divide = truediv - - @Appender(ops.make_flex_doc("rtruediv", "dataframe")) - def rtruediv(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, roperator.rtruediv, level=level, fill_value=fill_value, axis=axis - ) - - rdiv = rtruediv - - @Appender(ops.make_flex_doc("floordiv", "dataframe")) - def floordiv(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, operator.floordiv, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("rfloordiv", "dataframe")) - def rfloordiv(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, roperator.rfloordiv, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("mod", "dataframe")) - def mod(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, operator.mod, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("rmod", "dataframe")) - def rmod(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, roperator.rmod, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("pow", "dataframe")) - def pow(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, operator.pow, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("rpow", "dataframe")) - def rpow(self, other, axis: Axis = "columns", level=None, fill_value=None): - return self._flex_arith_method( - other, roperator.rpow, level=level, fill_value=fill_value, axis=axis - ) - - # ---------------------------------------------------------------------- - # Combination-Related - - @doc( - _shared_docs["compare"], - dedent( - """ - Returns - ------- - DataFrame - DataFrame that shows the differences stacked side by side. - - The resulting index will be a MultiIndex with 'self' and 'other' - stacked alternately at the inner level. - - Raises - ------ - ValueError - When the two DataFrames don't have identical labels or shape. - - See Also - -------- - Series.compare : Compare with another Series and show differences. - DataFrame.equals : Test whether two objects contain the same elements. - - Notes - ----- - Matching NaNs will not appear as a difference. - - Can only compare identically-labeled - (i.e. same shape, identical row and column labels) DataFrames - - Examples - -------- - >>> df = pd.DataFrame( - ... {{ - ... "col1": ["a", "a", "b", "b", "a"], - ... "col2": [1.0, 2.0, 3.0, np.nan, 5.0], - ... "col3": [1.0, 2.0, 3.0, 4.0, 5.0] - ... }}, - ... columns=["col1", "col2", "col3"], - ... 
) - >>> df - col1 col2 col3 - 0 a 1.0 1.0 - 1 a 2.0 2.0 - 2 b 3.0 3.0 - 3 b NaN 4.0 - 4 a 5.0 5.0 - - >>> df2 = df.copy() - >>> df2.loc[0, 'col1'] = 'c' - >>> df2.loc[2, 'col3'] = 4.0 - >>> df2 - col1 col2 col3 - 0 c 1.0 1.0 - 1 a 2.0 2.0 - 2 b 3.0 4.0 - 3 b NaN 4.0 - 4 a 5.0 5.0 - - Align the differences on columns - - >>> df.compare(df2) - col1 col3 - self other self other - 0 a c NaN NaN - 2 NaN NaN 3.0 4.0 - - Assign result_names - - >>> df.compare(df2, result_names=("left", "right")) - col1 col3 - left right left right - 0 a c NaN NaN - 2 NaN NaN 3.0 4.0 - - Stack the differences on rows - - >>> df.compare(df2, align_axis=0) - col1 col3 - 0 self a NaN - other c NaN - 2 self NaN 3.0 - other NaN 4.0 - - Keep the equal values - - >>> df.compare(df2, keep_equal=True) - col1 col3 - self other self other - 0 a c 1.0 1.0 - 2 b b 3.0 4.0 - - Keep all original rows and columns - - >>> df.compare(df2, keep_shape=True) - col1 col2 col3 - self other self other self other - 0 a c NaN NaN NaN NaN - 1 NaN NaN NaN NaN NaN NaN - 2 NaN NaN NaN NaN 3.0 4.0 - 3 NaN NaN NaN NaN NaN NaN - 4 NaN NaN NaN NaN NaN NaN - - Keep all original rows and columns and also all original values - - >>> df.compare(df2, keep_shape=True, keep_equal=True) - col1 col2 col3 - self other self other self other - 0 a c 1.0 1.0 1.0 1.0 - 1 a a 2.0 2.0 2.0 2.0 - 2 b b 3.0 3.0 3.0 4.0 - 3 b b NaN NaN 4.0 4.0 - 4 a a 5.0 5.0 5.0 5.0 - """ - ), - klass=_shared_doc_kwargs["klass"], - ) - def compare( - self, - other: DataFrame, - align_axis: Axis = 1, - keep_shape: bool = False, - keep_equal: bool = False, - result_names: Suffixes = ("self", "other"), - ) -> DataFrame: - return super().compare( - other=other, - align_axis=align_axis, - keep_shape=keep_shape, - keep_equal=keep_equal, - result_names=result_names, - ) - - def combine( - self, - other: DataFrame, - func: Callable[[Series, Series], Series | Hashable], - fill_value=None, - overwrite: bool = True, - ) -> DataFrame: - """ - Perform column-wise combine with another DataFrame. - - Combines a DataFrame with `other` DataFrame using `func` - to element-wise combine columns. The row and column indexes of the - resulting DataFrame will be the union of the two. - - Parameters - ---------- - other : DataFrame - The DataFrame to merge column-wise. - func : function - Function that takes two series as inputs and return a Series or a - scalar. Used to merge the two dataframes column by columns. - fill_value : scalar value, default None - The value to fill NaNs with prior to passing any column to the - merge func. - overwrite : bool, default True - If True, columns in `self` that do not exist in `other` will be - overwritten with NaNs. - - Returns - ------- - DataFrame - Combination of the provided DataFrames. - - See Also - -------- - DataFrame.combine_first : Combine two DataFrame objects and default to - non-null values in frame calling the method. - - Examples - -------- - Combine using a simple function that chooses the smaller column. - - >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]}) - >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) - >>> take_smaller = lambda s1, s2: s1 if s1.sum() < s2.sum() else s2 - >>> df1.combine(df2, take_smaller) - A B - 0 0 3 - 1 0 3 - - Example using a true element-wise combine function. - - >>> df1 = pd.DataFrame({'A': [5, 0], 'B': [2, 4]}) - >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) - >>> df1.combine(df2, np.minimum) - A B - 0 1 2 - 1 0 3 - - Using `fill_value` fills Nones prior to passing the column to the - merge function. 
- - >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]}) - >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) - >>> df1.combine(df2, take_smaller, fill_value=-5) - A B - 0 0 -5.0 - 1 0 4.0 - - However, if the same element in both dataframes is None, that None - is preserved - - >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]}) - >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [None, 3]}) - >>> df1.combine(df2, take_smaller, fill_value=-5) - A B - 0 0 -5.0 - 1 0 3.0 - - Example that demonstrates the use of `overwrite` and behavior when - the axis differ between the dataframes. - - >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]}) - >>> df2 = pd.DataFrame({'B': [3, 3], 'C': [-10, 1], }, index=[1, 2]) - >>> df1.combine(df2, take_smaller) - A B C - 0 NaN NaN NaN - 1 NaN 3.0 -10.0 - 2 NaN 3.0 1.0 - - >>> df1.combine(df2, take_smaller, overwrite=False) - A B C - 0 0.0 NaN NaN - 1 0.0 3.0 -10.0 - 2 NaN 3.0 1.0 - - Demonstrating the preference of the passed in dataframe. - - >>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1], }, index=[1, 2]) - >>> df2.combine(df1, take_smaller) - A B C - 0 0.0 NaN NaN - 1 0.0 3.0 NaN - 2 NaN 3.0 NaN - - >>> df2.combine(df1, take_smaller, overwrite=False) - A B C - 0 0.0 NaN NaN - 1 0.0 3.0 1.0 - 2 NaN 3.0 1.0 - """ - other_idxlen = len(other.index) # save for compare - - this, other = self.align(other, copy=False) - new_index = this.index - - if other.empty and len(new_index) == len(self.index): - return self.copy() - - if self.empty and len(other) == other_idxlen: - return other.copy() - - # sorts if possible; otherwise align above ensures that these are set-equal - new_columns = this.columns.union(other.columns) - do_fill = fill_value is not None - result = {} - for col in new_columns: - series = this[col] - other_series = other[col] - - this_dtype = series.dtype - other_dtype = other_series.dtype - - this_mask = isna(series) - other_mask = isna(other_series) - - # don't overwrite columns unnecessarily - # DO propagate if this column is not in the intersection - if not overwrite and other_mask.all(): - result[col] = this[col].copy() - continue - - if do_fill: - series = series.copy() - other_series = other_series.copy() - series[this_mask] = fill_value - other_series[other_mask] = fill_value - - if col not in self.columns: - # If self DataFrame does not have col in other DataFrame, - # try to promote series, which is all NaN, as other_dtype. - new_dtype = other_dtype - try: - series = series.astype(new_dtype, copy=False) - except ValueError: - # e.g. new_dtype is integer types - pass - else: - # if we have different dtypes, possibly promote - new_dtype = find_common_type([this_dtype, other_dtype]) - series = series.astype(new_dtype, copy=False) - other_series = other_series.astype(new_dtype, copy=False) - - arr = func(series, other_series) - if isinstance(new_dtype, np.dtype): - # if new_dtype is an EA Dtype, then `func` is expected to return - # the correct dtype without any additional casting - # error: No overload variant of "maybe_downcast_to_dtype" matches - # argument types "Union[Series, Hashable]", "dtype[Any]" - arr = maybe_downcast_to_dtype( # type: ignore[call-overload] - arr, new_dtype - ) - - result[col] = arr - - # convert_objects just in case - frame_result = self._constructor(result, index=new_index, columns=new_columns) - return frame_result.__finalize__(self, method="combine") - - def combine_first(self, other: DataFrame) -> DataFrame: - """ - Update null elements with value in the same location in `other`. 
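A small, hedged demonstration of the `combine_first` semantics summarized above; because the labels of the two frames already align in this sketch, the result coincides with a plain `fillna` patch.

import pandas as pd

df1 = pd.DataFrame({"A": [None, 5], "C": [2, None]})
df2 = pd.DataFrame({"A": [3, 3], "C": [9, 9]})
print(df1.combine_first(df2))
#      A    C
# 0  3.0  2.0
# 1  5.0  9.0
# same values as df1.fillna(df2) in this already-aligned case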
- - Combine two DataFrame objects by filling null values in one DataFrame - with non-null values from other DataFrame. The row and column indexes - of the resulting DataFrame will be the union of the two. The resulting - dataframe contains the 'first' dataframe values and overrides the - second one values where both first.loc[index, col] and - second.loc[index, col] are not missing values, upon calling - first.combine_first(second). - - Parameters - ---------- - other : DataFrame - Provided DataFrame to use to fill null values. - - Returns - ------- - DataFrame - The result of combining the provided DataFrame with the other object. - - See Also - -------- - DataFrame.combine : Perform series-wise operation on two DataFrames - using a given function. - - Examples - -------- - >>> df1 = pd.DataFrame({'A': [None, 0], 'B': [None, 4]}) - >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) - >>> df1.combine_first(df2) - A B - 0 1.0 3.0 - 1 0.0 4.0 - - Null values still persist if the location of that null value - does not exist in `other` - - >>> df1 = pd.DataFrame({'A': [None, 0], 'B': [4, None]}) - >>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1]}, index=[1, 2]) - >>> df1.combine_first(df2) - A B C - 0 NaN 4.0 NaN - 1 0.0 3.0 1.0 - 2 NaN 3.0 1.0 - """ - from pandas.core.computation import expressions - - def combiner(x, y): - mask = extract_array(isna(x)) - - x_values = extract_array(x, extract_numpy=True) - y_values = extract_array(y, extract_numpy=True) - - # If the column y in other DataFrame is not in first DataFrame, - # just return y_values. - if y.name not in self.columns: - return y_values - - return expressions.where(mask, y_values, x_values) - - if len(other) == 0: - combined = self.reindex( - self.columns.append(other.columns.difference(self.columns)), axis=1 - ) - combined = combined.astype(other.dtypes) - else: - combined = self.combine(other, combiner, overwrite=False) - - dtypes = { - col: find_common_type([self.dtypes[col], other.dtypes[col]]) - for col in self.columns.intersection(other.columns) - if combined.dtypes[col] != self.dtypes[col] - } - - if dtypes: - combined = combined.astype(dtypes) - - return combined.__finalize__(self, method="combine_first") - - def update( - self, - other, - join: UpdateJoin = "left", - overwrite: bool = True, - filter_func=None, - errors: IgnoreRaise = "ignore", - ) -> None: - """ - Modify in place using non-NA values from another DataFrame. - - Aligns on indices. There is no return value. - - Parameters - ---------- - other : DataFrame, or object coercible into a DataFrame - Should have at least one matching index/column label - with the original DataFrame. If a Series is passed, - its name attribute must be set, and that will be - used as the column name to align with the original DataFrame. - join : {'left'}, default 'left' - Only left join is implemented, keeping the index and columns of the - original object. - overwrite : bool, default True - How to handle non-NA values for overlapping keys: - - * True: overwrite original DataFrame's values - with values from `other`. - * False: only update values that are NA in - the original DataFrame. - - filter_func : callable(1d-array) -> bool 1d-array, optional - Can choose to replace values other than NA. Return True for values - that should be updated. - errors : {'raise', 'ignore'}, default 'ignore' - If 'raise', will raise a ValueError if the DataFrame and `other` - both contain non-NA data in the same place. - - Returns - ------- - None - This method directly changes calling object. 
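The core of `update` below is a per-column mask-and-where: with the default `overwrite=True`, positions where `other` is NA keep the original values and everything else is taken from `other`. A rough sketch of that masking with public API (the real code routes through `expressions.where`):

import numpy as np
import pandas as pd

this = pd.Series([1.0, 2.0, 3.0])
that = pd.Series([10.0, np.nan, 30.0])

mask = that.isna()              # True where `other` has nothing to contribute
print(this.where(mask, that))   # keep `this` where mask is True, else take `that`
# 0    10.0
# 1     2.0
# 2    30.0
# dtype: float64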
- - Raises - ------ - ValueError - * When `errors='raise'` and there's overlapping non-NA data. - * When `errors` is not either `'ignore'` or `'raise'` - NotImplementedError - * If `join != 'left'` - - See Also - -------- - dict.update : Similar method for dictionaries. - DataFrame.merge : For column(s)-on-column(s) operations. - - Examples - -------- - >>> df = pd.DataFrame({'A': [1, 2, 3], - ... 'B': [400, 500, 600]}) - >>> new_df = pd.DataFrame({'B': [4, 5, 6], - ... 'C': [7, 8, 9]}) - >>> df.update(new_df) - >>> df - A B - 0 1 4 - 1 2 5 - 2 3 6 - - The DataFrame's length does not increase as a result of the update, - only values at matching index/column labels are updated. - - >>> df = pd.DataFrame({'A': ['a', 'b', 'c'], - ... 'B': ['x', 'y', 'z']}) - >>> new_df = pd.DataFrame({'B': ['d', 'e', 'f', 'g', 'h', 'i']}) - >>> df.update(new_df) - >>> df - A B - 0 a d - 1 b e - 2 c f - - For Series, its name attribute must be set. - - >>> df = pd.DataFrame({'A': ['a', 'b', 'c'], - ... 'B': ['x', 'y', 'z']}) - >>> new_column = pd.Series(['d', 'e'], name='B', index=[0, 2]) - >>> df.update(new_column) - >>> df - A B - 0 a d - 1 b y - 2 c e - >>> df = pd.DataFrame({'A': ['a', 'b', 'c'], - ... 'B': ['x', 'y', 'z']}) - >>> new_df = pd.DataFrame({'B': ['d', 'e']}, index=[1, 2]) - >>> df.update(new_df) - >>> df - A B - 0 a x - 1 b d - 2 c e - - If `other` contains NaNs the corresponding values are not updated - in the original dataframe. - - >>> df = pd.DataFrame({'A': [1, 2, 3], - ... 'B': [400, 500, 600]}) - >>> new_df = pd.DataFrame({'B': [4, np.nan, 6]}) - >>> df.update(new_df) - >>> df - A B - 0 1 4 - 1 2 500 - 2 3 6 - """ - if not PYPY and using_copy_on_write(): - if sys.getrefcount(self) <= REF_COUNT: - warnings.warn( - _chained_assignment_method_msg, - ChainedAssignmentError, - stacklevel=2, - ) - - from pandas.core.computation import expressions - - # TODO: Support other joins - if join != "left": # pragma: no cover - raise NotImplementedError("Only left join is supported") - if errors not in ["ignore", "raise"]: - raise ValueError("The parameter errors must be either 'ignore' or 'raise'") - - if not isinstance(other, DataFrame): - other = DataFrame(other) - - other = other.reindex(self.index) - - for col in self.columns.intersection(other.columns): - this = self[col]._values - that = other[col]._values - - if filter_func is not None: - mask = ~filter_func(this) | isna(that) - else: - if errors == "raise": - mask_this = notna(that) - mask_that = notna(this) - if any(mask_this & mask_that): - raise ValueError("Data overlaps.") - - if overwrite: - mask = isna(that) - else: - mask = notna(this) - - # don't overwrite columns unnecessarily - if mask.all(): - continue - - self.loc[:, col] = expressions.where(mask, this, that) - - # ---------------------------------------------------------------------- - # Data reshaping - @Appender( - dedent( - """ - Examples - -------- - >>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon', - ... 'Parrot', 'Parrot'], - ... 'Max Speed': [380., 370., 24., 26.]}) - >>> df - Animal Max Speed - 0 Falcon 380.0 - 1 Falcon 370.0 - 2 Parrot 24.0 - 3 Parrot 26.0 - >>> df.groupby(['Animal']).mean() - Max Speed - Animal - Falcon 375.0 - Parrot 25.0 - - **Hierarchical Indexes** - - We can groupby different levels of a hierarchical index - using the `level` parameter: - - >>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'], - ... 
['Captive', 'Wild', 'Captive', 'Wild']] - >>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type')) - >>> df = pd.DataFrame({'Max Speed': [390., 350., 30., 20.]}, - ... index=index) - >>> df - Max Speed - Animal Type - Falcon Captive 390.0 - Wild 350.0 - Parrot Captive 30.0 - Wild 20.0 - >>> df.groupby(level=0).mean() - Max Speed - Animal - Falcon 370.0 - Parrot 25.0 - >>> df.groupby(level="Type").mean() - Max Speed - Type - Captive 210.0 - Wild 185.0 - - We can also choose to include NA in group keys or not by setting - `dropna` parameter, the default setting is `True`. - - >>> l = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]] - >>> df = pd.DataFrame(l, columns=["a", "b", "c"]) - - >>> df.groupby(by=["b"]).sum() - a c - b - 1.0 2 3 - 2.0 2 5 - - >>> df.groupby(by=["b"], dropna=False).sum() - a c - b - 1.0 2 3 - 2.0 2 5 - NaN 1 4 - - >>> l = [["a", 12, 12], [None, 12.3, 33.], ["b", 12.3, 123], ["a", 1, 1]] - >>> df = pd.DataFrame(l, columns=["a", "b", "c"]) - - >>> df.groupby(by="a").sum() - b c - a - a 13.0 13.0 - b 12.3 123.0 - - >>> df.groupby(by="a", dropna=False).sum() - b c - a - a 13.0 13.0 - b 12.3 123.0 - NaN 12.3 33.0 - - When using ``.apply()``, use ``group_keys`` to include or exclude the - group keys. The ``group_keys`` argument defaults to ``True`` (include). - - >>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon', - ... 'Parrot', 'Parrot'], - ... 'Max Speed': [380., 370., 24., 26.]}) - >>> df.groupby("Animal", group_keys=True).apply(lambda x: x) - Animal Max Speed - Animal - Falcon 0 Falcon 380.0 - 1 Falcon 370.0 - Parrot 2 Parrot 24.0 - 3 Parrot 26.0 - - >>> df.groupby("Animal", group_keys=False).apply(lambda x: x) - Animal Max Speed - 0 Falcon 380.0 - 1 Falcon 370.0 - 2 Parrot 24.0 - 3 Parrot 26.0 - """ - ) - ) - @Appender(_shared_docs["groupby"] % _shared_doc_kwargs) - def groupby( - self, - by=None, - axis: Axis | lib.NoDefault = lib.no_default, - level: IndexLabel | None = None, - as_index: bool = True, - sort: bool = True, - group_keys: bool = True, - observed: bool | lib.NoDefault = lib.no_default, - dropna: bool = True, - ) -> DataFrameGroupBy: - if axis is not lib.no_default: - axis = self._get_axis_number(axis) - if axis == 1: - warnings.warn( - "DataFrame.groupby with axis=1 is deprecated. Do " - "`frame.T.groupby(...)` without axis instead.", - FutureWarning, - stacklevel=find_stack_level(), - ) - else: - warnings.warn( - "The 'axis' keyword in DataFrame.groupby is deprecated and " - "will be removed in a future version.", - FutureWarning, - stacklevel=find_stack_level(), - ) - else: - axis = 0 - - from pandas.core.groupby.generic import DataFrameGroupBy - - if level is None and by is None: - raise TypeError("You have to supply one of 'by' and 'level'") - - return DataFrameGroupBy( - obj=self, - keys=by, - axis=axis, - level=level, - as_index=as_index, - sort=sort, - group_keys=group_keys, - observed=observed, - dropna=dropna, - ) - - _shared_docs[ - "pivot" - ] = """ - Return reshaped DataFrame organized by given index / column values. - - Reshape data (produce a "pivot" table) based on column values. Uses - unique values from specified `index` / `columns` to form axes of the - resulting DataFrame. This function does not support data - aggregation, multiple values will result in a MultiIndex in the - columns. See the :ref:`User Guide ` for more on reshaping. - - Parameters - ----------%s - columns : str or object or a list of str - Column to use to make new frame's columns. 
- index : str or object or a list of str, optional - Column to use to make new frame's index. If not given, uses existing index. - values : str, object or a list of the previous, optional - Column(s) to use for populating new frame's values. If not - specified, all remaining columns will be used and the result will - have hierarchically indexed columns. - - Returns - ------- - DataFrame - Returns reshaped DataFrame. - - Raises - ------ - ValueError: - When there are any `index`, `columns` combinations with multiple - values. `DataFrame.pivot_table` when you need to aggregate. - - See Also - -------- - DataFrame.pivot_table : Generalization of pivot that can handle - duplicate values for one index/column pair. - DataFrame.unstack : Pivot based on the index values instead of a - column. - wide_to_long : Wide panel to long format. Less flexible but more - user-friendly than melt. - - Notes - ----- - For finer-tuned control, see hierarchical indexing documentation along - with the related stack/unstack methods. - - Reference :ref:`the user guide ` for more examples. - - Examples - -------- - >>> df = pd.DataFrame({'foo': ['one', 'one', 'one', 'two', 'two', - ... 'two'], - ... 'bar': ['A', 'B', 'C', 'A', 'B', 'C'], - ... 'baz': [1, 2, 3, 4, 5, 6], - ... 'zoo': ['x', 'y', 'z', 'q', 'w', 't']}) - >>> df - foo bar baz zoo - 0 one A 1 x - 1 one B 2 y - 2 one C 3 z - 3 two A 4 q - 4 two B 5 w - 5 two C 6 t - - >>> df.pivot(index='foo', columns='bar', values='baz') - bar A B C - foo - one 1 2 3 - two 4 5 6 - - >>> df.pivot(index='foo', columns='bar')['baz'] - bar A B C - foo - one 1 2 3 - two 4 5 6 - - >>> df.pivot(index='foo', columns='bar', values=['baz', 'zoo']) - baz zoo - bar A B C A B C - foo - one 1 2 3 x y z - two 4 5 6 q w t - - You could also assign a list of column names or a list of index names. - - >>> df = pd.DataFrame({ - ... "lev1": [1, 1, 1, 2, 2, 2], - ... "lev2": [1, 1, 2, 1, 1, 2], - ... "lev3": [1, 2, 1, 2, 1, 2], - ... "lev4": [1, 2, 3, 4, 5, 6], - ... "values": [0, 1, 2, 3, 4, 5]}) - >>> df - lev1 lev2 lev3 lev4 values - 0 1 1 1 1 0 - 1 1 1 2 2 1 - 2 1 2 1 3 2 - 3 2 1 2 4 3 - 4 2 1 1 5 4 - 5 2 2 2 6 5 - - >>> df.pivot(index="lev1", columns=["lev2", "lev3"], values="values") - lev2 1 2 - lev3 1 2 1 2 - lev1 - 1 0.0 1.0 2.0 NaN - 2 4.0 3.0 NaN 5.0 - - >>> df.pivot(index=["lev1", "lev2"], columns=["lev3"], values="values") - lev3 1 2 - lev1 lev2 - 1 1 0.0 1.0 - 2 2.0 NaN - 2 1 4.0 3.0 - 2 NaN 5.0 - - A ValueError is raised if there are any duplicates. - - >>> df = pd.DataFrame({"foo": ['one', 'one', 'two', 'two'], - ... "bar": ['A', 'A', 'B', 'C'], - ... "baz": [1, 2, 3, 4]}) - >>> df - foo bar baz - 0 one A 1 - 1 one A 2 - 2 two B 3 - 3 two C 4 - - Notice that the first two rows are the same for our `index` - and `columns` arguments. - - >>> df.pivot(index='foo', columns='bar', values='baz') - Traceback (most recent call last): - ... - ValueError: Index contains duplicate entries, cannot reshape - """ - - @Substitution("") - @Appender(_shared_docs["pivot"]) - def pivot( - self, *, columns, index=lib.no_default, values=lib.no_default - ) -> DataFrame: - from pandas.core.reshape.pivot import pivot - - return pivot(self, index=index, columns=columns, values=values) - - _shared_docs[ - "pivot_table" - ] = """ - Create a spreadsheet-style pivot table as a DataFrame. - - The levels in the pivot table will be stored in MultiIndex objects - (hierarchical indexes) on the index and columns of the result DataFrame. 
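A hedged contrast between the two reshaping docs here: `pivot` refuses to reshape duplicate index/column pairs, while `pivot_table` resolves them with `aggfunc`.

import pandas as pd

df = pd.DataFrame({"foo": ["one", "one"], "bar": ["A", "A"], "baz": [1, 2]})
try:
    df.pivot(index="foo", columns="bar", values="baz")
except ValueError as err:
    print(err)   # duplicate entries cannot be reshaped without aggregation

print(df.pivot_table(index="foo", columns="bar", values="baz", aggfunc="mean"))
# bar    A
# foo
# one  1.5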
- - Parameters - ----------%s - values : list-like or scalar, optional - Column or columns to aggregate. - index : column, Grouper, array, or list of the previous - Keys to group by on the pivot table index. If a list is passed, - it can contain any of the other types (except list). If an array is - passed, it must be the same length as the data and will be used in - the same manner as column values. - columns : column, Grouper, array, or list of the previous - Keys to group by on the pivot table column. If a list is passed, - it can contain any of the other types (except list). If an array is - passed, it must be the same length as the data and will be used in - the same manner as column values. - aggfunc : function, list of functions, dict, default "mean" - If a list of functions is passed, the resulting pivot table will have - hierarchical columns whose top level are the function names - (inferred from the function objects themselves). - If a dict is passed, the key is column to aggregate and the value is - function or list of functions. If ``margin=True``, aggfunc will be - used to calculate the partial aggregates. - fill_value : scalar, default None - Value to replace missing values with (in the resulting pivot table, - after aggregation). - margins : bool, default False - If ``margins=True``, special ``All`` columns and rows - will be added with partial group aggregates across the categories - on the rows and columns. - dropna : bool, default True - Do not include columns whose entries are all NaN. If True, - rows with a NaN value in any column will be omitted before - computing margins. - margins_name : str, default 'All' - Name of the row / column that will contain the totals - when margins is True. - observed : bool, default False - This only applies if any of the groupers are Categoricals. - If True: only show observed values for categorical groupers. - If False: show all values for categorical groupers. - - sort : bool, default True - Specifies if the result should be sorted. - - .. versionadded:: 1.3.0 - - Returns - ------- - DataFrame - An Excel style pivot table. - - See Also - -------- - DataFrame.pivot : Pivot without aggregation that can handle - non-numeric data. - DataFrame.melt: Unpivot a DataFrame from wide to long format, - optionally leaving identifiers set. - wide_to_long : Wide panel to long format. Less flexible but more - user-friendly than melt. - - Notes - ----- - Reference :ref:`the user guide ` for more examples. - - Examples - -------- - >>> df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo", - ... "bar", "bar", "bar", "bar"], - ... "B": ["one", "one", "one", "two", "two", - ... "one", "one", "two", "two"], - ... "C": ["small", "large", "large", "small", - ... "small", "large", "small", "small", - ... "large"], - ... "D": [1, 2, 2, 3, 3, 4, 5, 6, 7], - ... "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]}) - >>> df - A B C D E - 0 foo one small 1 2 - 1 foo one large 2 4 - 2 foo one large 2 5 - 3 foo two small 3 5 - 4 foo two small 3 6 - 5 bar one large 4 6 - 6 bar one small 5 8 - 7 bar two small 6 9 - 8 bar two large 7 9 - - This first example aggregates values by taking the sum. - - >>> table = pd.pivot_table(df, values='D', index=['A', 'B'], - ... columns=['C'], aggfunc="sum") - >>> table - C large small - A B - bar one 4.0 5.0 - two 7.0 6.0 - foo one 4.0 1.0 - two NaN 6.0 - - We can also fill missing values using the `fill_value` parameter. - - >>> table = pd.pivot_table(df, values='D', index=['A', 'B'], - ... 
columns=['C'], aggfunc="sum", fill_value=0) - >>> table - C large small - A B - bar one 4 5 - two 7 6 - foo one 4 1 - two 0 6 - - The next example aggregates by taking the mean across multiple columns. - - >>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'], - ... aggfunc={'D': "mean", 'E': "mean"}) - >>> table - D E - A C - bar large 5.500000 7.500000 - small 5.500000 8.500000 - foo large 2.000000 4.500000 - small 2.333333 4.333333 - - We can also calculate multiple types of aggregations for any given - value column. - - >>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'], - ... aggfunc={'D': "mean", - ... 'E': ["min", "max", "mean"]}) - >>> table - D E - mean max mean min - A C - bar large 5.500000 9 7.500000 6 - small 5.500000 9 8.500000 8 - foo large 2.000000 5 4.500000 4 - small 2.333333 6 4.333333 2 - """ - - @Substitution("") - @Appender(_shared_docs["pivot_table"]) - def pivot_table( - self, - values=None, - index=None, - columns=None, - aggfunc: AggFuncType = "mean", - fill_value=None, - margins: bool = False, - dropna: bool = True, - margins_name: Level = "All", - observed: bool = False, - sort: bool = True, - ) -> DataFrame: - from pandas.core.reshape.pivot import pivot_table - - return pivot_table( - self, - values=values, - index=index, - columns=columns, - aggfunc=aggfunc, - fill_value=fill_value, - margins=margins, - dropna=dropna, - margins_name=margins_name, - observed=observed, - sort=sort, - ) - - def stack( - self, - level: IndexLabel = -1, - dropna: bool | lib.NoDefault = lib.no_default, - sort: bool | lib.NoDefault = lib.no_default, - future_stack: bool = False, - ): - """ - Stack the prescribed level(s) from columns to index. - - Return a reshaped DataFrame or Series having a multi-level - index with one or more new inner-most levels compared to the current - DataFrame. The new inner-most levels are created by pivoting the - columns of the current dataframe: - - - if the columns have a single level, the output is a Series; - - if the columns have multiple levels, the new index - level(s) is (are) taken from the prescribed level(s) and - the output is a DataFrame. - - Parameters - ---------- - level : int, str, list, default -1 - Level(s) to stack from the column axis onto the index - axis, defined as one index or label, or a list of indices - or labels. - dropna : bool, default True - Whether to drop rows in the resulting Frame/Series with - missing values. Stacking a column level onto the index - axis can create combinations of index and column values - that are missing from the original dataframe. See Examples - section. - sort : bool, default True - Whether to sort the levels of the resulting MultiIndex. - future_stack : bool, default False - Whether to use the new implementation that will replace the current - implementation in pandas 3.0. When True, dropna and sort have no impact - on the result and must remain unspecified. See :ref:`pandas 2.1.0 Release - notes ` for more details. - - Returns - ------- - DataFrame or Series - Stacked dataframe or series. - - See Also - -------- - DataFrame.unstack : Unstack prescribed level(s) from index axis - onto column axis. - DataFrame.pivot : Reshape dataframe from long format to wide - format. - DataFrame.pivot_table : Create a spreadsheet-style pivot table - as a DataFrame. 
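The ``margins`` option described in the ``pivot_table`` documentation above is not exercised in its examples; the short sketch below, on invented data, shows the extra ``All`` row and column of partial aggregates it adds:

import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "bar", "bar"],
                   "C": ["small", "large", "small", "large"],
                   "D": [1, 2, 3, 4]})

# margins=True appends a row and a column of partial aggregates, labelled
# with margins_name (default "All").
table = df.pivot_table(values="D", index="A", columns="C",
                       aggfunc="sum", margins=True)
print(table)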
- - Notes - ----- - The function is named by analogy with a collection of books - being reorganized from being side by side on a horizontal - position (the columns of the dataframe) to being stacked - vertically on top of each other (in the index of the - dataframe). - - Reference :ref:`the user guide ` for more examples. - - Examples - -------- - **Single level columns** - - >>> df_single_level_cols = pd.DataFrame([[0, 1], [2, 3]], - ... index=['cat', 'dog'], - ... columns=['weight', 'height']) - - Stacking a dataframe with a single level column axis returns a Series: - - >>> df_single_level_cols - weight height - cat 0 1 - dog 2 3 - >>> df_single_level_cols.stack(future_stack=True) - cat weight 0 - height 1 - dog weight 2 - height 3 - dtype: int64 - - **Multi level columns: simple case** - - >>> multicol1 = pd.MultiIndex.from_tuples([('weight', 'kg'), - ... ('weight', 'pounds')]) - >>> df_multi_level_cols1 = pd.DataFrame([[1, 2], [2, 4]], - ... index=['cat', 'dog'], - ... columns=multicol1) - - Stacking a dataframe with a multi-level column axis: - - >>> df_multi_level_cols1 - weight - kg pounds - cat 1 2 - dog 2 4 - >>> df_multi_level_cols1.stack(future_stack=True) - weight - cat kg 1 - pounds 2 - dog kg 2 - pounds 4 - - **Missing values** - - >>> multicol2 = pd.MultiIndex.from_tuples([('weight', 'kg'), - ... ('height', 'm')]) - >>> df_multi_level_cols2 = pd.DataFrame([[1.0, 2.0], [3.0, 4.0]], - ... index=['cat', 'dog'], - ... columns=multicol2) - - It is common to have missing values when stacking a dataframe - with multi-level columns, as the stacked dataframe typically - has more values than the original dataframe. Missing values - are filled with NaNs: - - >>> df_multi_level_cols2 - weight height - kg m - cat 1.0 2.0 - dog 3.0 4.0 - >>> df_multi_level_cols2.stack(future_stack=True) - weight height - cat kg 1.0 NaN - m NaN 2.0 - dog kg 3.0 NaN - m NaN 4.0 - - **Prescribing the level(s) to be stacked** - - The first parameter controls which level or levels are stacked: - - >>> df_multi_level_cols2.stack(0, future_stack=True) - kg m - cat weight 1.0 NaN - height NaN 2.0 - dog weight 3.0 NaN - height NaN 4.0 - >>> df_multi_level_cols2.stack([0, 1], future_stack=True) - cat weight kg 1.0 - height m 2.0 - dog weight kg 3.0 - height m 4.0 - dtype: float64 - - **Dropping missing values** - - >>> df_multi_level_cols3 = pd.DataFrame([[None, 1.0], [2.0, 3.0]], - ... index=['cat', 'dog'], - ... columns=multicol2) - - Note that rows where all values are missing are dropped by - default but this behaviour can be controlled via the dropna - keyword parameter: - - >>> df_multi_level_cols3 - weight height - kg m - cat NaN 1.0 - dog 2.0 3.0 - >>> df_multi_level_cols3.stack(dropna=False) - weight height - cat kg NaN NaN - m NaN 1.0 - dog kg 2.0 NaN - m NaN 3.0 - >>> df_multi_level_cols3.stack(dropna=True) - weight height - cat m NaN 1.0 - dog kg 2.0 NaN - m NaN 3.0 - """ - if not future_stack: - from pandas.core.reshape.reshape import ( - stack, - stack_multiple, - ) - - if dropna is lib.no_default: - dropna = True - if sort is lib.no_default: - sort = True - - if isinstance(level, (tuple, list)): - result = stack_multiple(self, level, dropna=dropna, sort=sort) - else: - result = stack(self, level, dropna=dropna, sort=sort) - else: - from pandas.core.reshape.reshape import stack_v3 - - if dropna is not lib.no_default: - raise ValueError( - "dropna must be unspecified with future_stack=True as the new " - "implementation does not introduce rows of NA values. 
This " - "argument will be removed in a future version of pandas." - ) - - if sort is not lib.no_default: - raise ValueError( - "Cannot specify sort with future_stack=True, this argument will be " - "removed in a future version of pandas. Sort the result using " - ".sort_index instead." - ) - - if ( - isinstance(level, (tuple, list)) - and not all(lev in self.columns.names for lev in level) - and not all(isinstance(lev, int) for lev in level) - ): - raise ValueError( - "level should contain all level names or all level " - "numbers, not a mixture of the two." - ) - - if not isinstance(level, (tuple, list)): - level = [level] - level = [self.columns._get_level_number(lev) for lev in level] - result = stack_v3(self, level) - - return result.__finalize__(self, method="stack") - - def explode( - self, - column: IndexLabel, - ignore_index: bool = False, - ) -> DataFrame: - """ - Transform each element of a list-like to a row, replicating index values. - - Parameters - ---------- - column : IndexLabel - Column(s) to explode. - For multiple columns, specify a non-empty list with each element - be str or tuple, and all specified columns their list-like data - on same row of the frame must have matching length. - - .. versionadded:: 1.3.0 - Multi-column explode - - ignore_index : bool, default False - If True, the resulting index will be labeled 0, 1, …, n - 1. - - Returns - ------- - DataFrame - Exploded lists to rows of the subset columns; - index will be duplicated for these rows. - - Raises - ------ - ValueError : - * If columns of the frame are not unique. - * If specified columns to explode is empty list. - * If specified columns to explode have not matching count of - elements rowwise in the frame. - - See Also - -------- - DataFrame.unstack : Pivot a level of the (necessarily hierarchical) - index labels. - DataFrame.melt : Unpivot a DataFrame from wide format to long format. - Series.explode : Explode a DataFrame from list-like columns to long format. - - Notes - ----- - This routine will explode list-likes including lists, tuples, sets, - Series, and np.ndarray. The result dtype of the subset rows will - be object. Scalars will be returned unchanged, and empty list-likes will - result in a np.nan for that row. In addition, the ordering of rows in the - output will be non-deterministic when exploding sets. - - Reference :ref:`the user guide ` for more examples. - - Examples - -------- - >>> df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]], - ... 'B': 1, - ... 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]}) - >>> df - A B C - 0 [0, 1, 2] 1 [a, b, c] - 1 foo 1 NaN - 2 [] 1 [] - 3 [3, 4] 1 [d, e] - - Single-column explode. - - >>> df.explode('A') - A B C - 0 0 1 [a, b, c] - 0 1 1 [a, b, c] - 0 2 1 [a, b, c] - 1 foo 1 NaN - 2 NaN 1 [] - 3 3 1 [d, e] - 3 4 1 [d, e] - - Multi-column explode. - - >>> df.explode(list('AC')) - A B C - 0 0 1 a - 0 1 1 b - 0 2 1 c - 1 foo 1 NaN - 2 NaN 1 NaN - 3 3 1 d - 3 4 1 e - """ - if not self.columns.is_unique: - duplicate_cols = self.columns[self.columns.duplicated()].tolist() - raise ValueError( - f"DataFrame columns must be unique. 
Duplicate columns: {duplicate_cols}" - ) - - columns: list[Hashable] - if is_scalar(column) or isinstance(column, tuple): - columns = [column] - elif isinstance(column, list) and all( - is_scalar(c) or isinstance(c, tuple) for c in column - ): - if not column: - raise ValueError("column must be nonempty") - if len(column) > len(set(column)): - raise ValueError("column must be unique") - columns = column - else: - raise ValueError("column must be a scalar, tuple, or list thereof") - - df = self.reset_index(drop=True) - if len(columns) == 1: - result = df[columns[0]].explode() - else: - mylen = lambda x: len(x) if (is_list_like(x) and len(x) > 0) else 1 - counts0 = self[columns[0]].apply(mylen) - for c in columns[1:]: - if not all(counts0 == self[c].apply(mylen)): - raise ValueError("columns must have matching element counts") - result = DataFrame({c: df[c].explode() for c in columns}) - result = df.drop(columns, axis=1).join(result) - if ignore_index: - result.index = default_index(len(result)) - else: - result.index = self.index.take(result.index) - result = result.reindex(columns=self.columns, copy=False) - - return result.__finalize__(self, method="explode") - - def unstack(self, level: IndexLabel = -1, fill_value=None, sort: bool = True): - """ - Pivot a level of the (necessarily hierarchical) index labels. - - Returns a DataFrame having a new level of column labels whose inner-most level - consists of the pivoted index labels. - - If the index is not a MultiIndex, the output will be a Series - (the analogue of stack when the columns are not a MultiIndex). - - Parameters - ---------- - level : int, str, or list of these, default -1 (last level) - Level(s) of index to unstack, can pass level name. - fill_value : int, str or dict - Replace NaN with this value if the unstack produces missing values. - sort : bool, default True - Sort the level(s) in the resulting MultiIndex columns. - - Returns - ------- - Series or DataFrame - - See Also - -------- - DataFrame.pivot : Pivot a table based on column values. - DataFrame.stack : Pivot a level of the column labels (inverse operation - from `unstack`). - - Notes - ----- - Reference :ref:`the user guide ` for more examples. - - Examples - -------- - >>> index = pd.MultiIndex.from_tuples([('one', 'a'), ('one', 'b'), - ... 
('two', 'a'), ('two', 'b')]) - >>> s = pd.Series(np.arange(1.0, 5.0), index=index) - >>> s - one a 1.0 - b 2.0 - two a 3.0 - b 4.0 - dtype: float64 - - >>> s.unstack(level=-1) - a b - one 1.0 2.0 - two 3.0 4.0 - - >>> s.unstack(level=0) - one two - a 1.0 3.0 - b 2.0 4.0 - - >>> df = s.unstack(level=0) - >>> df.unstack() - one a 1.0 - b 2.0 - two a 3.0 - b 4.0 - dtype: float64 - """ - from pandas.core.reshape.reshape import unstack - - result = unstack(self, level, fill_value, sort) - - return result.__finalize__(self, method="unstack") - - @Appender(_shared_docs["melt"] % {"caller": "df.melt(", "other": "melt"}) - def melt( - self, - id_vars=None, - value_vars=None, - var_name=None, - value_name: Hashable = "value", - col_level: Level | None = None, - ignore_index: bool = True, - ) -> DataFrame: - return melt( - self, - id_vars=id_vars, - value_vars=value_vars, - var_name=var_name, - value_name=value_name, - col_level=col_level, - ignore_index=ignore_index, - ).__finalize__(self, method="melt") - - # ---------------------------------------------------------------------- - # Time series-related - - @doc( - Series.diff, - klass="DataFrame", - extra_params="axis : {0 or 'index', 1 or 'columns'}, default 0\n " - "Take difference over rows (0) or columns (1).\n", - other_klass="Series", - examples=dedent( - """ - Difference with previous row - - >>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6], - ... 'b': [1, 1, 2, 3, 5, 8], - ... 'c': [1, 4, 9, 16, 25, 36]}) - >>> df - a b c - 0 1 1 1 - 1 2 1 4 - 2 3 2 9 - 3 4 3 16 - 4 5 5 25 - 5 6 8 36 - - >>> df.diff() - a b c - 0 NaN NaN NaN - 1 1.0 0.0 3.0 - 2 1.0 1.0 5.0 - 3 1.0 1.0 7.0 - 4 1.0 2.0 9.0 - 5 1.0 3.0 11.0 - - Difference with previous column - - >>> df.diff(axis=1) - a b c - 0 NaN 0 0 - 1 NaN -1 3 - 2 NaN -1 7 - 3 NaN -1 13 - 4 NaN 0 20 - 5 NaN 2 28 - - Difference with 3rd previous row - - >>> df.diff(periods=3) - a b c - 0 NaN NaN NaN - 1 NaN NaN NaN - 2 NaN NaN NaN - 3 3.0 2.0 15.0 - 4 3.0 4.0 21.0 - 5 3.0 6.0 27.0 - - Difference with following row - - >>> df.diff(periods=-1) - a b c - 0 -1.0 0.0 -3.0 - 1 -1.0 -1.0 -5.0 - 2 -1.0 -1.0 -7.0 - 3 -1.0 -2.0 -9.0 - 4 -1.0 -3.0 -11.0 - 5 NaN NaN NaN - - Overflow in input dtype - - >>> df = pd.DataFrame({'a': [1, 0]}, dtype=np.uint8) - >>> df.diff() - a - 0 NaN - 1 255.0""" - ), - ) - def diff(self, periods: int = 1, axis: Axis = 0) -> DataFrame: - if not lib.is_integer(periods): - if not (is_float(periods) and periods.is_integer()): - raise ValueError("periods must be an integer") - periods = int(periods) - - axis = self._get_axis_number(axis) - if axis == 1: - if periods != 0: - # in the periods == 0 case, this is equivalent diff of 0 periods - # along axis=0, and the Manager method may be somewhat more - # performant, so we dispatch in that case. - return self - self.shift(periods, axis=axis) - # With periods=0 this is equivalent to a diff with axis=0 - axis = 0 - - new_data = self._mgr.diff(n=periods) - res_df = self._constructor_from_mgr(new_data, axes=new_data.axes) - return res_df.__finalize__(self, "diff") - - # ---------------------------------------------------------------------- - # Function application - - def _gotitem( - self, - key: IndexLabel, - ndim: int, - subset: DataFrame | Series | None = None, - ) -> DataFrame | Series: - """ - Sub-classes to define. Return a sliced object. 
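One ``unstack`` parameter described above but not exercised in its examples is ``fill_value``; a minimal sketch on an illustrative Series:

import pandas as pd

index = pd.MultiIndex.from_tuples([("one", "a"), ("one", "b"), ("two", "a")])
s = pd.Series([1.0, 2.0, 3.0], index=index)

# The ("two", "b") combination does not exist, so unstacking normally leaves
# a NaN in that cell; fill_value substitutes a chosen value instead.
print(s.unstack())
print(s.unstack(fill_value=0))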
- - Parameters - ---------- - key : string / list of selections - ndim : {1, 2} - requested ndim of result - subset : object, default None - subset to act on - """ - if subset is None: - subset = self - elif subset.ndim == 1: # is Series - return subset - - # TODO: _shallow_copy(subset)? - return subset[key] - - _agg_see_also_doc = dedent( - """ - See Also - -------- - DataFrame.apply : Perform any type of operations. - DataFrame.transform : Perform transformation type operations. - core.groupby.GroupBy : Perform operations over groups. - core.resample.Resampler : Perform operations over resampled bins. - core.window.Rolling : Perform operations over rolling window. - core.window.Expanding : Perform operations over expanding window. - core.window.ExponentialMovingWindow : Perform operation over exponential weighted - window. - """ - ) - - _agg_examples_doc = dedent( - """ - Examples - -------- - >>> df = pd.DataFrame([[1, 2, 3], - ... [4, 5, 6], - ... [7, 8, 9], - ... [np.nan, np.nan, np.nan]], - ... columns=['A', 'B', 'C']) - - Aggregate these functions over the rows. - - >>> df.agg(['sum', 'min']) - A B C - sum 12.0 15.0 18.0 - min 1.0 2.0 3.0 - - Different aggregations per column. - - >>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']}) - A B - sum 12.0 NaN - min 1.0 2.0 - max NaN 8.0 - - Aggregate different functions over the columns and rename the index of the resulting - DataFrame. - - >>> df.agg(x=('A', 'max'), y=('B', 'min'), z=('C', 'mean')) - A B C - x 7.0 NaN NaN - y NaN 2.0 NaN - z NaN NaN 6.0 - - Aggregate over the columns. - - >>> df.agg("mean", axis="columns") - 0 2.0 - 1 5.0 - 2 8.0 - 3 NaN - dtype: float64 - """ - ) - - @doc( - _shared_docs["aggregate"], - klass=_shared_doc_kwargs["klass"], - axis=_shared_doc_kwargs["axis"], - see_also=_agg_see_also_doc, - examples=_agg_examples_doc, - ) - def aggregate(self, func=None, axis: Axis = 0, *args, **kwargs): - from pandas.core.apply import frame_apply - - axis = self._get_axis_number(axis) - - op = frame_apply(self, func=func, axis=axis, args=args, kwargs=kwargs) - result = op.agg() - result = reconstruct_and_relabel_result(result, func, **kwargs) - return result - - agg = aggregate - - @doc( - _shared_docs["transform"], - klass=_shared_doc_kwargs["klass"], - axis=_shared_doc_kwargs["axis"], - ) - def transform( - self, func: AggFuncType, axis: Axis = 0, *args, **kwargs - ) -> DataFrame: - from pandas.core.apply import frame_apply - - op = frame_apply(self, func=func, axis=axis, args=args, kwargs=kwargs) - result = op.transform() - assert isinstance(result, DataFrame) - return result - - def apply( - self, - func: AggFuncType, - axis: Axis = 0, - raw: bool = False, - result_type: Literal["expand", "reduce", "broadcast"] | None = None, - args=(), - by_row: Literal[False, "compat"] = "compat", - **kwargs, - ): - """ - Apply a function along an axis of the DataFrame. - - Objects passed to the function are Series objects whose index is - either the DataFrame's index (``axis=0``) or the DataFrame's columns - (``axis=1``). By default (``result_type=None``), the final return type - is inferred from the return type of the applied function. Otherwise, - it depends on the `result_type` argument. - - Parameters - ---------- - func : function - Function to apply to each column or row. - axis : {0 or 'index', 1 or 'columns'}, default 0 - Axis along which the function is applied: - - * 0 or 'index': apply function to each column. - * 1 or 'columns': apply function to each row. 
- - raw : bool, default False - Determines if row or column is passed as a Series or ndarray object: - - * ``False`` : passes each row or column as a Series to the - function. - * ``True`` : the passed function will receive ndarray objects - instead. - If you are just applying a NumPy reduction function this will - achieve much better performance. - - result_type : {'expand', 'reduce', 'broadcast', None}, default None - These only act when ``axis=1`` (columns): - - * 'expand' : list-like results will be turned into columns. - * 'reduce' : returns a Series if possible rather than expanding - list-like results. This is the opposite of 'expand'. - * 'broadcast' : results will be broadcast to the original shape - of the DataFrame, the original index and columns will be - retained. - - The default behaviour (None) depends on the return value of the - applied function: list-like results will be returned as a Series - of those. However if the apply function returns a Series these - are expanded to columns. - args : tuple - Positional arguments to pass to `func` in addition to the - array/series. - by_row : False or "compat", default "compat" - Only has an effect when ``func`` is a listlike or dictlike of funcs - and the func isn't a string. - If "compat", will if possible first translate the func into pandas - methods (e.g. ``Series().apply(np.sum)`` will be translated to - ``Series().sum()``). If that doesn't work, will try call to apply again with - ``by_row=True`` and if that fails, will call apply again with - ``by_row=False`` (backward compatible). - If False, the funcs will be passed the whole Series at once. - - .. versionadded:: 2.1.0 - **kwargs - Additional keyword arguments to pass as keywords arguments to - `func`. - - Returns - ------- - Series or DataFrame - Result of applying ``func`` along the given axis of the - DataFrame. - - See Also - -------- - DataFrame.map: For elementwise operations. - DataFrame.aggregate: Only perform aggregating type operations. - DataFrame.transform: Only perform transforming type operations. - - Notes - ----- - Functions that mutate the passed object can produce unexpected - behavior or errors and are not supported. See :ref:`gotchas.udf-mutation` - for more details. - - Examples - -------- - >>> df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B']) - >>> df - A B - 0 4 9 - 1 4 9 - 2 4 9 - - Using a numpy universal function (in this case the same as - ``np.sqrt(df)``): - - >>> df.apply(np.sqrt) - A B - 0 2.0 3.0 - 1 2.0 3.0 - 2 2.0 3.0 - - Using a reducing function on either axis - - >>> df.apply(np.sum, axis=0) - A 12 - B 27 - dtype: int64 - - >>> df.apply(np.sum, axis=1) - 0 13 - 1 13 - 2 13 - dtype: int64 - - Returning a list-like will result in a Series - - >>> df.apply(lambda x: [1, 2], axis=1) - 0 [1, 2] - 1 [1, 2] - 2 [1, 2] - dtype: object - - Passing ``result_type='expand'`` will expand list-like results - to columns of a Dataframe - - >>> df.apply(lambda x: [1, 2], axis=1, result_type='expand') - 0 1 - 0 1 2 - 1 1 2 - 2 1 2 - - Returning a Series inside the function is similar to passing - ``result_type='expand'``. The resulting column names - will be the Series index. - - >>> df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1) - foo bar - 0 1 2 - 1 1 2 - 2 1 2 - - Passing ``result_type='broadcast'`` will ensure the same shape - result, whether list-like or scalar is returned by the function, - and broadcast it along the axis. The resulting column names will - be the originals. 
- - >>> df.apply(lambda x: [1, 2], axis=1, result_type='broadcast') - A B - 0 1 2 - 1 1 2 - 2 1 2 - """ - from pandas.core.apply import frame_apply - - op = frame_apply( - self, - func=func, - axis=axis, - raw=raw, - result_type=result_type, - by_row=by_row, - args=args, - kwargs=kwargs, - ) - return op.apply().__finalize__(self, method="apply") - - def map( - self, func: PythonFuncType, na_action: str | None = None, **kwargs - ) -> DataFrame: - """ - Apply a function to a Dataframe elementwise. - - .. versionadded:: 2.1.0 - - DataFrame.applymap was deprecated and renamed to DataFrame.map. - - This method applies a function that accepts and returns a scalar - to every element of a DataFrame. - - Parameters - ---------- - func : callable - Python function, returns a single value from a single value. - na_action : {None, 'ignore'}, default None - If 'ignore', propagate NaN values, without passing them to func. - **kwargs - Additional keyword arguments to pass as keywords arguments to - `func`. - - Returns - ------- - DataFrame - Transformed DataFrame. - - See Also - -------- - DataFrame.apply : Apply a function along input axis of DataFrame. - DataFrame.replace: Replace values given in `to_replace` with `value`. - Series.map : Apply a function elementwise on a Series. - - Examples - -------- - >>> df = pd.DataFrame([[1, 2.12], [3.356, 4.567]]) - >>> df - 0 1 - 0 1.000 2.120 - 1 3.356 4.567 - - >>> df.map(lambda x: len(str(x))) - 0 1 - 0 3 4 - 1 5 5 - - Like Series.map, NA values can be ignored: - - >>> df_copy = df.copy() - >>> df_copy.iloc[0, 0] = pd.NA - >>> df_copy.map(lambda x: len(str(x)), na_action='ignore') - 0 1 - 0 NaN 4 - 1 5.0 5 - - Note that a vectorized version of `func` often exists, which will - be much faster. You could square each number elementwise. - - >>> df.map(lambda x: x**2) - 0 1 - 0 1.000000 4.494400 - 1 11.262736 20.857489 - - But it's better to avoid map in that case. - - >>> df ** 2 - 0 1 - 0 1.000000 4.494400 - 1 11.262736 20.857489 - """ - if na_action not in {"ignore", None}: - raise ValueError( - f"na_action must be 'ignore' or None. Got {repr(na_action)}" - ) - - if self.empty: - return self.copy() - - func = functools.partial(func, **kwargs) - - def infer(x): - return x._map_values(func, na_action=na_action) - - return self.apply(infer).__finalize__(self, "map") - - def applymap( - self, func: PythonFuncType, na_action: NaAction | None = None, **kwargs - ) -> DataFrame: - """ - Apply a function to a Dataframe elementwise. - - .. deprecated:: 2.1.0 - - DataFrame.applymap has been deprecated. Use DataFrame.map instead. - - This method applies a function that accepts and returns a scalar - to every element of a DataFrame. - - Parameters - ---------- - func : callable - Python function, returns a single value from a single value. - na_action : {None, 'ignore'}, default None - If 'ignore', propagate NaN values, without passing them to func. - **kwargs - Additional keyword arguments to pass as keywords arguments to - `func`. - - Returns - ------- - DataFrame - Transformed DataFrame. - - See Also - -------- - DataFrame.apply : Apply a function along input axis of DataFrame. - DataFrame.map : Apply a function along input axis of DataFrame. - DataFrame.replace: Replace values given in `to_replace` with `value`. - - Examples - -------- - >>> df = pd.DataFrame([[1, 2.12], [3.356, 4.567]]) - >>> df - 0 1 - 0 1.000 2.120 - 1 3.356 4.567 - - >>> df.map(lambda x: len(str(x))) - 0 1 - 0 3 4 - 1 5 5 - """ - warnings.warn( - "DataFrame.applymap has been deprecated. 
Use DataFrame.map instead.", - FutureWarning, - stacklevel=find_stack_level(), - ) - return self.map(func, na_action=na_action, **kwargs) - - # ---------------------------------------------------------------------- - # Merging / joining methods - - def _append( - self, - other, - ignore_index: bool = False, - verify_integrity: bool = False, - sort: bool = False, - ) -> DataFrame: - if isinstance(other, (Series, dict)): - if isinstance(other, dict): - if not ignore_index: - raise TypeError("Can only append a dict if ignore_index=True") - other = Series(other) - if other.name is None and not ignore_index: - raise TypeError( - "Can only append a Series if ignore_index=True " - "or if the Series has a name" - ) - - index = Index( - [other.name], - name=self.index.names - if isinstance(self.index, MultiIndex) - else self.index.name, - ) - row_df = other.to_frame().T - # infer_objects is needed for - # test_append_empty_frame_to_series_with_dateutil_tz - other = row_df.infer_objects(copy=False).rename_axis( - index.names, copy=False - ) - elif isinstance(other, list): - if not other: - pass - elif not isinstance(other[0], DataFrame): - other = DataFrame(other) - if self.index.name is not None and not ignore_index: - other.index.name = self.index.name - - from pandas.core.reshape.concat import concat - - if isinstance(other, (list, tuple)): - to_concat = [self, *other] - else: - to_concat = [self, other] - - result = concat( - to_concat, - ignore_index=ignore_index, - verify_integrity=verify_integrity, - sort=sort, - ) - return result.__finalize__(self, method="append") - - def join( - self, - other: DataFrame | Series | Iterable[DataFrame | Series], - on: IndexLabel | None = None, - how: MergeHow = "left", - lsuffix: str = "", - rsuffix: str = "", - sort: bool = False, - validate: JoinValidate | None = None, - ) -> DataFrame: - """ - Join columns of another DataFrame. - - Join columns with `other` DataFrame either on index or on a key - column. Efficiently join multiple DataFrame objects by index at once by - passing a list. - - Parameters - ---------- - other : DataFrame, Series, or a list containing any combination of them - Index should be similar to one of the columns in this one. If a - Series is passed, its name attribute must be set, and that will be - used as the column name in the resulting joined DataFrame. - on : str, list of str, or array-like, optional - Column or index level name(s) in the caller to join on the index - in `other`, otherwise joins index-on-index. If multiple - values given, the `other` DataFrame must have a MultiIndex. Can - pass an array as the join key if it is not already contained in - the calling DataFrame. Like an Excel VLOOKUP operation. - how : {'left', 'right', 'outer', 'inner', 'cross'}, default 'left' - How to handle the operation of the two objects. - - * left: use calling frame's index (or column if on is specified) - * right: use `other`'s index. - * outer: form union of calling frame's index (or column if on is - specified) with `other`'s index, and sort it lexicographically. - * inner: form intersection of calling frame's index (or column if - on is specified) with `other`'s index, preserving the order - of the calling's one. - * cross: creates the cartesian product from both frames, preserves the order - of the left keys. - - .. versionadded:: 1.2.0 - - lsuffix : str, default '' - Suffix to use from left frame's overlapping columns. - rsuffix : str, default '' - Suffix to use from right frame's overlapping columns. 
- sort : bool, default False - Order result DataFrame lexicographically by the join key. If False, - the order of the join key depends on the join type (how keyword). - validate : str, optional - If specified, checks if join is of specified type. - - * "one_to_one" or "1:1": check if join keys are unique in both left - and right datasets. - * "one_to_many" or "1:m": check if join keys are unique in left dataset. - * "many_to_one" or "m:1": check if join keys are unique in right dataset. - * "many_to_many" or "m:m": allowed, but does not result in checks. - - .. versionadded:: 1.5.0 - - Returns - ------- - DataFrame - A dataframe containing columns from both the caller and `other`. - - See Also - -------- - DataFrame.merge : For column(s)-on-column(s) operations. - - Notes - ----- - Parameters `on`, `lsuffix`, and `rsuffix` are not supported when - passing a list of `DataFrame` objects. - - Examples - -------- - >>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'], - ... 'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']}) - - >>> df - key A - 0 K0 A0 - 1 K1 A1 - 2 K2 A2 - 3 K3 A3 - 4 K4 A4 - 5 K5 A5 - - >>> other = pd.DataFrame({'key': ['K0', 'K1', 'K2'], - ... 'B': ['B0', 'B1', 'B2']}) - - >>> other - key B - 0 K0 B0 - 1 K1 B1 - 2 K2 B2 - - Join DataFrames using their indexes. - - >>> df.join(other, lsuffix='_caller', rsuffix='_other') - key_caller A key_other B - 0 K0 A0 K0 B0 - 1 K1 A1 K1 B1 - 2 K2 A2 K2 B2 - 3 K3 A3 NaN NaN - 4 K4 A4 NaN NaN - 5 K5 A5 NaN NaN - - If we want to join using the key columns, we need to set key to be - the index in both `df` and `other`. The joined DataFrame will have - key as its index. - - >>> df.set_index('key').join(other.set_index('key')) - A B - key - K0 A0 B0 - K1 A1 B1 - K2 A2 B2 - K3 A3 NaN - K4 A4 NaN - K5 A5 NaN - - Another option to join using the key columns is to use the `on` - parameter. DataFrame.join always uses `other`'s index but we can use - any column in `df`. This method preserves the original DataFrame's - index in the result. - - >>> df.join(other.set_index('key'), on='key') - key A B - 0 K0 A0 B0 - 1 K1 A1 B1 - 2 K2 A2 B2 - 3 K3 A3 NaN - 4 K4 A4 NaN - 5 K5 A5 NaN - - Using non-unique key values shows how they are matched. - - >>> df = pd.DataFrame({'key': ['K0', 'K1', 'K1', 'K3', 'K0', 'K1'], - ... 
'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']}) - - >>> df - key A - 0 K0 A0 - 1 K1 A1 - 2 K1 A2 - 3 K3 A3 - 4 K0 A4 - 5 K1 A5 - - >>> df.join(other.set_index('key'), on='key', validate='m:1') - key A B - 0 K0 A0 B0 - 1 K1 A1 B1 - 2 K1 A2 B1 - 3 K3 A3 NaN - 4 K0 A4 B0 - 5 K1 A5 B1 - """ - from pandas.core.reshape.concat import concat - from pandas.core.reshape.merge import merge - - if isinstance(other, Series): - if other.name is None: - raise ValueError("Other Series must have a name") - other = DataFrame({other.name: other}) - - if isinstance(other, DataFrame): - if how == "cross": - return merge( - self, - other, - how=how, - on=on, - suffixes=(lsuffix, rsuffix), - sort=sort, - validate=validate, - ) - return merge( - self, - other, - left_on=on, - how=how, - left_index=on is None, - right_index=True, - suffixes=(lsuffix, rsuffix), - sort=sort, - validate=validate, - ) - else: - if on is not None: - raise ValueError( - "Joining multiple DataFrames only supported for joining on index" - ) - - if rsuffix or lsuffix: - raise ValueError( - "Suffixes not supported when joining multiple DataFrames" - ) - - # Mypy thinks the RHS is a - # "Union[DataFrame, Series, Iterable[Union[DataFrame, Series]]]" whereas - # the LHS is an "Iterable[DataFrame]", but in reality both types are - # "Iterable[Union[DataFrame, Series]]" due to the if statements - frames = [cast("DataFrame | Series", self)] + list(other) - - can_concat = all(df.index.is_unique for df in frames) - - # join indexes only using concat - if can_concat: - if how == "left": - res = concat( - frames, axis=1, join="outer", verify_integrity=True, sort=sort - ) - return res.reindex(self.index, copy=False) - else: - return concat( - frames, axis=1, join=how, verify_integrity=True, sort=sort - ) - - joined = frames[0] - - for frame in frames[1:]: - joined = merge( - joined, - frame, - how=how, - left_index=True, - right_index=True, - validate=validate, - ) - - return joined - - @Substitution("") - @Appender(_merge_doc, indents=2) - def merge( - self, - right: DataFrame | Series, - how: MergeHow = "inner", - on: IndexLabel | None = None, - left_on: IndexLabel | None = None, - right_on: IndexLabel | None = None, - left_index: bool = False, - right_index: bool = False, - sort: bool = False, - suffixes: Suffixes = ("_x", "_y"), - copy: bool | None = None, - indicator: str | bool = False, - validate: MergeValidate | None = None, - ) -> DataFrame: - from pandas.core.reshape.merge import merge - - return merge( - self, - right, - how=how, - on=on, - left_on=left_on, - right_on=right_on, - left_index=left_index, - right_index=right_index, - sort=sort, - suffixes=suffixes, - copy=copy, - indicator=indicator, - validate=validate, - ) - - def round( - self, decimals: int | dict[IndexLabel, int] | Series = 0, *args, **kwargs - ) -> DataFrame: - """ - Round a DataFrame to a variable number of decimal places. - - Parameters - ---------- - decimals : int, dict, Series - Number of decimal places to round each column to. If an int is - given, round each column to the same number of places. - Otherwise dict and Series round to variable numbers of places. - Column names should be in the keys if `decimals` is a - dict-like, or in the index if `decimals` is a Series. Any - columns not included in `decimals` will be left as is. Elements - of `decimals` which are not columns of the input will be - ignored. - *args - Additional keywords have no effect but might be accepted for - compatibility with numpy. 
- **kwargs - Additional keywords have no effect but might be accepted for - compatibility with numpy. - - Returns - ------- - DataFrame - A DataFrame with the affected columns rounded to the specified - number of decimal places. - - See Also - -------- - numpy.around : Round a numpy array to the given number of decimals. - Series.round : Round a Series to the given number of decimals. - - Examples - -------- - >>> df = pd.DataFrame([(.21, .32), (.01, .67), (.66, .03), (.21, .18)], - ... columns=['dogs', 'cats']) - >>> df - dogs cats - 0 0.21 0.32 - 1 0.01 0.67 - 2 0.66 0.03 - 3 0.21 0.18 - - By providing an integer each column is rounded to the same number - of decimal places - - >>> df.round(1) - dogs cats - 0 0.2 0.3 - 1 0.0 0.7 - 2 0.7 0.0 - 3 0.2 0.2 - - With a dict, the number of places for specific columns can be - specified with the column names as key and the number of decimal - places as value - - >>> df.round({'dogs': 1, 'cats': 0}) - dogs cats - 0 0.2 0.0 - 1 0.0 1.0 - 2 0.7 0.0 - 3 0.2 0.0 - - Using a Series, the number of places for specific columns can be - specified with the column names as index and the number of - decimal places as value - - >>> decimals = pd.Series([0, 1], index=['cats', 'dogs']) - >>> df.round(decimals) - dogs cats - 0 0.2 0.0 - 1 0.0 1.0 - 2 0.7 0.0 - 3 0.2 0.0 - """ - from pandas.core.reshape.concat import concat - - def _dict_round(df: DataFrame, decimals): - for col, vals in df.items(): - try: - yield _series_round(vals, decimals[col]) - except KeyError: - yield vals - - def _series_round(ser: Series, decimals: int) -> Series: - if is_integer_dtype(ser.dtype) or is_float_dtype(ser.dtype): - return ser.round(decimals) - return ser - - nv.validate_round(args, kwargs) - - if isinstance(decimals, (dict, Series)): - if isinstance(decimals, Series) and not decimals.index.is_unique: - raise ValueError("Index of decimals must be unique") - if is_dict_like(decimals) and not all( - is_integer(value) for _, value in decimals.items() - ): - raise TypeError("Values in decimals must be integers") - new_cols = list(_dict_round(self, decimals)) - elif is_integer(decimals): - # Dispatch to Block.round - # Argument "decimals" to "round" of "BaseBlockManager" has incompatible - # type "Union[int, integer[Any]]"; expected "int" - new_mgr = self._mgr.round( - decimals=decimals, # type: ignore[arg-type] - using_cow=using_copy_on_write(), - ) - return self._constructor_from_mgr(new_mgr, axes=new_mgr.axes).__finalize__( - self, method="round" - ) - else: - raise TypeError("decimals must be an integer, a dict-like or a Series") - - if new_cols is not None and len(new_cols) > 0: - return self._constructor( - concat(new_cols, axis=1), index=self.index, columns=self.columns - ).__finalize__(self, method="round") - else: - return self.copy(deep=False) - - # ---------------------------------------------------------------------- - # Statistical methods, etc. - - def corr( - self, - method: CorrelationMethod = "pearson", - min_periods: int = 1, - numeric_only: bool = False, - ) -> DataFrame: - """ - Compute pairwise correlation of columns, excluding NA/null values. - - Parameters - ---------- - method : {'pearson', 'kendall', 'spearman'} or callable - Method of correlation: - - * pearson : standard correlation coefficient - * kendall : Kendall Tau correlation coefficient - * spearman : Spearman rank correlation - * callable: callable with input two 1d ndarrays - and returning a float. 
Note that the returned matrix from corr - will have 1 along the diagonals and will be symmetric - regardless of the callable's behavior. - min_periods : int, optional - Minimum number of observations required per pair of columns - to have a valid result. Currently only available for Pearson - and Spearman correlation. - numeric_only : bool, default False - Include only `float`, `int` or `boolean` data. - - .. versionadded:: 1.5.0 - - .. versionchanged:: 2.0.0 - The default value of ``numeric_only`` is now ``False``. - - Returns - ------- - DataFrame - Correlation matrix. - - See Also - -------- - DataFrame.corrwith : Compute pairwise correlation with another - DataFrame or Series. - Series.corr : Compute the correlation between two Series. - - Notes - ----- - Pearson, Kendall and Spearman correlation are currently computed using pairwise complete observations. - - * `Pearson correlation coefficient `_ - * `Kendall rank correlation coefficient `_ - * `Spearman's rank correlation coefficient `_ - - Examples - -------- - >>> def histogram_intersection(a, b): - ... v = np.minimum(a, b).sum().round(decimals=1) - ... return v - >>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)], - ... columns=['dogs', 'cats']) - >>> df.corr(method=histogram_intersection) - dogs cats - dogs 1.0 0.3 - cats 0.3 1.0 - - >>> df = pd.DataFrame([(1, 1), (2, np.nan), (np.nan, 3), (4, 4)], - ... columns=['dogs', 'cats']) - >>> df.corr(min_periods=3) - dogs cats - dogs 1.0 NaN - cats NaN 1.0 - """ # noqa: E501 - data = self._get_numeric_data() if numeric_only else self - cols = data.columns - idx = cols.copy() - mat = data.to_numpy(dtype=float, na_value=np.nan, copy=False) - - if method == "pearson": - correl = libalgos.nancorr(mat, minp=min_periods) - elif method == "spearman": - correl = libalgos.nancorr_spearman(mat, minp=min_periods) - elif method == "kendall" or callable(method): - if min_periods is None: - min_periods = 1 - mat = mat.T - corrf = nanops.get_corr_func(method) - K = len(cols) - correl = np.empty((K, K), dtype=float) - mask = np.isfinite(mat) - for i, ac in enumerate(mat): - for j, bc in enumerate(mat): - if i > j: - continue - - valid = mask[i] & mask[j] - if valid.sum() < min_periods: - c = np.nan - elif i == j: - c = 1.0 - elif not valid.all(): - c = corrf(ac[valid], bc[valid]) - else: - c = corrf(ac, bc) - correl[i, j] = c - correl[j, i] = c - else: - raise ValueError( - "method must be either 'pearson', " - "'spearman', 'kendall', or a callable, " - f"'{method}' was supplied" - ) - - result = self._constructor(correl, index=idx, columns=cols, copy=False) - return result.__finalize__(self, method="corr") - - def cov( - self, - min_periods: int | None = None, - ddof: int | None = 1, - numeric_only: bool = False, - ) -> DataFrame: - """ - Compute pairwise covariance of columns, excluding NA/null values. - - Compute the pairwise covariance among the series of a DataFrame. - The returned data frame is the `covariance matrix - `__ of the columns - of the DataFrame. - - Both NA and null values are automatically excluded from the - calculation. (See the note below about bias from missing values.) - A threshold can be set for the minimum number of - observations for each value created. Comparisons with observations - below this threshold will be returned as ``NaN``. - - This method is generally used for the analysis of time series data to - understand the relationship between different measures - across time. 
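Because ``corr`` above and ``cov`` here are both computed pairwise, Pearson correlation is simply covariance rescaled by the two column standard deviations. A minimal numeric check on a small illustrative frame (the same dogs/cats values used in the covariance examples further down):

import numpy as np
import pandas as pd

df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)], columns=["dogs", "cats"])

cov = df.cov()                       # sample covariance, ddof=1 by default
std = df.std()                       # sample standard deviation, ddof=1
rescaled = cov / np.outer(std, std)  # cov[i, j] / (std[i] * std[j])

print(np.allclose(rescaled, df.corr()))  # True when there are no NaNs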
- - Parameters - ---------- - min_periods : int, optional - Minimum number of observations required per pair of columns - to have a valid result. - - ddof : int, default 1 - Delta degrees of freedom. The divisor used in calculations - is ``N - ddof``, where ``N`` represents the number of elements. - This argument is applicable only when no ``nan`` is in the dataframe. - - numeric_only : bool, default False - Include only `float`, `int` or `boolean` data. - - .. versionadded:: 1.5.0 - - .. versionchanged:: 2.0.0 - The default value of ``numeric_only`` is now ``False``. - - Returns - ------- - DataFrame - The covariance matrix of the series of the DataFrame. - - See Also - -------- - Series.cov : Compute covariance with another Series. - core.window.ewm.ExponentialMovingWindow.cov : Exponential weighted sample - covariance. - core.window.expanding.Expanding.cov : Expanding sample covariance. - core.window.rolling.Rolling.cov : Rolling sample covariance. - - Notes - ----- - Returns the covariance matrix of the DataFrame's time series. - The covariance is normalized by N-ddof. - - For DataFrames that have Series that are missing data (assuming that - data is `missing at random - `__) - the returned covariance matrix will be an unbiased estimate - of the variance and covariance between the member Series. - - However, for many applications this estimate may not be acceptable - because the estimate covariance matrix is not guaranteed to be positive - semi-definite. This could lead to estimate correlations having - absolute values which are greater than one, and/or a non-invertible - covariance matrix. See `Estimation of covariance matrices - `__ for more details. - - Examples - -------- - >>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)], - ... columns=['dogs', 'cats']) - >>> df.cov() - dogs cats - dogs 0.666667 -1.000000 - cats -1.000000 1.666667 - - >>> np.random.seed(42) - >>> df = pd.DataFrame(np.random.randn(1000, 5), - ... columns=['a', 'b', 'c', 'd', 'e']) - >>> df.cov() - a b c d e - a 0.998438 -0.020161 0.059277 -0.008943 0.014144 - b -0.020161 1.059352 -0.008543 -0.024738 0.009826 - c 0.059277 -0.008543 1.010670 -0.001486 -0.000271 - d -0.008943 -0.024738 -0.001486 0.921297 -0.013692 - e 0.014144 0.009826 -0.000271 -0.013692 0.977795 - - **Minimum number of periods** - - This method also supports an optional ``min_periods`` keyword - that specifies the required minimum number of non-NA observations for - each column pair in order to have a valid result: - - >>> np.random.seed(42) - >>> df = pd.DataFrame(np.random.randn(20, 3), - ... 
columns=['a', 'b', 'c']) - >>> df.loc[df.index[:5], 'a'] = np.nan - >>> df.loc[df.index[5:10], 'b'] = np.nan - >>> df.cov(min_periods=12) - a b c - a 0.316741 NaN -0.150812 - b NaN 1.248003 0.191417 - c -0.150812 0.191417 0.895202 - """ - data = self._get_numeric_data() if numeric_only else self - cols = data.columns - idx = cols.copy() - mat = data.to_numpy(dtype=float, na_value=np.nan, copy=False) - - if notna(mat).all(): - if min_periods is not None and min_periods > len(mat): - base_cov = np.empty((mat.shape[1], mat.shape[1])) - base_cov.fill(np.nan) - else: - base_cov = np.cov(mat.T, ddof=ddof) - base_cov = base_cov.reshape((len(cols), len(cols))) - else: - base_cov = libalgos.nancorr(mat, cov=True, minp=min_periods) - - result = self._constructor(base_cov, index=idx, columns=cols, copy=False) - return result.__finalize__(self, method="cov") - - def corrwith( - self, - other: DataFrame | Series, - axis: Axis = 0, - drop: bool = False, - method: CorrelationMethod = "pearson", - numeric_only: bool = False, - ) -> Series: - """ - Compute pairwise correlation. - - Pairwise correlation is computed between rows or columns of - DataFrame with rows or columns of Series or DataFrame. DataFrames - are first aligned along both axes before computing the - correlations. - - Parameters - ---------- - other : DataFrame, Series - Object with which to compute correlations. - axis : {0 or 'index', 1 or 'columns'}, default 0 - The axis to use. 0 or 'index' to compute row-wise, 1 or 'columns' for - column-wise. - drop : bool, default False - Drop missing indices from result. - method : {'pearson', 'kendall', 'spearman'} or callable - Method of correlation: - - * pearson : standard correlation coefficient - * kendall : Kendall Tau correlation coefficient - * spearman : Spearman rank correlation - * callable: callable with input two 1d ndarrays - and returning a float. - - numeric_only : bool, default False - Include only `float`, `int` or `boolean` data. - - .. versionadded:: 1.5.0 - - .. versionchanged:: 2.0.0 - The default value of ``numeric_only`` is now ``False``. - - Returns - ------- - Series - Pairwise correlations. - - See Also - -------- - DataFrame.corr : Compute pairwise correlation of columns. 
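When ``other`` is a Series, ``corrwith`` correlates it against each column of the frame (or each row with ``axis=1``). A minimal sketch on illustrative data, checked against doing the same thing column by column:

import numpy as np
import pandas as pd

df = pd.DataFrame({"one": [1.0, 2.0, 3.0, 4.0],
                   "two": [4.0, 3.0, 2.0, 1.0]})
target = pd.Series([1.0, 2.0, 4.0, 3.0])

result = df.corrwith(target)                      # one Pearson r per column
manual = df.apply(lambda col: col.corr(target))   # same thing, spelled out

print(result)
print(np.allclose(result, manual))  # True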
- - Examples - -------- - >>> index = ["a", "b", "c", "d", "e"] - >>> columns = ["one", "two", "three", "four"] - >>> df1 = pd.DataFrame(np.arange(20).reshape(5, 4), index=index, columns=columns) - >>> df2 = pd.DataFrame(np.arange(16).reshape(4, 4), index=index[:4], columns=columns) - >>> df1.corrwith(df2) - one 1.0 - two 1.0 - three 1.0 - four 1.0 - dtype: float64 - - >>> df2.corrwith(df1, axis=1) - a 1.0 - b 1.0 - c 1.0 - d 1.0 - e NaN - dtype: float64 - """ # noqa: E501 - axis = self._get_axis_number(axis) - this = self._get_numeric_data() if numeric_only else self - - if isinstance(other, Series): - return this.apply(lambda x: other.corr(x, method=method), axis=axis) - - if numeric_only: - other = other._get_numeric_data() - left, right = this.align(other, join="inner", copy=False) - - if axis == 1: - left = left.T - right = right.T - - if method == "pearson": - # mask missing values - left = left + right * 0 - right = right + left * 0 - - # demeaned data - ldem = left - left.mean(numeric_only=numeric_only) - rdem = right - right.mean(numeric_only=numeric_only) - - num = (ldem * rdem).sum() - dom = ( - (left.count() - 1) - * left.std(numeric_only=numeric_only) - * right.std(numeric_only=numeric_only) - ) - - correl = num / dom - - elif method in ["kendall", "spearman"] or callable(method): - - def c(x): - return nanops.nancorr(x[0], x[1], method=method) - - correl = self._constructor_sliced( - map(c, zip(left.values.T, right.values.T)), - index=left.columns, - copy=False, - ) - - else: - raise ValueError( - f"Invalid method {method} was passed, " - "valid methods are: 'pearson', 'kendall', " - "'spearman', or callable" - ) - - if not drop: - # Find non-matching labels along the given axis - # and append missing correlations (GH 22375) - raxis: AxisInt = 1 if axis == 0 else 0 - result_index = this._get_axis(raxis).union(other._get_axis(raxis)) - idx_diff = result_index.difference(correl.index) - - if len(idx_diff) > 0: - correl = correl._append( - Series([np.nan] * len(idx_diff), index=idx_diff) - ) - - return correl - - # ---------------------------------------------------------------------- - # ndarray-like stats methods - - def count(self, axis: Axis = 0, numeric_only: bool = False): - """ - Count non-NA cells for each column or row. - - The values `None`, `NaN`, `NaT`, ``pandas.NA`` are considered NA. - - Parameters - ---------- - axis : {0 or 'index', 1 or 'columns'}, default 0 - If 0 or 'index' counts are generated for each column. - If 1 or 'columns' counts are generated for each row. - numeric_only : bool, default False - Include only `float`, `int` or `boolean` data. - - Returns - ------- - Series - For each column/row the number of non-NA/null entries. - - See Also - -------- - Series.count: Number of non-NA elements in a Series. - DataFrame.value_counts: Count unique combinations of columns. - DataFrame.shape: Number of DataFrame rows and columns (including NA - elements). - DataFrame.isna: Boolean same-sized DataFrame showing places of NA - elements. - - Examples - -------- - Constructing DataFrame from a dictionary: - - >>> df = pd.DataFrame({"Person": - ... ["John", "Myla", "Lewis", "John", "Myla"], - ... "Age": [24., np.nan, 21., 33, 26], - ... 
"Single": [False, True, True, True, False]}) - >>> df - Person Age Single - 0 John 24.0 False - 1 Myla NaN True - 2 Lewis 21.0 True - 3 John 33.0 True - 4 Myla 26.0 False - - Notice the uncounted NA values: - - >>> df.count() - Person 5 - Age 4 - Single 5 - dtype: int64 - - Counts for each **row**: - - >>> df.count(axis='columns') - 0 3 - 1 2 - 2 3 - 3 3 - 4 3 - dtype: int64 - """ - axis = self._get_axis_number(axis) - - if numeric_only: - frame = self._get_numeric_data() - else: - frame = self - - # GH #423 - if len(frame._get_axis(axis)) == 0: - result = self._constructor_sliced(0, index=frame._get_agg_axis(axis)) - else: - result = notna(frame).sum(axis=axis) - - return result.astype("int64", copy=False).__finalize__(self, method="count") - - def _reduce( - self, - op, - name: str, - *, - axis: Axis = 0, - skipna: bool = True, - numeric_only: bool = False, - filter_type=None, - **kwds, - ): - assert filter_type is None or filter_type == "bool", filter_type - out_dtype = "bool" if filter_type == "bool" else None - - if axis is not None: - axis = self._get_axis_number(axis) - - def func(values: np.ndarray): - # We only use this in the case that operates on self.values - return op(values, axis=axis, skipna=skipna, **kwds) - - dtype_has_keepdims: dict[ExtensionDtype, bool] = {} - - def blk_func(values, axis: Axis = 1): - if isinstance(values, ExtensionArray): - if not is_1d_only_ea_dtype(values.dtype) and not isinstance( - self._mgr, ArrayManager - ): - return values._reduce(name, axis=1, skipna=skipna, **kwds) - has_keepdims = dtype_has_keepdims.get(values.dtype) - if has_keepdims is None: - sign = signature(values._reduce) - has_keepdims = "keepdims" in sign.parameters - dtype_has_keepdims[values.dtype] = has_keepdims - if has_keepdims: - return values._reduce(name, skipna=skipna, keepdims=True, **kwds) - else: - warnings.warn( - f"{type(values)}._reduce will require a `keepdims` parameter " - "in the future", - FutureWarning, - stacklevel=find_stack_level(), - ) - result = values._reduce(name, skipna=skipna, **kwds) - return np.array([result]) - else: - return op(values, axis=axis, skipna=skipna, **kwds) - - def _get_data() -> DataFrame: - if filter_type is None: - data = self._get_numeric_data() - else: - # GH#25101, GH#24434 - assert filter_type == "bool" - data = self._get_bool_data() - return data - - # Case with EAs see GH#35881 - df = self - if numeric_only: - df = _get_data() - if axis is None: - dtype = find_common_type([arr.dtype for arr in df._mgr.arrays]) - if isinstance(dtype, ExtensionDtype): - df = df.astype(dtype, copy=False) - arr = concat_compat(list(df._iter_column_arrays())) - return arr._reduce(name, skipna=skipna, keepdims=False, **kwds) - return func(df.values) - elif axis == 1: - if len(df.index) == 0: - # Taking a transpose would result in no columns, losing the dtype. - # In the empty case, reducing along axis 0 or 1 gives the same - # result dtype, so reduce with axis=0 and ignore values - result = df._reduce( - op, - name, - axis=0, - skipna=skipna, - numeric_only=False, - filter_type=filter_type, - **kwds, - ).iloc[:0] - result.index = df.index - return result - - # kurtosis excluded since groupby does not implement it - if df.shape[1] and name != "kurt": - dtype = find_common_type([arr.dtype for arr in df._mgr.arrays]) - if isinstance(dtype, ExtensionDtype): - # GH 54341: fastpath for EA-backed axis=1 reductions - # This flattens the frame into a single 1D array while keeping - # track of the row and column indices of the original frame. 
Once - # flattened, grouping by the row indices and aggregating should - # be equivalent to transposing the original frame and aggregating - # with axis=0. - name = {"argmax": "idxmax", "argmin": "idxmin"}.get(name, name) - df = df.astype(dtype, copy=False) - arr = concat_compat(list(df._iter_column_arrays())) - nrows, ncols = df.shape - row_index = np.tile(np.arange(nrows), ncols) - col_index = np.repeat(np.arange(ncols), nrows) - ser = Series(arr, index=col_index, copy=False) - result = ser.groupby(row_index).agg(name, **kwds) - result.index = df.index - if not skipna and name not in ("any", "all"): - mask = df.isna().to_numpy(dtype=np.bool_).any(axis=1) - other = -1 if name in ("idxmax", "idxmin") else lib.no_default - result = result.mask(mask, other) - return result - - df = df.T - - # After possibly _get_data and transposing, we are now in the - # simple case where we can use BlockManager.reduce - res = df._mgr.reduce(blk_func) - out = df._constructor_from_mgr(res, axes=res.axes).iloc[0] - if out_dtype is not None and out.dtype != "boolean": - out = out.astype(out_dtype) - elif (df._mgr.get_dtypes() == object).any() and name not in ["any", "all"]: - out = out.astype(object) - elif len(self) == 0 and out.dtype == object and name in ("sum", "prod"): - # Even if we are object dtype, follow numpy and return - # float64, see test_apply_funcs_over_empty - out = out.astype(np.float64) - - return out - - def _reduce_axis1(self, name: str, func, skipna: bool) -> Series: - """ - Special case for _reduce to try to avoid a potentially-expensive transpose. - - Apply the reduction block-wise along axis=1 and then reduce the resulting - 1D arrays. - """ - if name == "all": - result = np.ones(len(self), dtype=bool) - ufunc = np.logical_and - elif name == "any": - result = np.zeros(len(self), dtype=bool) - # error: Incompatible types in assignment - # (expression has type "_UFunc_Nin2_Nout1[Literal['logical_or'], - # Literal[20], Literal[False]]", variable has type - # "_UFunc_Nin2_Nout1[Literal['logical_and'], Literal[20], - # Literal[True]]") - ufunc = np.logical_or # type: ignore[assignment] - else: - raise NotImplementedError(name) - - for arr in self._mgr.arrays: - middle = func(arr, axis=0, skipna=skipna) - result = ufunc(result, middle) - - res_ser = self._constructor_sliced(result, index=self.index, copy=False) - return res_ser - - @doc(make_doc("any", ndim=2)) - # error: Signature of "any" incompatible with supertype "NDFrame" - def any( # type: ignore[override] - self, - *, - axis: Axis = 0, - bool_only: bool = False, - skipna: bool = True, - **kwargs, - ) -> Series | bool: - result = self._logical_func( - "any", nanops.nanany, axis, bool_only, skipna, **kwargs - ) - if isinstance(result, Series): - result = result.__finalize__(self, method="any") - return result - - @doc(make_doc("all", ndim=2)) - def all( - self, - axis: Axis = 0, - bool_only: bool = False, - skipna: bool = True, - **kwargs, - ) -> Series | bool: - result = self._logical_func( - "all", nanops.nanall, axis, bool_only, skipna, **kwargs - ) - if isinstance(result, Series): - result = result.__finalize__(self, method="all") - return result - - @doc(make_doc("min", ndim=2)) - def min( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - **kwargs, - ): - result = super().min(axis, skipna, numeric_only, **kwargs) - if isinstance(result, Series): - result = result.__finalize__(self, method="min") - return result - - @doc(make_doc("max", ndim=2)) - def max( - self, - axis: Axis | None = 0, - 
skipna: bool = True, - numeric_only: bool = False, - **kwargs, - ): - result = super().max(axis, skipna, numeric_only, **kwargs) - if isinstance(result, Series): - result = result.__finalize__(self, method="max") - return result - - @doc(make_doc("sum", ndim=2)) - def sum( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - min_count: int = 0, - **kwargs, - ): - result = super().sum(axis, skipna, numeric_only, min_count, **kwargs) - return result.__finalize__(self, method="sum") - - @doc(make_doc("prod", ndim=2)) - def prod( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - min_count: int = 0, - **kwargs, - ): - result = super().prod(axis, skipna, numeric_only, min_count, **kwargs) - return result.__finalize__(self, method="prod") - - @doc(make_doc("mean", ndim=2)) - def mean( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - **kwargs, - ): - result = super().mean(axis, skipna, numeric_only, **kwargs) - if isinstance(result, Series): - result = result.__finalize__(self, method="mean") - return result - - @doc(make_doc("median", ndim=2)) - def median( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - **kwargs, - ): - result = super().median(axis, skipna, numeric_only, **kwargs) - if isinstance(result, Series): - result = result.__finalize__(self, method="median") - return result - - @doc(make_doc("sem", ndim=2)) - def sem( - self, - axis: Axis | None = 0, - skipna: bool = True, - ddof: int = 1, - numeric_only: bool = False, - **kwargs, - ): - result = super().sem(axis, skipna, ddof, numeric_only, **kwargs) - if isinstance(result, Series): - result = result.__finalize__(self, method="sem") - return result - - @doc(make_doc("var", ndim=2)) - def var( - self, - axis: Axis | None = 0, - skipna: bool = True, - ddof: int = 1, - numeric_only: bool = False, - **kwargs, - ): - result = super().var(axis, skipna, ddof, numeric_only, **kwargs) - if isinstance(result, Series): - result = result.__finalize__(self, method="var") - return result - - @doc(make_doc("std", ndim=2)) - def std( - self, - axis: Axis | None = 0, - skipna: bool = True, - ddof: int = 1, - numeric_only: bool = False, - **kwargs, - ): - result = super().std(axis, skipna, ddof, numeric_only, **kwargs) - if isinstance(result, Series): - result = result.__finalize__(self, method="std") - return result - - @doc(make_doc("skew", ndim=2)) - def skew( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - **kwargs, - ): - result = super().skew(axis, skipna, numeric_only, **kwargs) - if isinstance(result, Series): - result = result.__finalize__(self, method="skew") - return result - - @doc(make_doc("kurt", ndim=2)) - def kurt( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - **kwargs, - ): - result = super().kurt(axis, skipna, numeric_only, **kwargs) - if isinstance(result, Series): - result = result.__finalize__(self, method="kurt") - return result - - kurtosis = kurt - product = prod - - @doc(make_doc("cummin", ndim=2)) - def cummin(self, axis: Axis | None = None, skipna: bool = True, *args, **kwargs): - return NDFrame.cummin(self, axis, skipna, *args, **kwargs) - - @doc(make_doc("cummax", ndim=2)) - def cummax(self, axis: Axis | None = None, skipna: bool = True, *args, **kwargs): - return NDFrame.cummax(self, axis, skipna, *args, **kwargs) - - @doc(make_doc("cumsum", ndim=2)) - def cumsum(self, axis: Axis | None = None, 
skipna: bool = True, *args, **kwargs): - return NDFrame.cumsum(self, axis, skipna, *args, **kwargs) - - @doc(make_doc("cumprod", 2)) - def cumprod(self, axis: Axis | None = None, skipna: bool = True, *args, **kwargs): - return NDFrame.cumprod(self, axis, skipna, *args, **kwargs) - - def nunique(self, axis: Axis = 0, dropna: bool = True) -> Series: - """ - Count number of distinct elements in specified axis. - - Return Series with number of distinct elements. Can ignore NaN - values. - - Parameters - ---------- - axis : {0 or 'index', 1 or 'columns'}, default 0 - The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for - column-wise. - dropna : bool, default True - Don't include NaN in the counts. - - Returns - ------- - Series - - See Also - -------- - Series.nunique: Method nunique for Series. - DataFrame.count: Count non-NA cells for each column or row. - - Examples - -------- - >>> df = pd.DataFrame({'A': [4, 5, 6], 'B': [4, 1, 1]}) - >>> df.nunique() - A 3 - B 2 - dtype: int64 - - >>> df.nunique(axis=1) - 0 1 - 1 2 - 2 2 - dtype: int64 - """ - return self.apply(Series.nunique, axis=axis, dropna=dropna) - - @doc(_shared_docs["idxmin"], numeric_only_default="False") - def idxmin( - self, axis: Axis = 0, skipna: bool = True, numeric_only: bool = False - ) -> Series: - axis = self._get_axis_number(axis) - - if self.empty and len(self.axes[axis]): - axis_dtype = self.axes[axis].dtype - return self._constructor_sliced(dtype=axis_dtype) - - if numeric_only: - data = self._get_numeric_data() - else: - data = self - - res = data._reduce( - nanops.nanargmin, "argmin", axis=axis, skipna=skipna, numeric_only=False - ) - indices = res._values - # indices will always be np.ndarray since axis is not N - - if (indices == -1).any(): - warnings.warn( - f"The behavior of {type(self).__name__}.idxmin with all-NA " - "values, or any-NA and skipna=False, is deprecated. In a future " - "version this will raise ValueError", - FutureWarning, - stacklevel=find_stack_level(), - ) - - index = data._get_axis(axis) - result = algorithms.take( - index._values, indices, allow_fill=True, fill_value=index._na_value - ) - final_result = data._constructor_sliced(result, index=data._get_agg_axis(axis)) - return final_result.__finalize__(self, method="idxmin") - - @doc(_shared_docs["idxmax"], numeric_only_default="False") - def idxmax( - self, axis: Axis = 0, skipna: bool = True, numeric_only: bool = False - ) -> Series: - axis = self._get_axis_number(axis) - - if self.empty and len(self.axes[axis]): - axis_dtype = self.axes[axis].dtype - return self._constructor_sliced(dtype=axis_dtype) - - if numeric_only: - data = self._get_numeric_data() - else: - data = self - - res = data._reduce( - nanops.nanargmax, "argmax", axis=axis, skipna=skipna, numeric_only=False - ) - indices = res._values - # indices will always be 1d array since axis is not None - - if (indices == -1).any(): - warnings.warn( - f"The behavior of {type(self).__name__}.idxmax with all-NA " - "values, or any-NA and skipna=False, is deprecated. In a future " - "version this will raise ValueError", - FutureWarning, - stacklevel=find_stack_level(), - ) - - index = data._get_axis(axis) - result = algorithms.take( - index._values, indices, allow_fill=True, fill_value=index._na_value - ) - final_result = data._constructor_sliced(result, index=data._get_agg_axis(axis)) - return final_result.__finalize__(self, method="idxmax") - - def _get_agg_axis(self, axis_num: int) -> Index: - """ - Let's be explicit about this. 
- """ - if axis_num == 0: - return self.columns - elif axis_num == 1: - return self.index - else: - raise ValueError(f"Axis must be 0 or 1 (got {repr(axis_num)})") - - def mode( - self, axis: Axis = 0, numeric_only: bool = False, dropna: bool = True - ) -> DataFrame: - """ - Get the mode(s) of each element along the selected axis. - - The mode of a set of values is the value that appears most often. - It can be multiple values. - - Parameters - ---------- - axis : {0 or 'index', 1 or 'columns'}, default 0 - The axis to iterate over while searching for the mode: - - * 0 or 'index' : get mode of each column - * 1 or 'columns' : get mode of each row. - - numeric_only : bool, default False - If True, only apply to numeric columns. - dropna : bool, default True - Don't consider counts of NaN/NaT. - - Returns - ------- - DataFrame - The modes of each column or row. - - See Also - -------- - Series.mode : Return the highest frequency value in a Series. - Series.value_counts : Return the counts of values in a Series. - - Examples - -------- - >>> df = pd.DataFrame([('bird', 2, 2), - ... ('mammal', 4, np.nan), - ... ('arthropod', 8, 0), - ... ('bird', 2, np.nan)], - ... index=('falcon', 'horse', 'spider', 'ostrich'), - ... columns=('species', 'legs', 'wings')) - >>> df - species legs wings - falcon bird 2 2.0 - horse mammal 4 NaN - spider arthropod 8 0.0 - ostrich bird 2 NaN - - By default, missing values are not considered, and the mode of wings - are both 0 and 2. Because the resulting DataFrame has two rows, - the second row of ``species`` and ``legs`` contains ``NaN``. - - >>> df.mode() - species legs wings - 0 bird 2.0 0.0 - 1 NaN NaN 2.0 - - Setting ``dropna=False`` ``NaN`` values are considered and they can be - the mode (like for wings). - - >>> df.mode(dropna=False) - species legs wings - 0 bird 2 NaN - - Setting ``numeric_only=True``, only the mode of numeric columns is - computed, and columns of other types are ignored. - - >>> df.mode(numeric_only=True) - legs wings - 0 2.0 0.0 - 1 NaN 2.0 - - To compute the mode over columns and not rows, use the axis parameter: - - >>> df.mode(axis='columns', numeric_only=True) - 0 1 - falcon 2.0 NaN - horse 4.0 NaN - spider 0.0 8.0 - ostrich 2.0 NaN - """ - data = self if not numeric_only else self._get_numeric_data() - - def f(s): - return s.mode(dropna=dropna) - - data = data.apply(f, axis=axis) - # Ensure index is type stable (should always use int index) - if data.empty: - data.index = default_index(0) - - return data - - @overload - def quantile( - self, - q: float = ..., - axis: Axis = ..., - numeric_only: bool = ..., - interpolation: QuantileInterpolation = ..., - ) -> Series: - ... - - @overload - def quantile( - self, - q: AnyArrayLike | Sequence[float], - axis: Axis = ..., - numeric_only: bool = ..., - interpolation: QuantileInterpolation = ..., - ) -> Series | DataFrame: - ... - - @overload - def quantile( - self, - q: float | AnyArrayLike | Sequence[float] = ..., - axis: Axis = ..., - numeric_only: bool = ..., - interpolation: QuantileInterpolation = ..., - ) -> Series | DataFrame: - ... - - def quantile( - self, - q: float | AnyArrayLike | Sequence[float] = 0.5, - axis: Axis = 0, - numeric_only: bool = False, - interpolation: QuantileInterpolation = "linear", - method: Literal["single", "table"] = "single", - ) -> Series | DataFrame: - """ - Return values at the given quantile over requested axis. - - Parameters - ---------- - q : float or array-like, default 0.5 (50% quantile) - Value between 0 <= q <= 1, the quantile(s) to compute. 
- axis : {0 or 'index', 1 or 'columns'}, default 0 - Equals 0 or 'index' for row-wise, 1 or 'columns' for column-wise. - numeric_only : bool, default False - Include only `float`, `int` or `boolean` data. - - .. versionchanged:: 2.0.0 - The default value of ``numeric_only`` is now ``False``. - - interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'} - This optional parameter specifies the interpolation method to use, - when the desired quantile lies between two data points `i` and `j`: - - * linear: `i + (j - i) * fraction`, where `fraction` is the - fractional part of the index surrounded by `i` and `j`. - * lower: `i`. - * higher: `j`. - * nearest: `i` or `j` whichever is nearest. - * midpoint: (`i` + `j`) / 2. - method : {'single', 'table'}, default 'single' - Whether to compute quantiles per-column ('single') or over all columns - ('table'). When 'table', the only allowed interpolation methods are - 'nearest', 'lower', and 'higher'. - - Returns - ------- - Series or DataFrame - - If ``q`` is an array, a DataFrame will be returned where the - index is ``q``, the columns are the columns of self, and the - values are the quantiles. - If ``q`` is a float, a Series will be returned where the - index is the columns of self and the values are the quantiles. - - See Also - -------- - core.window.rolling.Rolling.quantile: Rolling quantile. - numpy.percentile: Numpy function to compute the percentile. - - Examples - -------- - >>> df = pd.DataFrame(np.array([[1, 1], [2, 10], [3, 100], [4, 100]]), - ... columns=['a', 'b']) - >>> df.quantile(.1) - a 1.3 - b 3.7 - Name: 0.1, dtype: float64 - >>> df.quantile([.1, .5]) - a b - 0.1 1.3 3.7 - 0.5 2.5 55.0 - - Specifying `method='table'` will compute the quantile over all columns. - - >>> df.quantile(.1, method="table", interpolation="nearest") - a 1 - b 1 - Name: 0.1, dtype: int64 - >>> df.quantile([.1, .5], method="table", interpolation="nearest") - a b - 0.1 1 1 - 0.5 3 100 - - Specifying `numeric_only=False` will also compute the quantile of - datetime and timedelta data. - - >>> df = pd.DataFrame({'A': [1, 2], - ... 'B': [pd.Timestamp('2010'), - ... pd.Timestamp('2011')], - ... 'C': [pd.Timedelta('1 days'), - ... 
pd.Timedelta('2 days')]}) - >>> df.quantile(0.5, numeric_only=False) - A 1.5 - B 2010-07-02 12:00:00 - C 1 days 12:00:00 - Name: 0.5, dtype: object - """ - validate_percentile(q) - axis = self._get_axis_number(axis) - - if not is_list_like(q): - # BlockManager.quantile expects listlike, so we wrap and unwrap here - # error: List item 0 has incompatible type "Union[float, Union[Union[ - # ExtensionArray, ndarray[Any, Any]], Index, Series], Sequence[float]]"; - # expected "float" - res_df = self.quantile( # type: ignore[call-overload] - [q], - axis=axis, - numeric_only=numeric_only, - interpolation=interpolation, - method=method, - ) - if method == "single": - res = res_df.iloc[0] - else: - # cannot directly iloc over sparse arrays - res = res_df.T.iloc[:, 0] - if axis == 1 and len(self) == 0: - # GH#41544 try to get an appropriate dtype - dtype = find_common_type(list(self.dtypes)) - if needs_i8_conversion(dtype): - return res.astype(dtype) - return res - - q = Index(q, dtype=np.float64) - data = self._get_numeric_data() if numeric_only else self - - if axis == 1: - data = data.T - - if len(data.columns) == 0: - # GH#23925 _get_numeric_data may have dropped all columns - cols = Index([], name=self.columns.name) - - dtype = np.float64 - if axis == 1: - # GH#41544 try to get an appropriate dtype - cdtype = find_common_type(list(self.dtypes)) - if needs_i8_conversion(cdtype): - dtype = cdtype - - res = self._constructor([], index=q, columns=cols, dtype=dtype) - return res.__finalize__(self, method="quantile") - - valid_method = {"single", "table"} - if method not in valid_method: - raise ValueError( - f"Invalid method: {method}. Method must be in {valid_method}." - ) - if method == "single": - res = data._mgr.quantile(qs=q, interpolation=interpolation) - elif method == "table": - valid_interpolation = {"nearest", "lower", "higher"} - if interpolation not in valid_interpolation: - raise ValueError( - f"Invalid interpolation: {interpolation}. " - f"Interpolation must be in {valid_interpolation}" - ) - # handle degenerate case - if len(data) == 0: - if data.ndim == 2: - dtype = find_common_type(list(self.dtypes)) - else: - dtype = self.dtype - return self._constructor([], index=q, columns=data.columns, dtype=dtype) - - q_idx = np.quantile(np.arange(len(data)), q, method=interpolation) - - by = data.columns - if len(by) > 1: - keys = [data._get_label_or_level_values(x) for x in by] - indexer = lexsort_indexer(keys) - else: - k = data._get_label_or_level_values(by[0]) - indexer = nargsort(k) - - res = data._mgr.take(indexer[q_idx], verify=False) - res.axes[1] = q - - result = self._constructor_from_mgr(res, axes=res.axes) - return result.__finalize__(self, method="quantile") - - def to_timestamp( - self, - freq: Frequency | None = None, - how: ToTimestampHow = "start", - axis: Axis = 0, - copy: bool | None = None, - ) -> DataFrame: - """ - Cast to DatetimeIndex of timestamps, at *beginning* of period. - - Parameters - ---------- - freq : str, default frequency of PeriodIndex - Desired frequency. - how : {'s', 'e', 'start', 'end'} - Convention for converting period to timestamp; start of period - vs. end. - axis : {0 or 'index', 1 or 'columns'}, default 0 - The axis to convert (the index by default). - copy : bool, default True - If False then underlying input data is not copied. - - Returns - ------- - DataFrame - The DataFrame has a DatetimeIndex. 
- - Examples - -------- - >>> idx = pd.PeriodIndex(['2023', '2024'], freq='Y') - >>> d = {'col1': [1, 2], 'col2': [3, 4]} - >>> df1 = pd.DataFrame(data=d, index=idx) - >>> df1 - col1 col2 - 2023 1 3 - 2024 2 4 - - The resulting timestamps will be at the beginning of the year in this case - - >>> df1 = df1.to_timestamp() - >>> df1 - col1 col2 - 2023-01-01 1 3 - 2024-01-01 2 4 - >>> df1.index - DatetimeIndex(['2023-01-01', '2024-01-01'], dtype='datetime64[ns]', freq=None) - - Using `freq` which is the offset that the Timestamps will have - - >>> df2 = pd.DataFrame(data=d, index=idx) - >>> df2 = df2.to_timestamp(freq='M') - >>> df2 - col1 col2 - 2023-01-31 1 3 - 2024-01-31 2 4 - >>> df2.index - DatetimeIndex(['2023-01-31', '2024-01-31'], dtype='datetime64[ns]', freq=None) - """ - new_obj = self.copy(deep=copy and not using_copy_on_write()) - - axis_name = self._get_axis_name(axis) - old_ax = getattr(self, axis_name) - if not isinstance(old_ax, PeriodIndex): - raise TypeError(f"unsupported Type {type(old_ax).__name__}") - - new_ax = old_ax.to_timestamp(freq=freq, how=how) - - setattr(new_obj, axis_name, new_ax) - return new_obj - - def to_period( - self, freq: Frequency | None = None, axis: Axis = 0, copy: bool | None = None - ) -> DataFrame: - """ - Convert DataFrame from DatetimeIndex to PeriodIndex. - - Convert DataFrame from DatetimeIndex to PeriodIndex with desired - frequency (inferred from index if not passed). - - Parameters - ---------- - freq : str, default - Frequency of the PeriodIndex. - axis : {0 or 'index', 1 or 'columns'}, default 0 - The axis to convert (the index by default). - copy : bool, default True - If False then underlying input data is not copied. - - Returns - ------- - DataFrame - The DataFrame has a PeriodIndex. - - Examples - -------- - >>> idx = pd.to_datetime( - ... [ - ... "2001-03-31 00:00:00", - ... "2002-05-31 00:00:00", - ... "2003-08-31 00:00:00", - ... ] - ... ) - - >>> idx - DatetimeIndex(['2001-03-31', '2002-05-31', '2003-08-31'], - dtype='datetime64[ns]', freq=None) - - >>> idx.to_period("M") - PeriodIndex(['2001-03', '2002-05', '2003-08'], dtype='period[M]') - - For the yearly frequency - - >>> idx.to_period("Y") - PeriodIndex(['2001', '2002', '2003'], dtype='period[A-DEC]') - """ - new_obj = self.copy(deep=copy and not using_copy_on_write()) - - axis_name = self._get_axis_name(axis) - old_ax = getattr(self, axis_name) - if not isinstance(old_ax, DatetimeIndex): - raise TypeError(f"unsupported Type {type(old_ax).__name__}") - - new_ax = old_ax.to_period(freq=freq) - - setattr(new_obj, axis_name, new_ax) - return new_obj - - def isin(self, values: Series | DataFrame | Sequence | Mapping) -> DataFrame: - """ - Whether each element in the DataFrame is contained in values. - - Parameters - ---------- - values : iterable, Series, DataFrame or dict - The result will only be true at a location if all the - labels match. If `values` is a Series, that's the index. If - `values` is a dict, the keys must be the column names, - which must match. If `values` is a DataFrame, - then both the index and column labels must match. - - Returns - ------- - DataFrame - DataFrame of booleans showing whether each element in the DataFrame - is contained in values. - - See Also - -------- - DataFrame.eq: Equality test for DataFrame. - Series.isin: Equivalent method on Series. - Series.str.contains: Test if pattern or regex is contained within a - string of a Series or Index. - - Examples - -------- - >>> df = pd.DataFrame({'num_legs': [2, 4], 'num_wings': [2, 0]}, - ... 
index=['falcon', 'dog']) - >>> df - num_legs num_wings - falcon 2 2 - dog 4 0 - - When ``values`` is a list check whether every value in the DataFrame - is present in the list (which animals have 0 or 2 legs or wings) - - >>> df.isin([0, 2]) - num_legs num_wings - falcon True True - dog False True - - To check if ``values`` is *not* in the DataFrame, use the ``~`` operator: - - >>> ~df.isin([0, 2]) - num_legs num_wings - falcon False False - dog True False - - When ``values`` is a dict, we can pass values to check for each - column separately: - - >>> df.isin({'num_wings': [0, 3]}) - num_legs num_wings - falcon False False - dog False True - - When ``values`` is a Series or DataFrame the index and column must - match. Note that 'falcon' does not match based on the number of legs - in other. - - >>> other = pd.DataFrame({'num_legs': [8, 3], 'num_wings': [0, 2]}, - ... index=['spider', 'falcon']) - >>> df.isin(other) - num_legs num_wings - falcon False True - dog False False - """ - if isinstance(values, dict): - from pandas.core.reshape.concat import concat - - values = collections.defaultdict(list, values) - result = concat( - ( - self.iloc[:, [i]].isin(values[col]) - for i, col in enumerate(self.columns) - ), - axis=1, - ) - elif isinstance(values, Series): - if not values.index.is_unique: - raise ValueError("cannot compute isin with a duplicate axis.") - result = self.eq(values.reindex_like(self), axis="index") - elif isinstance(values, DataFrame): - if not (values.columns.is_unique and values.index.is_unique): - raise ValueError("cannot compute isin with a duplicate axis.") - result = self.eq(values.reindex_like(self)) - else: - if not is_list_like(values): - raise TypeError( - "only list-like or dict-like objects are allowed " - "to be passed to DataFrame.isin(), " - f"you passed a '{type(values).__name__}'" - ) - - def isin_(x): - # error: Argument 2 to "isin" has incompatible type "Union[Series, - # DataFrame, Sequence[Any], Mapping[Any, Any]]"; expected - # "Union[Union[Union[ExtensionArray, ndarray[Any, Any]], Index, - # Series], List[Any], range]" - result = algorithms.isin( - x.ravel(), - values, # type: ignore[arg-type] - ) - return result.reshape(x.shape) - - res_mgr = self._mgr.apply(isin_) - result = self._constructor_from_mgr( - res_mgr, - axes=res_mgr.axes, - ) - return result.__finalize__(self, method="isin") - - # ---------------------------------------------------------------------- - # Add index and columns - _AXIS_ORDERS: list[Literal["index", "columns"]] = ["index", "columns"] - _AXIS_TO_AXIS_NUMBER: dict[Axis, int] = { - **NDFrame._AXIS_TO_AXIS_NUMBER, - 1: 1, - "columns": 1, - } - _AXIS_LEN = len(_AXIS_ORDERS) - _info_axis_number: Literal[1] = 1 - _info_axis_name: Literal["columns"] = "columns" - - index = properties.AxisProperty( - axis=1, - doc=""" - The index (row labels) of the DataFrame. - - The index of a DataFrame is a series of labels that identify each row. - The labels can be integers, strings, or any other hashable type. The index - is used for label-based access and alignment, and can be accessed or - modified using this attribute. - - Returns - ------- - pandas.Index - The index labels of the DataFrame. - - See Also - -------- - DataFrame.columns : The column labels of the DataFrame. - DataFrame.to_numpy : Convert the DataFrame to a NumPy array. - - Examples - -------- - >>> df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Aritra'], - ... 'Age': [25, 30, 35], - ... 'Location': ['Seattle', 'New York', 'Kona']}, - ... 
index=([10, 20, 30])) - >>> df.index - Index([10, 20, 30], dtype='int64') - - In this example, we create a DataFrame with 3 rows and 3 columns, - including Name, Age, and Location information. We set the index labels to - be the integers 10, 20, and 30. We then access the `index` attribute of the - DataFrame, which returns an `Index` object containing the index labels. - - >>> df.index = [100, 200, 300] - >>> df - Name Age Location - 100 Alice 25 Seattle - 200 Bob 30 New York - 300 Aritra 35 Kona - - In this example, we modify the index labels of the DataFrame by assigning - a new list of labels to the `index` attribute. The DataFrame is then - updated with the new labels, and the output shows the modified DataFrame. - """, - ) - columns = properties.AxisProperty( - axis=0, - doc=dedent( - """ - The column labels of the DataFrame. - - Examples - -------- - >>> df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]}) - >>> df - A B - 0 1 3 - 1 2 4 - >>> df.columns - Index(['A', 'B'], dtype='object') - """ - ), - ) - - # ---------------------------------------------------------------------- - # Add plotting methods to DataFrame - plot = CachedAccessor("plot", pandas.plotting.PlotAccessor) - hist = pandas.plotting.hist_frame - boxplot = pandas.plotting.boxplot_frame - sparse = CachedAccessor("sparse", SparseFrameAccessor) - - # ---------------------------------------------------------------------- - # Internal Interface Methods - - def _to_dict_of_blocks(self, copy: bool = True): - """ - Return a dict of dtype -> Constructor Types that - each is a homogeneous dtype. - - Internal ONLY - only works for BlockManager - """ - mgr = self._mgr - # convert to BlockManager if needed -> this way support ArrayManager as well - mgr = mgr_to_mgr(mgr, "block") - mgr = cast(BlockManager, mgr) - return { - k: self._constructor_from_mgr(v, axes=v.axes).__finalize__(self) - for k, v, in mgr.to_dict(copy=copy).items() - } - - @property - def values(self) -> np.ndarray: - """ - Return a Numpy representation of the DataFrame. - - .. warning:: - - We recommend using :meth:`DataFrame.to_numpy` instead. - - Only the values in the DataFrame will be returned, the axes labels - will be removed. - - Returns - ------- - numpy.ndarray - The values of the DataFrame. - - See Also - -------- - DataFrame.to_numpy : Recommended alternative to this method. - DataFrame.index : Retrieve the index labels. - DataFrame.columns : Retrieving the column names. - - Notes - ----- - The dtype will be a lower-common-denominator dtype (implicit - upcasting); that is to say if the dtypes (even of numeric types) - are mixed, the one that accommodates all will be chosen. Use this - with care if you are not dealing with the blocks. - - e.g. If the dtypes are float16 and float32, dtype will be upcast to - float32. If dtypes are int32 and uint8, dtype will be upcast to - int32. By :func:`numpy.find_common_type` convention, mixing int64 - and uint64 will result in a float64 dtype. - - Examples - -------- - A DataFrame where all columns are the same type (e.g., int64) results - in an array of the same type. - - >>> df = pd.DataFrame({'age': [ 3, 29], - ... 'height': [94, 170], - ... 'weight': [31, 115]}) - >>> df - age height weight - 0 3 94 31 - 1 29 170 115 - >>> df.dtypes - age int64 - height int64 - weight int64 - dtype: object - >>> df.values - array([[ 3, 94, 31], - [ 29, 170, 115]]) - - A DataFrame with mixed type columns(e.g., str/object, int64, float32) - results in an ndarray of the broadest type that accommodates these - mixed types (e.g., object). 
- - >>> df2 = pd.DataFrame([('parrot', 24.0, 'second'), - ... ('lion', 80.5, 1), - ... ('monkey', np.nan, None)], - ... columns=('name', 'max_speed', 'rank')) - >>> df2.dtypes - name object - max_speed float64 - rank object - dtype: object - >>> df2.values - array([['parrot', 24.0, 'second'], - ['lion', 80.5, 1], - ['monkey', nan, None]], dtype=object) - """ - return self._mgr.as_array() - - -def _from_nested_dict(data) -> collections.defaultdict: - new_data: collections.defaultdict = collections.defaultdict(dict) - for index, s in data.items(): - for col, v in s.items(): - new_data[col][index] = v - return new_data - - -def _reindex_for_setitem( - value: DataFrame | Series, index: Index -) -> tuple[ArrayLike, BlockValuesRefs | None]: - # reindex if necessary - - if value.index.equals(index) or not len(index): - if using_copy_on_write() and isinstance(value, Series): - return value._values, value._references - return value._values.copy(), None - - # GH#4107 - try: - reindexed_value = value.reindex(index)._values - except ValueError as err: - # raised in MultiIndex.from_tuples, see test_insert_error_msmgs - if not value.index.is_unique: - # duplicate axis - raise err - - raise TypeError( - "incompatible index of inserted column with frame index" - ) from err - return reindexed_value, None diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/integer/test_repr.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/integer/test_repr.py deleted file mode 100644 index 168210eed5d06a461bbf42dd1e1fae3db0fd851c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/integer/test_repr.py +++ /dev/null @@ -1,67 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas.core.arrays.integer import ( - Int8Dtype, - Int16Dtype, - Int32Dtype, - Int64Dtype, - UInt8Dtype, - UInt16Dtype, - UInt32Dtype, - UInt64Dtype, -) - - -def test_dtypes(dtype): - # smoke tests on auto dtype construction - - if dtype.is_signed_integer: - assert np.dtype(dtype.type).kind == "i" - else: - assert np.dtype(dtype.type).kind == "u" - assert dtype.name is not None - - -@pytest.mark.parametrize( - "dtype, expected", - [ - (Int8Dtype(), "Int8Dtype()"), - (Int16Dtype(), "Int16Dtype()"), - (Int32Dtype(), "Int32Dtype()"), - (Int64Dtype(), "Int64Dtype()"), - (UInt8Dtype(), "UInt8Dtype()"), - (UInt16Dtype(), "UInt16Dtype()"), - (UInt32Dtype(), "UInt32Dtype()"), - (UInt64Dtype(), "UInt64Dtype()"), - ], -) -def test_repr_dtype(dtype, expected): - assert repr(dtype) == expected - - -def test_repr_array(): - result = repr(pd.array([1, None, 3])) - expected = "\n[1, , 3]\nLength: 3, dtype: Int64" - assert result == expected - - -def test_repr_array_long(): - data = pd.array([1, 2, None] * 1000) - expected = ( - "\n" - "[ 1, 2, , 1, 2, , 1, 2, , 1,\n" - " ...\n" - " , 1, 2, , 1, 2, , 1, 2, ]\n" - "Length: 3000, dtype: Int64" - ) - result = repr(data) - assert result == expected - - -def test_frame_repr(data_missing): - df = pd.DataFrame({"A": data_missing}) - result = repr(df) - expected = " A\n0 \n1 1" - assert result == expected diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_get_set.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_get_set.py deleted file mode 100644 index 0720a1e1c648c0b2d1d077c8c5086339d6c72e57..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_get_set.py +++ /dev/null @@ -1,379 +0,0 @@ -import numpy as np -import pytest - -from pandas.compat import PY311 - -from pandas.core.dtypes.dtypes import DatetimeTZDtype - -import pandas as pd -from pandas import ( - CategoricalIndex, - MultiIndex, -) -import pandas._testing as tm - - -def assert_matching(actual, expected, check_dtype=False): - # avoid specifying internal representation - # as much as possible - assert len(actual) == len(expected) - for act, exp in zip(actual, expected): - act = np.asarray(act) - exp = np.asarray(exp) - tm.assert_numpy_array_equal(act, exp, check_dtype=check_dtype) - - -def test_get_level_number_integer(idx): - idx.names = [1, 0] - assert idx._get_level_number(1) == 0 - assert idx._get_level_number(0) == 1 - msg = "Too many levels: Index has only 2 levels, not 3" - with pytest.raises(IndexError, match=msg): - idx._get_level_number(2) - with pytest.raises(KeyError, match="Level fourth not found"): - idx._get_level_number("fourth") - - -def test_get_dtypes(): - # Test MultiIndex.dtypes (# Gh37062) - idx_multitype = MultiIndex.from_product( - [[1, 2, 3], ["a", "b", "c"], pd.date_range("20200101", periods=2, tz="UTC")], - names=["int", "string", "dt"], - ) - expected = pd.Series( - { - "int": np.dtype("int64"), - "string": np.dtype("O"), - "dt": DatetimeTZDtype(tz="utc"), - } - ) - tm.assert_series_equal(expected, idx_multitype.dtypes) - - -def test_get_dtypes_no_level_name(): - # Test MultiIndex.dtypes (# GH38580 ) - idx_multitype = MultiIndex.from_product( - [ - [1, 2, 3], - ["a", "b", "c"], - pd.date_range("20200101", periods=2, tz="UTC"), - ], - ) - expected = pd.Series( - { - "level_0": np.dtype("int64"), - "level_1": np.dtype("O"), - "level_2": DatetimeTZDtype(tz="utc"), - } - ) - tm.assert_series_equal(expected, idx_multitype.dtypes) - - -def test_get_dtypes_duplicate_level_names(): - # Test MultiIndex.dtypes with non-unique level names (# GH45174) - result = MultiIndex.from_product( - [ - [1, 2, 3], - ["a", "b", "c"], - pd.date_range("20200101", periods=2, tz="UTC"), - ], - names=["A", "A", "A"], - ).dtypes - expected = pd.Series( - [np.dtype("int64"), np.dtype("O"), DatetimeTZDtype(tz="utc")], - index=["A", "A", "A"], - ) - tm.assert_series_equal(result, expected) - - -def test_get_level_number_out_of_bounds(multiindex_dataframe_random_data): - frame = multiindex_dataframe_random_data - - with pytest.raises(IndexError, match="Too many levels"): - frame.index._get_level_number(2) - with pytest.raises(IndexError, match="not a valid level number"): - frame.index._get_level_number(-3) - - -def test_set_name_methods(idx, index_names): - # so long as these are synonyms, we don't need to test set_names - assert idx.rename == idx.set_names - new_names = [name + "SUFFIX" for name in index_names] - ind = idx.set_names(new_names) - assert idx.names == index_names - assert ind.names == new_names - msg = "Length of names must match number of levels in MultiIndex" - with pytest.raises(ValueError, match=msg): - ind.set_names(new_names + new_names) - new_names2 = [name + "SUFFIX2" for name in new_names] - res = ind.set_names(new_names2, inplace=True) - assert res is None - assert ind.names == new_names2 - - # set names for specific level (# GH7792) - ind = idx.set_names(new_names[0], level=0) - assert idx.names == index_names - assert ind.names == [new_names[0], index_names[1]] - - res = ind.set_names(new_names2[0], level=0, inplace=True) - assert res is None - assert 
ind.names == [new_names2[0], index_names[1]] - - # set names for multiple levels - ind = idx.set_names(new_names, level=[0, 1]) - assert idx.names == index_names - assert ind.names == new_names - - res = ind.set_names(new_names2, level=[0, 1], inplace=True) - assert res is None - assert ind.names == new_names2 - - -def test_set_levels_codes_directly(idx): - # setting levels/codes directly raises AttributeError - - levels = idx.levels - new_levels = [[lev + "a" for lev in level] for level in levels] - - codes = idx.codes - major_codes, minor_codes = codes - major_codes = [(x + 1) % 3 for x in major_codes] - minor_codes = [(x + 1) % 1 for x in minor_codes] - new_codes = [major_codes, minor_codes] - - msg = "Can't set attribute" - with pytest.raises(AttributeError, match=msg): - idx.levels = new_levels - - msg = ( - "property 'codes' of 'MultiIndex' object has no setter" - if PY311 - else "can't set attribute" - ) - with pytest.raises(AttributeError, match=msg): - idx.codes = new_codes - - -def test_set_levels(idx): - # side note - you probably wouldn't want to use levels and codes - # directly like this - but it is possible. - levels = idx.levels - new_levels = [[lev + "a" for lev in level] for level in levels] - - # level changing [w/o mutation] - ind2 = idx.set_levels(new_levels) - assert_matching(ind2.levels, new_levels) - assert_matching(idx.levels, levels) - - # level changing specific level [w/o mutation] - ind2 = idx.set_levels(new_levels[0], level=0) - assert_matching(ind2.levels, [new_levels[0], levels[1]]) - assert_matching(idx.levels, levels) - - ind2 = idx.set_levels(new_levels[1], level=1) - assert_matching(ind2.levels, [levels[0], new_levels[1]]) - assert_matching(idx.levels, levels) - - # level changing multiple levels [w/o mutation] - ind2 = idx.set_levels(new_levels, level=[0, 1]) - assert_matching(ind2.levels, new_levels) - assert_matching(idx.levels, levels) - - # illegal level changing should not change levels - # GH 13754 - original_index = idx.copy() - with pytest.raises(ValueError, match="^On"): - idx.set_levels(["c"], level=0) - assert_matching(idx.levels, original_index.levels, check_dtype=True) - - with pytest.raises(ValueError, match="^On"): - idx.set_codes([0, 1, 2, 3, 4, 5], level=0) - assert_matching(idx.codes, original_index.codes, check_dtype=True) - - with pytest.raises(TypeError, match="^Levels"): - idx.set_levels("c", level=0) - assert_matching(idx.levels, original_index.levels, check_dtype=True) - - with pytest.raises(TypeError, match="^Codes"): - idx.set_codes(1, level=0) - assert_matching(idx.codes, original_index.codes, check_dtype=True) - - -def test_set_codes(idx): - # side note - you probably wouldn't want to use levels and codes - # directly like this - but it is possible. 
- codes = idx.codes - major_codes, minor_codes = codes - major_codes = [(x + 1) % 3 for x in major_codes] - minor_codes = [(x + 1) % 1 for x in minor_codes] - new_codes = [major_codes, minor_codes] - - # changing codes w/o mutation - ind2 = idx.set_codes(new_codes) - assert_matching(ind2.codes, new_codes) - assert_matching(idx.codes, codes) - - # codes changing specific level w/o mutation - ind2 = idx.set_codes(new_codes[0], level=0) - assert_matching(ind2.codes, [new_codes[0], codes[1]]) - assert_matching(idx.codes, codes) - - ind2 = idx.set_codes(new_codes[1], level=1) - assert_matching(ind2.codes, [codes[0], new_codes[1]]) - assert_matching(idx.codes, codes) - - # codes changing multiple levels w/o mutation - ind2 = idx.set_codes(new_codes, level=[0, 1]) - assert_matching(ind2.codes, new_codes) - assert_matching(idx.codes, codes) - - # label changing for levels of different magnitude of categories - ind = MultiIndex.from_tuples([(0, i) for i in range(130)]) - new_codes = range(129, -1, -1) - expected = MultiIndex.from_tuples([(0, i) for i in new_codes]) - - # [w/o mutation] - result = ind.set_codes(codes=new_codes, level=1) - assert result.equals(expected) - - -def test_set_levels_codes_names_bad_input(idx): - levels, codes = idx.levels, idx.codes - names = idx.names - - with pytest.raises(ValueError, match="Length of levels"): - idx.set_levels([levels[0]]) - - with pytest.raises(ValueError, match="Length of codes"): - idx.set_codes([codes[0]]) - - with pytest.raises(ValueError, match="Length of names"): - idx.set_names([names[0]]) - - # shouldn't scalar data error, instead should demand list-like - with pytest.raises(TypeError, match="list of lists-like"): - idx.set_levels(levels[0]) - - # shouldn't scalar data error, instead should demand list-like - with pytest.raises(TypeError, match="list of lists-like"): - idx.set_codes(codes[0]) - - # shouldn't scalar data error, instead should demand list-like - with pytest.raises(TypeError, match="list-like"): - idx.set_names(names[0]) - - # should have equal lengths - with pytest.raises(TypeError, match="list of lists-like"): - idx.set_levels(levels[0], level=[0, 1]) - - with pytest.raises(TypeError, match="list-like"): - idx.set_levels(levels, level=0) - - # should have equal lengths - with pytest.raises(TypeError, match="list of lists-like"): - idx.set_codes(codes[0], level=[0, 1]) - - with pytest.raises(TypeError, match="list-like"): - idx.set_codes(codes, level=0) - - # should have equal lengths - with pytest.raises(ValueError, match="Length of names"): - idx.set_names(names[0], level=[0, 1]) - - with pytest.raises(TypeError, match="Names must be a"): - idx.set_names(names, level=0) - - -@pytest.mark.parametrize("inplace", [True, False]) -def test_set_names_with_nlevel_1(inplace): - # GH 21149 - # Ensure that .set_names for MultiIndex with - # nlevels == 1 does not raise any errors - expected = MultiIndex(levels=[[0, 1]], codes=[[0, 1]], names=["first"]) - m = MultiIndex.from_product([[0, 1]]) - result = m.set_names("first", level=0, inplace=inplace) - - if inplace: - result = m - - tm.assert_index_equal(result, expected) - - -@pytest.mark.parametrize("ordered", [True, False]) -def test_set_levels_categorical(ordered): - # GH13854 - index = MultiIndex.from_arrays([list("xyzx"), [0, 1, 2, 3]]) - - cidx = CategoricalIndex(list("bac"), ordered=ordered) - result = index.set_levels(cidx, level=0) - expected = MultiIndex(levels=[cidx, [0, 1, 2, 3]], codes=index.codes) - tm.assert_index_equal(result, expected) - - result_lvl = 
result.get_level_values(0) - expected_lvl = CategoricalIndex( - list("bacb"), categories=cidx.categories, ordered=cidx.ordered - ) - tm.assert_index_equal(result_lvl, expected_lvl) - - -def test_set_value_keeps_names(): - # motivating example from #3742 - lev1 = ["hans", "hans", "hans", "grethe", "grethe", "grethe"] - lev2 = ["1", "2", "3"] * 2 - idx = MultiIndex.from_arrays([lev1, lev2], names=["Name", "Number"]) - df = pd.DataFrame( - np.random.default_rng(2).standard_normal((6, 4)), - columns=["one", "two", "three", "four"], - index=idx, - ) - df = df.sort_index() - assert df._is_copy is None - assert df.index.names == ("Name", "Number") - df.at[("grethe", "4"), "one"] = 99.34 - assert df._is_copy is None - assert df.index.names == ("Name", "Number") - - -def test_set_levels_with_iterable(): - # GH23273 - sizes = [1, 2, 3] - colors = ["black"] * 3 - index = MultiIndex.from_arrays([sizes, colors], names=["size", "color"]) - - result = index.set_levels(map(int, ["3", "2", "1"]), level="size") - - expected_sizes = [3, 2, 1] - expected = MultiIndex.from_arrays([expected_sizes, colors], names=["size", "color"]) - tm.assert_index_equal(result, expected) - - -def test_set_empty_level(): - # GH#48636 - midx = MultiIndex.from_arrays([[]], names=["A"]) - result = midx.set_levels(pd.DatetimeIndex([]), level=0) - expected = MultiIndex.from_arrays([pd.DatetimeIndex([])], names=["A"]) - tm.assert_index_equal(result, expected) - - -def test_set_levels_pos_args_removal(): - # https://github.com/pandas-dev/pandas/issues/41485 - idx = MultiIndex.from_tuples( - [ - (1, "one"), - (3, "one"), - ], - names=["foo", "bar"], - ) - with pytest.raises(TypeError, match="positional arguments"): - idx.set_levels(["a", "b", "c"], 0) - - with pytest.raises(TypeError, match="positional arguments"): - idx.set_codes([[0, 1], [1, 0]], 0) - - -def test_set_levels_categorical_keep_dtype(): - # GH#52125 - midx = MultiIndex.from_arrays([[5, 6]]) - result = midx.set_levels(levels=pd.Categorical([1, 2]), level=0) - expected = MultiIndex.from_arrays([pd.Categorical([1, 2])]) - tm.assert_index_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_api.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_api.py deleted file mode 100644 index a596d4a85074e1a005cae1fd7aa566a6e3045480..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_api.py +++ /dev/null @@ -1,64 +0,0 @@ -"""Tests that the tslibs API is locked down""" - -from pandas._libs import tslibs - - -def test_namespace(): - submodules = [ - "base", - "ccalendar", - "conversion", - "dtypes", - "fields", - "nattype", - "np_datetime", - "offsets", - "parsing", - "period", - "strptime", - "vectorized", - "timedeltas", - "timestamps", - "timezones", - "tzconversion", - ] - - api = [ - "BaseOffset", - "NaT", - "NaTType", - "iNaT", - "nat_strings", - "OutOfBoundsDatetime", - "OutOfBoundsTimedelta", - "Period", - "IncompatibleFrequency", - "Resolution", - "Tick", - "Timedelta", - "dt64arr_to_periodarr", - "Timestamp", - "is_date_array_normalized", - "ints_to_pydatetime", - "normalize_i8_timestamps", - "get_resolution", - "delta_to_nanoseconds", - "ints_to_pytimedelta", - "localize_pydatetime", - "tz_convert_from_utc", - "tz_convert_from_utc_single", - "to_offset", - "tz_compare", - "is_unitless", - "astype_overflowsafe", - "get_unit_from_dtype", - "periods_per_day", - "periods_per_second", - 
"is_supported_unit", - "get_supported_reso", - "npy_unit_to_abbrev", - ] - - expected = set(submodules + api) - names = [x for x in dir(tslibs) if not x.startswith("__")] - assert set(names) == expected diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/paraiso_dark.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/paraiso_dark.py deleted file mode 100644 index 8e8e6dcf34c1e60f8b25cf002a2995c49640f762..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/paraiso_dark.py +++ /dev/null @@ -1,120 +0,0 @@ -""" - pygments.styles.paraiso_dark - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Paraíso (Dark) by Jan T. Sott - - Pygments template by Jan T. Sott (https://github.com/idleberg) - Created with Base16 Builder by Chris Kempson - (https://github.com/chriskempson/base16-builder). - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.style import Style -from pygments.token import Keyword, Name, Comment, String, Error, Text, \ - Number, Operator, Generic, Whitespace, Punctuation, Other, Literal - - -BACKGROUND = "#2f1e2e" -CURRENT_LINE = "#41323f" -SELECTION = "#4f424c" -FOREGROUND = "#e7e9db" -COMMENT = "#776e71" -RED = "#ef6155" -ORANGE = "#f99b15" -YELLOW = "#fec418" -GREEN = "#48b685" -AQUA = "#5bc4bf" -BLUE = "#06b6ef" -PURPLE = "#815ba4" - - -class ParaisoDarkStyle(Style): - - background_color = BACKGROUND - highlight_color = SELECTION - - styles = { - # No corresponding class for the following: - Text: FOREGROUND, # class: '' - Whitespace: "", # class: 'w' - Error: RED, # class: 'err' - Other: "", # class 'x' - - Comment: COMMENT, # class: 'c' - Comment.Multiline: "", # class: 'cm' - Comment.Preproc: "", # class: 'cp' - Comment.Single: "", # class: 'c1' - Comment.Special: "", # class: 'cs' - - Keyword: PURPLE, # class: 'k' - Keyword.Constant: "", # class: 'kc' - Keyword.Declaration: "", # class: 'kd' - Keyword.Namespace: AQUA, # class: 'kn' - Keyword.Pseudo: "", # class: 'kp' - Keyword.Reserved: "", # class: 'kr' - Keyword.Type: YELLOW, # class: 'kt' - - Operator: AQUA, # class: 'o' - Operator.Word: "", # class: 'ow' - like keywords - - Punctuation: FOREGROUND, # class: 'p' - - Name: FOREGROUND, # class: 'n' - Name.Attribute: BLUE, # class: 'na' - to be revised - Name.Builtin: "", # class: 'nb' - Name.Builtin.Pseudo: "", # class: 'bp' - Name.Class: YELLOW, # class: 'nc' - to be revised - Name.Constant: RED, # class: 'no' - to be revised - Name.Decorator: AQUA, # class: 'nd' - to be revised - Name.Entity: "", # class: 'ni' - Name.Exception: RED, # class: 'ne' - Name.Function: BLUE, # class: 'nf' - Name.Property: "", # class: 'py' - Name.Label: "", # class: 'nl' - Name.Namespace: YELLOW, # class: 'nn' - to be revised - Name.Other: BLUE, # class: 'nx' - Name.Tag: AQUA, # class: 'nt' - like a keyword - Name.Variable: RED, # class: 'nv' - to be revised - Name.Variable.Class: "", # class: 'vc' - to be revised - Name.Variable.Global: "", # class: 'vg' - to be revised - Name.Variable.Instance: "", # class: 'vi' - to be revised - - Number: ORANGE, # class: 'm' - Number.Float: "", # class: 'mf' - Number.Hex: "", # class: 'mh' - Number.Integer: "", # class: 'mi' - Number.Integer.Long: "", # class: 'il' - Number.Oct: "", # class: 'mo' - - Literal: ORANGE, # class: 'l' - Literal.Date: GREEN, # class: 'ld' - - String: GREEN, # class: 's' - String.Backtick: "", # class: 'sb' - String.Char: 
FOREGROUND, # class: 'sc' - String.Doc: COMMENT, # class: 'sd' - like a comment - String.Double: "", # class: 's2' - String.Escape: ORANGE, # class: 'se' - String.Heredoc: "", # class: 'sh' - String.Interpol: ORANGE, # class: 'si' - String.Other: "", # class: 'sx' - String.Regex: "", # class: 'sr' - String.Single: "", # class: 's1' - String.Symbol: "", # class: 'ss' - - Generic: "", # class: 'g' - Generic.Deleted: RED, # class: 'gd', - Generic.Emph: "italic", # class: 'ge' - Generic.Error: "", # class: 'gr' - Generic.Heading: "bold " + FOREGROUND, # class: 'gh' - Generic.Inserted: GREEN, # class: 'gi' - Generic.Output: "", # class: 'go' - Generic.Prompt: "bold " + COMMENT, # class: 'gp' - Generic.Strong: "bold", # class: 'gs' - Generic.EmphStrong: "bold italic", # class: 'ges' - Generic.Subheading: "bold " + AQUA, # class: 'gu' - Generic.Traceback: "", # class: 'gt' - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py deleted file mode 100644 index cd81d622a1309df179042159a56cef4f8c309224..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py +++ /dev/null @@ -1,105 +0,0 @@ -""" -Thin wrappers around `concurrent.futures`. -""" -from contextlib import contextmanager -from operator import length_hint -from os import cpu_count - -from ..auto import tqdm as tqdm_auto -from ..std import TqdmWarning - -__author__ = {"github.com/": ["casperdcl"]} -__all__ = ['thread_map', 'process_map'] - - -@contextmanager -def ensure_lock(tqdm_class, lock_name=""): - """get (create if necessary) and then restore `tqdm_class`'s lock""" - old_lock = getattr(tqdm_class, '_lock', None) # don't create a new lock - lock = old_lock or tqdm_class.get_lock() # maybe create a new lock - lock = getattr(lock, lock_name, lock) # maybe subtype - tqdm_class.set_lock(lock) - yield lock - if old_lock is None: - del tqdm_class._lock - else: - tqdm_class.set_lock(old_lock) - - -def _executor_map(PoolExecutor, fn, *iterables, **tqdm_kwargs): - """ - Implementation of `thread_map` and `process_map`. - - Parameters - ---------- - tqdm_class : [default: tqdm.auto.tqdm]. - max_workers : [default: min(32, cpu_count() + 4)]. - chunksize : [default: 1]. - lock_name : [default: "":str]. - """ - kwargs = tqdm_kwargs.copy() - if "total" not in kwargs: - kwargs["total"] = length_hint(iterables[0]) - tqdm_class = kwargs.pop("tqdm_class", tqdm_auto) - max_workers = kwargs.pop("max_workers", min(32, cpu_count() + 4)) - chunksize = kwargs.pop("chunksize", 1) - lock_name = kwargs.pop("lock_name", "") - with ensure_lock(tqdm_class, lock_name=lock_name) as lk: - # share lock in case workers are already using `tqdm` - with PoolExecutor(max_workers=max_workers, initializer=tqdm_class.set_lock, - initargs=(lk,)) as ex: - return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs)) - - -def thread_map(fn, *iterables, **tqdm_kwargs): - """ - Equivalent of `list(map(fn, *iterables))` - driven by `concurrent.futures.ThreadPoolExecutor`. - - Parameters - ---------- - tqdm_class : optional - `tqdm` class to use for bars [default: tqdm.auto.tqdm]. - max_workers : int, optional - Maximum number of workers to spawn; passed to - `concurrent.futures.ThreadPoolExecutor.__init__`. - [default: max(32, cpu_count() + 4)]. 
- """ - from concurrent.futures import ThreadPoolExecutor - return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) - - -def process_map(fn, *iterables, **tqdm_kwargs): - """ - Equivalent of `list(map(fn, *iterables))` - driven by `concurrent.futures.ProcessPoolExecutor`. - - Parameters - ---------- - tqdm_class : optional - `tqdm` class to use for bars [default: tqdm.auto.tqdm]. - max_workers : int, optional - Maximum number of workers to spawn; passed to - `concurrent.futures.ProcessPoolExecutor.__init__`. - [default: min(32, cpu_count() + 4)]. - chunksize : int, optional - Size of chunks sent to worker processes; passed to - `concurrent.futures.ProcessPoolExecutor.map`. [default: 1]. - lock_name : str, optional - Member of `tqdm_class.get_lock()` to use [default: mp_lock]. - """ - from concurrent.futures import ProcessPoolExecutor - if iterables and "chunksize" not in tqdm_kwargs: - # default `chunksize=1` has poor performance for large iterables - # (most time spent dispatching items to workers). - longest_iterable_len = max(map(length_hint, iterables)) - if longest_iterable_len > 1000: - from warnings import warn - warn("Iterable length %d > 1000 but `chunksize` is not set." - " This may seriously degrade multiprocess performance." - " Set `chunksize=1` or more." % longest_iterable_len, - TqdmWarning, stacklevel=2) - if "lock_name" not in tqdm_kwargs: - tqdm_kwargs = tqdm_kwargs.copy() - tqdm_kwargs["lock_name"] = "mp_lock" - return _executor_map(ProcessPoolExecutor, fn, *iterables, **tqdm_kwargs) diff --git a/spaces/propilot/propilot-calling-functions/calling_functions.py b/spaces/propilot/propilot-calling-functions/calling_functions.py deleted file mode 100644 index 9879820aa4ce07962b7aef502466c40cc82c29cd..0000000000000000000000000000000000000000 --- a/spaces/propilot/propilot-calling-functions/calling_functions.py +++ /dev/null @@ -1,54 +0,0 @@ -import re - - -# Definición de esquema para Calling Function -ADD_DECIMAL_AND_HEXADECIMAL_FUNCTION_SCHEMA = [ - { - "name": "add_decimal_values", - "description": "Add two decimal values", - "parameters": { - "type": "object", - "properties": { - "value1": { - "type": "integer", - "description": "The first decimal value to add. For example, 5", - }, - "value2": { - "type": "integer", - "description": "The second decimal value to add. For example, 10", - }, - }, - "required": ["value1", "value2"], - }, - }, - { - "name": "add_hexadecimal_values", - "description": "Add two hexadecimal values", - "parameters": { - "type": "object", - "properties": { - "value1": { - "type": "string", - "description": "The first hexadecimal value to add. For example, 5", - }, - "value2": { - "type": "string", - "description": "The second hexadecimal value to add. 
For example, A", - }, - }, - "required": ["value1", "value2"], - }, - }, -] - - -# Definición de las funciones -def add_decimal_values(arguments): - v1 = int(re.search(r'"value1": (\d+)', str(arguments)).group(1)) - v2 = int(re.search(r'"value2": (\d+)', str(arguments)).group(1)) - return v1 + v2 - -def add_hexadecimal_values(arguments): - v1 = re.search(r'"value1": "(\w+)"', str(arguments)).group(1) - v2 = re.search(r'"value2": "(\w+)"', str(arguments)).group(1) - return hex(int(v1, 16) + int(v2, 16))[2:] diff --git a/spaces/pseudolab/schoolrecord_gen/achivenment_standards.py b/spaces/pseudolab/schoolrecord_gen/achivenment_standards.py deleted file mode 100644 index 89abb85b0c0dd882ff41cd368db7c239519c2068..0000000000000000000000000000000000000000 --- a/spaces/pseudolab/schoolrecord_gen/achivenment_standards.py +++ /dev/null @@ -1,2439 +0,0 @@ -achievement_standards = { - "1~2학년군": { - "국어": [ - "[2국01-01] 상황에 어울리는 인사말을 주고받는다.", - "[2국01-02] 일이 일어난 순서를 고려하며 듣고 말한다.", - "[2국01-01] 상황에 어울리는 인사말을 주고받는다.", - "[2국01-02] 일이 일어난 순서를 고려하며 듣고 말한다.", - "[2국01-03] 자신의 감정을 표현하며 대화를 나눈다.", - "[2국01-04] 듣는 이를 바라보며 바른 자세로 자신 있게 말한다.", - "[2국01-05] 말하는 이와 말의 내용에 집중하며 듣는다.", - "[2국01-06] 바르고 고운 말을 사용하여 말하는 태도를 지닌다.", - "[2국02-01] 글자, 낱말, 문장을 소리 내어 읽는다.", - "[2국02-02] 문장과 글을 알맞게 띄어 읽는다.", - "[2국02-03] 글을 읽고 주요 내용을 확인한다.", - "[2국02-04] 글을 읽고 인물의 처지와 마음을 짐작한다.", - "[2국02-05] 읽기에 흥미를 가지고 즐겨 읽는 태도를 지닌다.", - "[2국03-01] 글자를 바르게 쓴다.", - "[2국03-02] 자신의 생각을 문장으로 표현한다.", - "[2국03-03] 주변의 사람이나 사물에 대해 짧은 글을 쓴다.", - "[2국03-04] 인상 깊었던 일이나 겪은 일에 대한 생각이나 느낌을 쓴다.", - "[2국03-05] 쓰기에 흥미를 가지고 즐겨 쓰는 태도를 지닌다.", - "[2국04-01] 한글 자모의 이름과 소릿값을 알고 정확하게 발음하고 쓴다.", - "[2국04-02] 소리와 표기가 다를 수 있음을 알고 낱말을 바르게 읽고 쓴다.", - "[2국04-03] 문장에 따라 알맞은 문장 부호를 사용한다.", - "[2국04-04] 글자, 낱말, 문장을 관심 있게 살펴보고 흥미를 가진다.", - "[2국05-01] 느낌과 분위기를 살려 그림책, 시나 노래, 짧은 이야기를 들려주거나 듣는다.", - "[2국05-02] 인물의 모습, 행동, 마음을 상상하며 그림책, 시나 노래, 이야기를 감상한다.", - "[2국05-03] 여러 가지 말놀이를 통해 말의 재미를 느낀다.", - "[2국05-04] 자신의 생각이나 겪은 일을 시나 노래, 이야기 등으로 표현한다.", - "[2국05-05] 시나 노래, 이야기에 흥미를 가진다." 
- ], - "수학": [ - "[2수01-01] 0과 100까지의 수 개념을 이해하고, 수를 세고 읽고 쓸 수 있다.", - "[2수01-02] 일, 십, 백, 천의 자릿값과 위치적 기수법을 이해하고, 네 자리 이하의 수를 읽고 쓸 수 있다.", - "[2수01-03] 네 자리 이하의 수의 범위에서 수의 계열을 이해하고, 수의 크기를 비교할 수 있다.", - "[2수01-04] 하나의 수를 두 수로 분해하고 두 수를 하나의 수로 합성하는 활동을 통하여 수 감각을 기른다.", - "[2수01-05] 덧셈과 뺄셈이 이루어지는 실생활 상황을 통하여 덧셈과 뺄셈의 의미를 이해한다.", - "[2수01-06] 두 자리 수의 범위에서 덧셈과 뺄셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[2수01-07] 덧셈과 뺄셈의 관계를 이해한다.", - "[2수01-08] 두 자리 수의 범위에서 세 수의 덧셈과 뺄셈을 할 수 있다.", - "[2수01-09] □가 사용된 덧셈식과 뺄셈식을 만들고, □의 값을 구할 수 있다.", - "[2수01-10] 곱셈이 이루어지는 실생활 상황을 통하여 곱셈의 의미를 이해한다.", - "[2수01-11] 곱셈구구를 이해하고, 한 자리 수의 곱셈을 할 수 있다.", - "[2수02-01] 교실 및 생활 주변에서 여러 가지 물건을 관찰하여 직육면체, 원기둥, 구의 모양을 찾고, 그것들을 이용하여 여러 가지 모양을 만들 수 있다.", - "[2수02-02] 쌓기나무를 이용하여 여러 가지 입체도형의 모양을 만들고, 그 모양에 대해 위치나 방향을 이용하여 말할 수 있다.", - "[2수02-03] 교실 및 생활 주변에서 여러 가지 물건을 관찰하여 삼각형, 사각형, 원의 모양을 찾고, 그것들을 이용하여 여러 가지 모양을 꾸밀 수 있다.", - "[2수02-04] 삼각형, 사각형, 원을 직관적으로 이해하고, 그 모양을 그릴 수 있다.", - "[2수02-05] 삼각형, 사각형에서 각각의 공통점을 찾아 말하고, 이를 일반화하여 오각형, 육각형을 알고 구별할 수 있다.", - "[2수03-01] 구체물의 길이, 들이, 무게, 넓이를 비교하여 각각 ‘길다, 짧다’, ‘많다, 적다’, ‘무겁다, 가볍다’, ‘넓다, 좁다’ 등을 구별하여 말할 수 있다.", - "[2수03-02] 시계를 보고 시각을 ‘몇 시 몇 분’까지 읽을 수 있다.", - "[2수03-03] 1시간은 60분임을 알고, 시간을 ‘시간’, ‘분’으로 표현할 수 있다.", - "[2수03-04] 1분, 1시간, 1일, 1주일, 1개월, 1년 사이의 관계를 이해한다.", - "[2수03-05] 길이를 나타내는 표준 단위의 필요성을 인식하고, 1cm와 1m의 단위를 알며, 상황에 따라 적절한 단위를 사용하여 길이를 측정할 수 있다.", - "[2수03-06] 1m가 100cm임을 알고, 길이를 단명수와 복명수로 표현할 수 있다.", - "[2수03-07] 여러 가지 물건의 길이를 어림하여 보고, 길이에 대한 양감을 기른다.", - "[2수03-08] 구체물의 길이를 재는 과정에서 자의 눈금과 일치하지 않는 길이의 측정값을 ‘약’으로 표현할 수 있다.", - "[2수03-09] 실생활 문제 상황을 통하여 길이의 덧셈과 뺄셈을 이해한다.", - "[2수04-01] 물체, 무늬, 수 등의 배열에서 규칙을 찾아 여러 가지 방법으로 나타낼 수 있다.", - "[2수04-02] 자신이 정한 규칙에 따라 물체, 무늬, 수 등을 배열할 수 있다.", - "[2수05-01] 교실 및 생활 주변에 있는 사물들을 정해진 기준 또는 자신이 정한 기준으로 분류하여 개수를 세어보고, 기준에 따른 결과를 말할 수 있다.", - "[2수05-02] 분류한 자료를 표로 나타내고, 표로 나타내면 편리한 점을 말할 수 있다.", - "[2수05-03] 분류한 자료를 ○, ×, / 등을 이용하여 그래프로 나타내고, 그래프로 나타내면 편리한 점을 말할 수 있다." - ], - "바·슬·즐": [ - "[2바01-01] 학교생활에 필요한 규칙과 약속을 정해서 지킨다.", - "[2바01-02] 몸과 마음을 건강하게 유지한다.", - "[2바02-01] 봄철 날씨 변화를 알고 건강 수칙을 스스로 지키는 습관을 기른다.", - "[2바02-02] 봄에 볼 수 있는 동식물을 소중히 여기고 보살핀다.", - "[2바03-01] 가족 및 친척 간에 지켜야 할 예절을 실천한다.", - "[2바03-02] 가족의 형태와 문화가 다양함을 알고 존중한다.", - "[2바04-01] 여름철의 에너지 절약 수칙을 알고 습관화한다.", - "[2바04-02] 여름 생활을 건강하고 안전하게 할 수 있도록 계획을 세워 실천한다.", - "[2바05-01] 공공장소의 올바른 이용과 시설물을 바르게 사용하는 습관을 기른다.", - "[2바05-02] 동네를 위해 할 수 있는 일을 찾아 실천하면서 일의 소중함을 안다.", - "[2바06-01] 사람들이 많이 모이는 곳에서 질서와 규칙을 지키며 생활한다.", - "[2바06-02] 추수하는 사람들의 수고에 감사하는 태도를 기른다.", - "[2바07-01] 우리와 북한이 같은 민족임을 알고, 통일 의지를 다진다.", - "[2바07–02] 다른 나라의 문화를 존중하고 공감하는 태도를 기른다.", - "[2바08-01] 상대방을 배려하며 서로 돕고 나누는 생활을 한다.", - "[2바08-02] 생명을 존중하며 동식물을 보호한다.", - "[2바08-03] 겨울방학 생활 계획을 세워서 실천한다." 
- ], - "안전한 생활": [ - "[2안01-01] 교실과 특별실에서 활동할 때 질서를 지켜 안전하게 생활한다.", - "[2안01-02] 학용품의 위험 요인을 알고 안전하게 사용한다.", - "[2안01-03] 운동장이나 놀이터에서의 위험 요인을 알고 안전하게 놀이한다.", - "[2안01-04] 가정에서 발생하는 사고의 종류를 알고 안전하게 생활한다.", - "[2안01-05] 가정생활 도구의 안전한 사용법을 익힌다.", - "[2안01-06] 응급 상황 발생 시 신고 등의 방법으로 주변에 알린다.", - "[2안01-07] 현장체험학습이나 캠핑 등 야외 활동에서의 위험 요인을 알고 사고를 예방한다.", - "[2안01-08] 일상생활에서 접하게 되는 여러 가지 시설물의 위험 요인을 알고 안전하게 이용한다.", - "[2안01-09] 공중위생을 지키기 위한 여러 가지 방법을 알고 생활에서 실천한다.", - "[2안02-01] 신호등과 교통 표지판을 알고 바르게 길을 건넌다.", - "[2안02-02] 신호등이 없는 거리에서의 위험 요인을 알고 안전하게 보행한다.", - "[2안02-03] 골목에서 놀 때의 위험성을 알고 바르게 대처한다.", - "[2안02-04] 자전거를 탈 때 보호 장구를 착용하고 안전한 장소에서 탄다.", - "[2안02-05] 자동차에서의 안전 수칙을 알고 실천한다.", - "[2안02-06] 대중교통을 안전하게 이용하는 방법을 알고 실천한다.", - "[2안03-01] 낯선 사람이 접근할 때의 대처 방법을 알고 바르게 행동한다.", - "[2안03-02] 미아가 되었을 때의 대처 방법을 안다.", - "[2안03-03] 집단 따돌림의 해로움을 알고 예방한다.", - "[2안03-04] 학교폭력의 유형을 알고 대처한다.", - "[2안03-05] 좋은 접촉과 나쁜 접촉을 구별하고 바르게 대처한다.", - "[2안03-06] 가정폭력이 발생되었을 때 도움을 요청하는 방법을 안다.", - "[2안04-01] 화재가 발생하는 요인을 알고 예방하는 생활을 한다.", - "[2안04-02] 화재 발생 시의 대피 방법을 알고 안전하게 행동한다.", - "[2안04-03] 지진, 황사, 미세먼지 등의 위험성을 알고 상황 발생 시 대처 방법을 적용한다.", - "[2안04-04] 계절에 따른 자연 재난 발생 시의 행동요령을 익혀 생활화한다." -] - }, - "3~4학년군": { - "국어": [ - "[4국01-01] 대화의 즐거움을 알고 대화를 나눈다.", - "[4국01-02] 회의에서 의견을 적극적으로 교환한다.", - "[4국01-03] 원인과 결과의 관계를 고려하며 듣고 말한다.", - "[4국01-04] 적절한 표정, 몸짓, 말투로 말한다.", - "[4국01-05] 내용을 요약하며 듣는다.", - "[4국01-06] 예의를 지키며 듣고 말하는 태도를 지닌다.", - "[4국02-01] 문단과 글의 중심 생각을 파악한다.", - "[4국02-02] 글의 유형을 고려하여 대강의 내용을 간추린다.", - "[4국02-03] 글에서 낱말의 의미나 생략된 내용을 짐작한다.", - "[4국02-04] 글을 읽고 사실과 의견을 구별한다.", - "[4국02-05] 읽기 경험과 느낌을 다른 사람과 나누는 태도를 지닌다.", - "[4국03-01] 중심 문장과 뒷받침 문장을 갖추어 문단을 쓴다.", - "[4국03-02] 시간의 흐름에 따라 사건이나 행동이 드러나게 글을 쓴다.", - "[4국03-03] 관심 있는 주제에 대해 자신의 의견이 드러나게 글을 쓴다.", - "[4국03-04] 읽는 이를 고려하며 자신의 마음을 표현하는 글을 쓴다.", - "[4국03-05] 쓰기에 자신감을 갖고 자신의 글을 적극적으로 나누는 태도를 지닌다.", - "[4국04-01] 낱말을 분류하고 국어사전에서 찾는다.", - "[4국04-02] 낱말과 낱말의 의미 관계를 파악한다.", - "[4국04-03] 기본적인 문장의 짜임을 이해하고 사용한다.", - "[4국04-04] 높임법을 알고 언어 예절에 맞게 사용한다.", - "[4국04-05] 한글을 소중히 여기는 태도를 지닌다.", - "[4국05-01] 시각이나 청각 등 감각적 표현에 주목하며 작품을 감상한다.", - "[4국05-02] 인물, 사건, 배경에 주목하며 작품을 이해한다.", - "[4국05-03] 이야기의 흐름을 파악하여 이어질 내용을 상상하고 표현한다.", - "[4국05-04] 작품을 듣거나 읽거나 보고 떠오른 느낌과 생각을 다양하게 표현한다.", - "[4국05-05] 재미나 감동을 느끼며 작품을 즐겨 감상하는 태도를 지닌다.", - ], - "사회": [ - "[4사01-01] 우리 마을 또는 고장의 모습을 자유롭게 그려 보고, 서로 비교하여 공통점과 차이점을 찾아 고장에 대한 서로 다른 장소감을 탐색한다.", - "[4사01-02] 디지털 영상 지도 등을 활용하여 주요 지형지물들의 위치를 파악하고, 백지도에 다시 배치하는 활동을 통하여 마을 또는 고장의 실제 모습을 익힌다.", - "[4사01-03] 고장과 관련된 옛이야기를 통하여 고장의 역사적인 유래와 특징을 설명한다.", - "[4사01-04] 고장에 전해 내려오는 대표적인 문화유산을 살펴보고 고장에 대한 자긍심을 기른다.", - "[4사01-05] 옛날과 오늘날의 교통수단에 관한 자료를 바탕으로 하여 교통수단의 발달에 따른 생활 모습의 변화를 설명한다.", - "[4사01-06] 옛날과 오늘날의 통신수단에 관한 자료를 바탕으로 하여 통신수단의 발달에 따른 생활 모습의 변화를 설명한다.", - "[4사02-01] 우리 고장의 지리적 특성을 조사하고, 이것이 고장 사람들의 생활 모습에 미치는 영향을 탐구한다.", - "[4사02-02] 우리 고장과 다른 고장 사람들의 의식주 생활 모습을 비교하여, 환경의 차이에 따른 생활 모습의 다양성을 탐구한다.", - "[4사02-03] 옛 사람들의 생활 도구나 주거 형태를 알아보고, 오늘날의 생활 모습과 비교하여 그 변화상을 탐색한다.", - "[4사02-04] 옛날의 세시 풍속을 알아보고, 오늘날의 변화상을 탐색하여 공통점과 차이점을 분석한다.", - "[4사02-05] 옛날과 오늘날의 혼인 풍습과 가족 구성을 비교하고, 시대별 가족의 모습과 가족 구성원의 역할 변화를 탐색한다.", - "[4사02-06] 현대의 여러 가지 가족 형태를 조사하여 가족의 다양한 삶의 모습을 존중하는 태도를 기른다.", - "[4사03-01] 지도의 기본 요소에 대한 이해를 바탕으로 하여 우리 지역 지도에 나타난 지리 정보를 실제 생활에 활용한다.", - "[4사03-02] 고장 사람들의 생활과 밀접하게 관련이 있는 지역의 다양한 중심지(행정, 교통, 상업, 산업, 관광 등)를 조사하고, 각 중심지의 위치, 기능, 경관의 특성을 탐색한다.", - "[4사03-03] 우리 지역을 대표하는 유・무형의 문화유산을 알아보고, 지역의 문화유산을 소중히 여기는 태도를 갖는다.", - "[4사03-04] 우리 지역과 관련된 역사적 인물의 삶을 알아보고, 지역의 역사에 대해 자부심을 갖는다.", - "[4사03-05] 우리 지역에 있는 공공 기관의 종류와 역할을 조사하고, 공공 기관이 지역 주민들의 생활에 
주는 도움을 탐색한다.", - "[4사03-06] 주민 참여를 통해 지역 문제를 해결하는 방안을 살펴보고, 지역 문제의 해결에 참여하는 태도를 기른다.", - "[4사04-01] 촌락과 도시의 공통점과 차이점을 비교하고, 각각에서 나타나는 문제점과 해결 방안을 탐색한다.", - "[4사04-02] 촌락과 도시 사이에 이루어지는 다양한 교류를 조사하고, 이들 사이의 상호 의존 관계를 탐구한다.", - "[4사04-03] 자원의 희소성으로 경제활동에서 선택의 문제가 발생함을 파악하고, 시장을 중심으로 이루어지는 생산, 소비 등 경제활동을 설명한다.", - "[4사04-04] 우리 지역과 다른 지역의 물자 교환 및 교류 사례를 조사하여, 지역 간 경제활동이 밀접하게 관련되어 있음을 탐구한다.", - "[4사04-05] 사회 변화(저출산・고령화, 정보화, 세계화 등)로 나타난 일상생활의 모습을 조사하고, 그 특징을 분석한다.", - "[4사04-06] 우리 사회에 다양한 문화가 확산되면서 생기는 문제(편견, 차별 등)및 해결 방안을 탐구하고, 다른 문화를 존중하는 태도를 기른다." - ], - "도덕": [ - "[4도01-01] 도덕 시간에 무엇을 배우며 도덕 공부가 왜 필요한지를 알고 공부하는 사람으로서 지켜야 할 규칙을 모범 사례를 통해 습관화한다.", - "[4도01-02] 시간과 물건의 소중함을 알고 자신이 시간과 물건을 아껴 쓰고 있는지 반성해 보며 그 모범 사례를 따라 습관화한다.", - "[4도01-03] 최선을 다하는 삶을 위해 정성과 인내가 필요한 이유를 탐구하고 생활 계획을 세워본다.", - "[4도02-01] 가족을 사랑하고 감사해야 하는 이유를 찾아보고, 가족 간에 지켜야 할 도리와 해야 할 일을 약속으로 정해 실천한다.", - "[4도02-02] 친구의 소중함을 알고 친구와 사이좋게 지내며, 서로의 입장을 이해하고 인정한다.", - "[4도02-03] 예절의 중요성을 이해하고, 대상과 상황에 따른 예절이 다름을 탐구하여 이를 습관화한다.", - "[4도02-04] 협동의 의미와 중요성을 알고, 경청․도덕적 대화하기․도덕적 민감성을 통해 협동할 수 있는 능력을 기른다.", - "[4도03-01] 공공장소에서 지켜야 할 규칙과 공익의 중요성을 알고, 공익에 기여하고자 하는 실천 의지를 기른다.", - "[4도03-02] 다문화 사회에서 다양성을 수용해야 하는 이유를 탐구하고, 올바른 의사 결정 과정을 통해 다른 사람과 문화를 공정하게 대하는 태도를 지닌다.", - "[4도03-03] 남북 분단 과정과 민족의 아픔을 통해 통일의 필요성을 알고, 통일에 대한 관심과 통일 의지를 기른다.", - "[4도04-01] 생명의 소중함을 이해하고 인간 생명과 환경 문제에 관심을 가지며 인간 생명과 자연을 보호하려는 태도를 가진다.", - "[4도04-02] 참된 아름다움을 올바르게 이해하고 느껴 생활 속에서 이를 실천한다." - ], - "수학": [ - "[4수01-01] 10000 이상의 큰 수에 대한 자릿값과 위치적 기수법을 이해하고, 수를 읽고 쓸 수 있다.", - "[4수01-02] 다섯 자리 이상의 수의 범위에서 수의 계열을 이해하고 수의 크기를 비교할 수 있다.", - "[4수01-03] 세 자리 수의 덧셈과 뺄셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-04] 세 자리 수의 덧셈과 뺄셈에서 계산 결과를 어림할 수 있다.", - "[4수01-05] 곱하는 수가 한 자리 수 또는 두 자리 수인 곱셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-06] 곱하는 수가 한 자리 수 또는 두 자리 수인 곱셈에서 계산 결과를 어림할 수 있다.", - "[4수01-07] 나눗셈이 이루어지는 실생활 상황을 통하여 나눗셈의 의미를 알고, 곱셈과 나눗셈의 관계를 이해한다.", - "[4수01-08] 나누는 수가 한 자리 수인 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있으며, 나눗셈에서 몫과 나머지의 의미를 안다.", - "[4수01-09] 나누는 수가 두 자리 수인 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-10] 양의 등분할을 통하여 분수를 이해하고 읽고 쓸 수 있다.", - "[4수01-11] 단위분수, 진분수, 가분수, 대분수를 알고, 그 관계를 이해한다.", - "[4수01-12] 분모가 같은 분수끼리, 단위분수끼리 크기를 비교할 수 있다.", - "[4수01-13] 분모가 10인 진분수를 통하여 소수 한 자리 수를 이해하고 읽고 쓸 수 있다.", - "[4수01-14] 자릿값의 원리를 바탕으로 소수 두 자리 수와 소수 세 자리 수를 이해하고 읽고 쓸 수 있다.", - "[4수01-15] 소수의 크기를 비교할 수 있다.", - "[4수01-16] 분모가 같은 분수의 덧셈과 뺄셈의 계산 원리를 이해하고, 그 계산을 할 수 있다.", - "[4수01-17] 소수 두 자리 수의 범위에서 소수의 덧셈과 뺄셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수02-01] 직선, 선분, 반직선을 알고 구별할 수 있다.", - "[4수02-02] 각과 직각을 이해하고, 직각과 비교하는 활동을 통해 예각과 둔각을 구별할 수 있다.", - "[4수02-03] 교실 및 생활 주변에서 직각인 곳이나 서로 만나지 않는 직선을 찾는 활동을 통하여 직선의 수직 관계와 평행 관계를 이해한다.", - "[4수02-04] 구체물이나 평면도형의 밀기, 뒤집기, 돌리기 활동을 통하여 그 변화를 이해한다.", - "[4수02-05] 평면도형의 이동을 이용하여 규칙적인 무늬를 꾸밀 수 있다.", - "[4수02-06] 원의 중심, 반지름, 지름을 알고, 그 관계를 이해한다.", - "[4수02-07] 컴퍼스를 이용하여 여러 가지 크기의 원을 그려서 다양한 모양을 꾸밀 수 있다.", - "[4수02-08] 여러 가지 모양의 삼각형에 대한 분류 활동을 통하여 이등변삼각형, 정삼각형을 이해한다.", - "[4수02-09] 여러 가지 모양의 삼각형에 대한 분류 활동을 통하여 직각삼각형, 예각삼각형, 둔각삼각형을 이해한다.", - "[4수02-10] 여러 가지 모양의 사각형에 대한 분류 활동을 통하여 직사각형, 정사각형, 사다리꼴, 평행사변형, 마름모를 알고, 그 성질을 이해한다.", - "[4수02-11] 다각형과 정다각형의 의미를 안다.", - "[4수02-12] 주어진 도형을 이용하여 여러 가지 모양을 만들거나 채울 수 있다.", - "[4수03-01] 1분은 60초임을 알고, 초 단위까지 시각을 읽을 수 있다.", - "[4수03-02] 초 단위까지의 시간의 덧셈과 뺄셈을 할 수 있다." 
- "[4수03-03] 길이를 나타내는 새로운 단위의 필요성을 인식하여 1mm와 1km의 단위를 알고, 이를 이용하여 길이를 측정하고 어림할 수 있다.", - "[4수03-04] 1cm와 1mm, 1km와 1m의 관계를 이해하고, 길이를 단명수와 복명수로 표현할 수 있다.", - "[4수03-05] 들이를 나타내는 표준 단위의 필요성을 인식하여 1L와 1mL의 단위를 알고, 이를 이용하여 들이를 측정하고 어림할 수 있다.", - "[4수03-06] 1L와 1mL의 관계를 이해하고, 들이를 단명수와 복명수로 표현할 수 있다.", - "[4수03-07] 실생활 문제 상황을 통하여 들이의 덧셈과 뺄셈을 이해한다.", - "[4수03-08] 무게를 나타내는 표준 단위의 필요성을 인식하여 1g과 1kg의 단위를 알고, 이를 이용하여 무게를 측정하고 어림할 수 있다.", - "[4수03-09] 1kg과 1g의 관계를 이해하고, 무게를 단명수와 복명수로 표현할 수 있다.", - "[4수03-10] 실생활에서 무게를 나타내는 새로운 단위의 필요성을 인식하여 1t의 단위를 안다.", - "[4수03-11] 실생활 문제 상황을 통하여 무게의 덧셈과 뺄셈을 이해한다.", - "[4수03-12] 각의 크기의 단위인 1도(°)를 알고, 각도기를 이용하여 각의 크기를 측정하고 어림할 수 있다.", - "[4수03-13] 주어진 각도와 크기가 같은 각을 그릴 수 있다.", - "[4수03-14] 여러 가지 방법으로 삼각형과 사각형의 내각의 크기의 합을 추론하고, 자신의 추론과정을 설명할 수 있다.", - "[4수04-01] 다양한 변화 규칙을 찾아 설명하고, 그 규칙을 수나 식으로 나타낼 수 있다.", - "[4수04-02] 규칙적인 계산식의 배열에서 계산 결과의 규칙을 찾고, 계산 결과를 추측할 수 있다.", - "[4수05-01] 실생활 자료를 수집하여 간단한 그림그래프나 막대그래프로 나타낼 수 있다.", - "[4수05-02] 연속적인 변량에 대한 자료를 수집하여 꺾은선그래프로 나타낼 수 있다.", - "[4수05-03] 여러 가지 자료를 수집, 분류, 정리하여 자료의 특성에 맞는 그래프로 나타내고, 그래프를 해석할 수 있다.", - ], - "수학": [ - "[4수01-01]10000 이상의 큰 수에 대한 자릿값과 위치적 기수법을 이해하고, 수를 읽고 쓸 수 있다.", - "[4수01-02]다섯 자리 이상의 수의 범위에서 수의 계열을 이해하고 수의 크기를 비교할 수 있다.", - "[4수01-03]세 자리 수의 덧셈과 뺄셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-04]세 자리 수의 덧셈과 뺄셈에서 계산 결과를 어림할 수 있다.", - "[4수01-05]곱하는 수가 한 자리 수 또는 두 자리 수인 곱셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-06]곱하는 수가 한 자리 수 또는 두 자리 수인 곱셈에서 계산 결과를 어림할 수 있다.", - "[4수01-07]나눗셈이 이루어지는 실생활 상황을 통하여 나눗셈의 의미를 알고, 곱셈과 나눗셈의 관계를 이해한다.", - "[4수01-08]나누는 수가 한 자리 수인 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있으며, 나눗셈에서 몫과 나머지의 의미를 안다.", - "[4수01-09]나누는 수가 두 자리 수인 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-10]양의 등분할을 통하여 분수를 이해하고 읽고 쓸 수 있다.", - "[4수01-11]단위분수, 진분수, 가분수, 대분수를 알고, 그 관계를 이해한다.", - "[4수01-12]분모가 같은 분수끼리, 단위분수끼리 크기를 비교할 수 있다.", - "[4수01-13]분모가 10인 진분수를 통하여 소수 한 자리 수를 이해하고 읽고 쓸 수 있다.", - "[4수01-14]자릿값의 원리를 바탕으로 소수 두 자리 수와 소수 세 자리 수를 이해하고 읽고 쓸 수 있다.", - "[4수01-15]소수의 크기를 비교할 수 있다.", - "[4수01-16]분모가 같은 분수의 덧셈과 뺄셈의 계산 원리를 이해하고, 그 계산을 할 수 있다.", - "[4수01-17]소수 두 자리 수의 범위에서 소수의 덧셈과 뺄셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수02-01]직선, 선분, 반직선을 알고 구별할 수 있다.", - "[4수02-02]각과 직각을 이해하고, 직각과 비교하는 활동을 통해 예각과 둔각을 구별할 수 있다.", - "[4수02-03]교실 및 생활 주변에서 직각인 곳이나 서로 만나지 않는 직선을 찾는 활동을 통하여 직선의 수직 관계와 평행 관계를 이해한다.", - "[4수02-04]구체물이나 평면도형의 밀기, 뒤집기, 돌리기 활동을 통하여 그 변화를 이해한다.", - "[4수02-05]평면도형의 이동을 이용하여 규칙적인 무늬를 꾸밀 수 있다.", - "[4수02-06]원의 중심, 반지름, 지름을 알고, 그 관계를 이해한다.", - "[4수02-07]컴퍼스를 이용하여 여러 가지 크기의 원을 그려서 다양한 모양을 꾸밀 수 있다.", - "[4수02-08]여러 가지 모양의 삼각형에 대한 분류 활동을 통하여 이등변삼각형, 정삼각형을 이해한다.", - "[4수02-09]여러 가지 모양의 삼각형에 대한 분류 활동을 통하여 직각삼각형, 예각삼각형, 둔각삼각형을 이해한다.", - "[4수02-10]여러 가지 모양의 사각형에 대한 분류 활동을 통하여 직사각형, 정사각형, 사다리꼴, 평행사변형, 마름모를 알고, 그 성질을 이해한다.", - "[4수02-11]다각형과 정다각형의 의미를 안다.", - "[4수02-12]주어진 도형을 이용하여 여러 가지 모양을 만들거나 채울 수 있다.", - "[4수03-01]1분은 60초임을 알고, 초 단위까지 시각을 읽을 수 있다.", - "[4수03-02]초 단위까지의 시간의 덧셈과 뺄셈을 할 수 있다.", - "[4수03-03]길이를 나타내는 새로운 단위의 필요성을 인식하여 1mm와 1km의 단위를 알고, 이를 이용하여 길이를 측정하고 어림할 수 있다.", - "[4수03-04]1cm와 1mm, 1km와 1m의 관계를 이해하고, 길이를 단명수와 복명수로 표현할 수 있다.", - "[4수03-05]들이를 나타내는 표준 단위의 필요성을 인식하여 1L와 1mL의 단위를 알고, 이를 이용하여 들이를 측정하고 어림할 수 있다.", - "[4수03-06]1L와 1mL의 관계를 이해하고, 들이를 단명수와 복명수로 표현할 수 있다.", - "[4수03-07]실생활 문제 상황을 통하여 들이의 덧셈과 뺄셈을 이해한다.", - "[4수03-08]무게를 나타내는 표준 단위의 필요성을 인식하여 1g과 1kg의 단위를 알고, 이를 이용하여 무게를 측정하고 어림할 수 있다.", - "[4수03-09]1kg과 1g의 관계를 이해하고, 무게를 단명수와 복명수로 표현할 수 있다.", - "[4수03-10]실생활에서 무게를 나타내는 새로운 단위의 필요성을 인식하여 1t의 단위를 안다.", - "[4수03-11]실생활 문제 상황을 통하여 무게의 덧셈과 뺄셈을 이해한다.", - "[4수03-12]각의 크기의 단위인 1도(°)를 알고, 각도기를 이용하여 각의 크기를 측정하고 어림할 수 있다.", - 
"[4수03-13]주어진 각도와 크기가 같은 각을 그릴 수 있다.", - "[4수03-14]여러 가지 방법으로 삼각형과 사각형의 내각의 크기의 합을 추론하고, 자신의 추론과정을 설명할 수 있다.", - "[4수04-01]다양한 변화 규칙을 찾아 설명하고, 그 규칙을 수나 식으로 나타낼 수 있다.", - "[4수04-02]규칙적인 계산식의 배열에서 계산 결과의 규칙을 찾고, 계산 결과를 추측할 수 있다.", - "[4수05-01]실생활 자료를 수집하여 간단한 그림그래프나 막대그래프로 나타낼 수 있다.", - "[4수05-02]연속적인 변량에 대한 자료를 수집하여 꺾은선그래프로 나타낼 수 있다.", - "[4수05-03]여러 가지 자료를 수집, 분류, 정리하여 자료의 특성에 맞는 그래프로 나타내고, 그래프를 해석할 수 있다.", - "[4수01-01]10000 이상의 큰 수에 대한 자릿값과 위치적 기수법을 이해하고, 수를 읽고 쓸 수 있다.", - "[4수01-02]다섯 자리 이상의 수의 범위에서 수의 계열을 이해하고 수의 크기를 비교할 수 있다.", - "[4수01-03]세 자리 수의 덧셈과 뺄셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-04]세 자리 수의 덧셈과 뺄셈에서 계산 결과를 어림할 수 있다.", - "[4수01-05]곱하는 수가 한 자리 수 또는 두 자리 수인 곱셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-06]곱하는 수가 한 자리 수 또는 두 자리 수인 곱셈에서 계산 결과를 어림할 수 있다.", - "[4수01-07]나눗셈이 이루어지는 실생활 상황을 통하여 나눗셈의 의미를 알고, 곱셈과 나눗셈의 관계를 이해한다.", - "[4수01-08]나누는 수가 한 자리 수인 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있으며, 나눗셈에서 몫과 나머지의 의미를 안다.", - "[4수01-09]나누는 수가 두 자리 수인 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-10]양의 등분할을 통하여 분수를 이해하고 읽고 쓸 수 있다.", - "[4수01-11]단위분수, 진분수, 가분수, 대분수를 알고, 그 관계를 이해한다.", - "[4수01-12]분모가 같은 분수끼리, 단위분수끼리 크기를 비교할 수 있다.", - "[4수01-13]분모가 10인 진분수를 통하여 소수 한 자리 수를 이해하고 읽고 쓸 수 있다.", - "[4수01-14]자릿값의 원리를 바탕으로 소수 두 자리 수와 소수 세 자리 수를 이해하고 읽고 쓸 수 있다.", - "[4수01-15]소수의 크기를 비교할 수 있다.", - "[4수01-16]분모가 같은 분수의 덧셈과 뺄셈의 계산 원리를 이해하고, 그 계산을 할 수 있다.", - "[4수01-17]소수 두 자리 수의 범위에서 소수의 덧셈과 뺄셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수02-01]직선, 선분, 반직선을 알고 구별할 수 있다.", - "[4수02-02]각과 직각을 이해하고, 직각과 비교하는 활동을 통해 예각과 둔각을 구별할 수 있다.", - "[4수02-03]교실 및 생활 주변에서 직각인 곳이나 서로 만나지 않는 직선을 찾는 활동을 통하여 직선의 수직 관계와 평행 관계를 이해한다.", - "[4수02-04]구체물이나 평면도형의 밀기, 뒤집기, 돌리기 활동을 통하여 그 변화를 이해한다.", - "[4수02-05]평면도형의 이동을 이용하여 규칙적인 무늬를 꾸밀 수 있다.", - "[4수02-06]원의 중심, 반지름, 지름을 알고, 그 관계를 이해한다.", - "[4수02-07]컴퍼스를 이용하여 여러 가지 크기의 원을 그려서 다양한 모양을 꾸밀 수 있다.", - "[4수02-08]여러 가지 모양의 삼각형에 대한 분류 활동을 통하여 이등변삼각형, 정삼각형을 이해한다.", - "[4수02-09]여러 가지 모양의 삼각형에 대한 분류 활동을 통하여 직각삼각형, 예각삼각형, 둔각삼각형을 이해한다.", - "[4수02-10]여러 가지 모양의 사각형에 대한 분류 활동을 통하여 직사각형, 정사각형, 사다리꼴, 평행사변형, 마름모를 알고, 그 성질을 이해한다.", - "[4수02-11]다각형과 정다각형의 의미를 안다.", - "[4수02-12]주어진 도형을 이용하여 여러 가지 모양을 만들거나 채울 수 있다.", - "[4수03-01]1분은 60초임을 알고, 초 단위까지 시각을 읽을 수 있다.", - "[4수03-02]초 단위까지의 시간의 덧셈과 뺄셈을 할 수 있다.", - "[4수03-03]길이를 나타내는 새로운 단위의 필요성을 인식하여 1mm와 1km의 단위를 알고, 이를 이용하여 길이를 측정하고 어림할 수 있다.", - "[4수03-04]1cm와 1mm, 1km와 1m의 관계를 이해하고, 길이를 단명수와 복명수로 표현할 수 있다.", - "[4수03-05]들이를 나타내는 표준 단위의 필요성을 인식하여 1L와 1mL의 단위를 알고, 이를 이용하여 들이를 측정하고 어림할 수 있다.", - "[4수03-06]1L와 1mL의 관계를 이해하고, 들이를 단명수와 복명수로 표현할 수 있다.", - "[4수03-07]실생활 문제 상황을 통하여 들이의 덧셈과 뺄셈을 이해한다.", - "[4수03-08]무게를 나타내는 표준 단위의 필요성을 인식하여 1g과 1kg의 단위를 알고, 이를 이용하여 무게를 측정하고 어림할 수 있다.", - "[4수03-09]1kg과 1g의 관계를 이해하고, 무게를 단명수와 복명수로 표현할 수 있다.", - "[4수03-10]실생활에서 무게를 나타내는 새로운 단위의 필요성을 인식하여 1t의 단위를 안다.", - "[4수03-11]실생활 문제 상황을 통하여 무게의 덧셈과 뺄셈을 이해한다.", - "[4수03-12]각의 크기의 단위인 1도(°)를 알고, 각도기를 이용하여 각의 크기를 측정하고 어림할 수 있다.", - "[4수03-13]주어진 각도와 크기가 같은 각을 그릴 수 있다.", - "[4수03-14]여러 가지 방법으로 삼각형과 사각형의 내각의 크기의 합을 추론하고, 자신의 추론과정을 설명할 수 있다.", - "[4수04-01]다양한 변화 규칙을 찾아 설명하고, 그 규칙을 수나 식으로 나타낼 수 있다.", - "[4수04-02]규칙적인 계산식의 배열에서 계산 결과의 규칙을 찾고, 계산 결과를 추측할 수 있다.", - "[4수05-01]실생활 자료를 수집하여 간단한 그림그래프나 막대그래프로 나타낼 수 있다.", - "[4수05-02]연속적인 변량에 대한 자료를 수집하여 꺾은선그래프로 나타낼 수 있다.", - "[4수05-03]여러 가지 자료를 수집, 분류, 정리하여 자료의 특성에 맞는 그래프로 나타내고, 그래프를 해석할 수 있다.", - "[4수01-01]10000 이상의 큰 수에 대한 자릿값과 위치적 기수법을 이해하고, 수를 읽고 쓸 수 있다.", - "[4수01-02]다섯 자리 이상의 수의 범위에서 수의 계열을 이해하고 수의 크기를 비교할 수 있다.", - "[4수01-03]세 자리 수의 덧셈과 뺄셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-04]세 자리 수의 덧셈과 뺄셈에서 계산 결과를 어림할 수 있다.", - "[4수01-05]곱하는 수가 한 자리 수 또는 두 자리 수인 곱셈의 계산 원리를 
이해하고 그 계산을 할 수 있다.", - "[4수01-06]곱하는 수가 한 자리 수 또는 두 자리 수인 곱셈에서 계산 결과를 어림할 수 있다.", - "[4수01-07]나눗셈이 이루어지는 실생활 상황을 통하여 나눗셈의 의미를 알고, 곱셈과 나눗셈의 관계를 이해한다.", - "[4수01-08]나누는 수가 한 자리 수인 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있으며, 나눗셈에서 몫과 나머지의 의미를 안다.", - "[4수01-09]나누는 수가 두 자리 수인 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-10]양의 등분할을 통하여 분수를 이해하고 읽고 쓸 수 있다.", - "[4수01-11]단위분수, 진분수, 가분수, 대분수를 알고, 그 관계를 이해한다.", - "[4수01-12]분모가 같은 분수끼리, 단위분수끼리 크기를 비교할 수 있다.", - "[4수01-13]분모가 10인 진분수를 통하여 소수 한 자리 수를 이해하고 읽고 쓸 수 있다.", - "[4수01-14]자릿값의 원리를 바탕으로 소수 두 자리 수와 소수 세 자리 수를 이해하고 읽고 쓸 수 있다.", - "[4수01-15]소수의 크기를 비교할 수 있다.", - "[4수01-16]분모가 같은 분수의 덧셈과 뺄셈의 계산 원리를 이해하고, 그 계산을 할 수 있다.", - "[4수01-17]소수 두 자리 수의 범위에서 소수의 덧셈과 뺄셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수02-01]직선, 선분, 반직선을 알고 구별할 수 있다.", - "[4수02-02]각과 직각을 이해하고, 직각과 비교하는 활동을 통해 예각과 둔각을 구별할 수 있다.", - "[4수02-03]교실 및 생활 주변에서 직각인 곳이나 서로 만나지 않는 직선을 찾는 활동을 통하여 직선의 수직 관계와 평행 관계를 이해한다.", - "[4수02-04]구체물이나 평면도형의 밀기, 뒤집기, 돌리기 활동을 통하여 그 변화를 이해한다.", - "[4수02-05]평면도형의 이동을 이용하여 규칙적인 무늬를 꾸밀 수 있다.", - "[4수02-06]원의 중심, 반지름, 지름을 알고, 그 관계를 이해한다.", - "[4수02-07]컴퍼스를 이용하여 여러 가지 크기의 원을 그려서 다양한 모양을 꾸밀 수 있다.", - "[4수02-08]여러 가지 모양의 삼각형에 대한 분류 활동을 통하여 이등변삼각형, 정삼각형을 이해한다.", - "[4수02-09]여러 가지 모양의 삼각형에 대한 분류 활동을 통하여 직각삼각형, 예각삼각형, 둔각삼각형을 이해한다.", - "[4수02-10]여러 가지 모양의 사각형에 대한 분류 활동을 통하여 직사각형, 정사각형, 사다리꼴, 평행사변형, 마름모를 알고, 그 성질을 이해한다.", - "[4수02-11]다각형과 정다각형의 의미를 안다.", - "[4수02-12]주어진 도형을 이용하여 여러 가지 모양을 만들거나 채울 수 있다.", - "[4수03-01]1분은 60초임을 알고, 초 단위까지 시각을 읽을 수 있다.", - "[4수03-02]초 단위까지의 시간의 덧셈과 뺄셈을 할 수 있다.", - "[4수03-03]길이를 나타내는 새로운 단위의 필요성을 인식하여 1mm와 1km의 단위를 알고, 이를 이용하여 길이를 측정하고 어림할 수 있다.", - "[4수03-04]1cm와 1mm, 1km와 1m의 관계를 이해하고, 길이를 단명수와 복명수로 표현할 수 있다.", - "[4수03-05]들이를 나타내는 표준 단위의 필요성을 인식하여 1L와 1mL의 단위를 알고, 이를 이용하여 들이를 측정하고 어림할 수 있다.", - "[4수03-06]1L와 1mL의 관계를 이해하고, 들이를 단명수와 복명수로 표현할 수 있다.", - "[4수03-07]실생활 문제 상황을 통하여 들이의 덧셈과 뺄셈을 이해한다.", - "[4수03-08]무게를 나타내는 표준 단위의 필요성을 인식하여 1g과 1kg의 단위를 알고, 이를 이용하여 무게를 측정하고 어림할 수 있다.", - "[4수03-09]1kg과 1g의 관계를 이해하고, 무게를 단명수와 복명수로 표현할 수 있다.", - "[4수03-10]실생활에서 무게를 나타내는 새로운 단위의 필요성을 인식하여 1t의 단위를 안다.", - "[4수03-11]실생활 문제 상황을 통하여 무게의 덧셈과 뺄셈을 이해한다.", - "[4수03-12]각의 크기의 단위인 1도(°)를 알고, 각도기를 이용하여 각의 크기를 측정하고 어림할 수 있다.", - "[4수03-13]주어진 각도와 크기가 같은 각을 그릴 수 있다.", - "[4수03-14]여러 가지 방법으로 삼각형과 사각형의 내각의 크기의 합을 추론하고, 자신의 추론과정을 설명할 수 있다.", - "[4수04-01]다양한 변화 규칙을 찾아 설명하고, 그 규칙을 수나 식으로 나타낼 수 있다.", - "[4수04-02]규칙적인 계산식의 배열에서 계산 결과의 규칙을 찾고, 계산 결과를 추측할 수 있다.", - "[4수05-01]실생활 자료를 수집하여 간단한 그림그래프나 막대그래프로 나타낼 수 있다.", - "[4수05-02]연속적인 변량에 대한 자료를 수집하여 꺾은선그래프로 나타낼 수 있다.", - "[4수05-03]여러 가지 자료를 수집, 분류, 정리하여 자료의 특성에 맞는 그래프로 나타내고, 그래프를 해석할 수 있다.", - "[4수01-01]10000 이상의 큰 수에 대한 자릿값과 위치적 기수법을 이해하고, 수를 읽고 쓸 수 있다.", - "[4수01-02]다섯 자리 이상의 수의 범위에서 수의 계열을 이해하고 수의 크기를 비교할 수 있다.", - "[4수01-03]세 자리 수의 덧셈과 뺄셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-04]세 자리 수의 덧셈과 뺄셈에서 계산 결과를 어림할 수 있다.", - "[4수01-05]곱하는 수가 한 자리 수 또는 두 자리 수인 곱셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-06]곱하는 수가 한 자리 수 또는 두 자리 수인 곱셈에서 계산 결과를 어림할 수 있다.", - "[4수01-07]나눗셈이 이루어지는 실생활 상황을 통하여 나눗셈의 의미를 알고, 곱셈과 나눗셈의 관계를 이해한다.", - "[4수01-08]나누는 수가 한 자리 수인 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있으며, 나눗셈에서 몫과 나머지의 의미를 안다.", - "[4수01-09]나누는 수가 두 자리 수인 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수01-10]양의 등분할을 통하여 분수를 이해하고 읽고 쓸 수 있다.", - "[4수01-11]단위분수, 진분수, 가분수, 대분수를 알고, 그 관계를 이해한다.", - "[4수01-12]분모가 같은 분수끼리, 단위분수끼리 크기를 비교할 수 있다.", - "[4수01-13]분모가 10인 진분수를 통하여 소수 한 자리 수를 이해하고 읽고 쓸 수 있다.", - "[4수01-14]자릿값의 원리를 바탕으로 소수 두 자리 수와 소수 세 자리 수를 이해하고 읽고 쓸 수 있다.", - "[4수01-15]소수의 크기를 비교할 수 있다.", - "[4수01-16]분모가 같은 분수의 덧셈과 뺄셈의 계산 원리를 이해하고, 그 계산을 할 수 있다.", - "[4수01-17]소수 두 자리 수의 범위에서 소수의 덧셈과 
뺄셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[4수02-01]직선, 선분, 반직선을 알고 구별할 수 있다.", - "[4수02-02]각과 직각을 이해하고, 직각과 비교하는 활동을 통해 예각과 둔각을 구별할 수 있다.", - "[4수02-03]교실 및 생활 주변에서 직각인 곳이나 서로 만나지 않는 직선을 찾는 활동을 통하여 직선의 수직 관계와 평행 관계를 이해한다.", - "[4수02-04]구체물이나 평면도형의 밀기, 뒤집기, 돌리기 활동을 통하여 그 변화를 이해한다.", - "[4수02-05]평면도형의 이동을 이용하여 규칙적인 무늬를 꾸밀 수 있다.", - "[4수02-06]원의 중심, 반지름, 지름을 알고, 그 관계를 이해한다.", - "[4수02-07]컴퍼스를 이용하여 여러 가지 크기의 원을 그려서 다양한 모양을 꾸밀 수 있다.", - "[4수02-08]여러 가지 모양의 삼각형에 대한 분류 활동을 통하여 이등변삼각형, 정삼각형을 이해한다.", - "[4수02-09]여러 가지 모양의 삼각형에 대한 분류 활동을 통하여 직각삼각형, 예각삼각형, 둔각삼각형을 이해한다.", - "[4수02-10]여러 가지 모양의 사각형에 대한 분류 활동을 통하여 직사각형, 정사각형, 사다리꼴, 평행사변형, 마름모를 알고, 그 성질을 이해한다.", - "[4수02-11]다각형과 정다각형의 의미를 안다.", - "[4수02-12]주어진 도형을 이용하여 여러 가지 모양을 만들거나 채울 수 있다.", - "[4수03-01]1분은 60초임을 알고, 초 단위까지 시각을 읽을 수 있다.", - "[4수03-02]초 단위까지의 시간의 덧셈과 뺄셈을 할 수 있다.", - "[4수03-03]길이를 나타내는 새로운 단위의 필요성을 인식하여 1mm와 1km의 단위를 알고, 이를 이용하여 길이를 측정하고 어림할 수 있다.", - "[4수03-04]1cm와 1mm, 1km와 1m의 관계를 이해하고, 길이를 단명수와 복명수로 표현할 수 있다.", - "[4수03-05]들이를 나타내는 표준 단위의 필요성을 인식하여 1L와 1mL의 단위를 알고, 이를 이용하여 들이를 측정하고 어림할 수 있다.", - "[4수03-06]1L와 1mL의 관계를 이해하고, 들이를 단명수와 복명수로 표현할 수 있다.", - "[4수03-07]실생활 문제 상황을 통하여 들이의 덧셈과 뺄셈을 이해한다.", - "[4수03-08]무게를 나타내는 표준 단위의 필요성을 인식하여 1g과 1kg의 단위를 알고, 이를 이용하여 무게를 측정하고 어림할 수 있다.", - "[4수03-09]1kg과 1g의 관계를 이해하고, 무게를 단명수와 복명수로 표현할 수 있다.", - "[4수03-10]실생활에서 무게를 나타내는 새로운 단위의 필요성을 인식하여 1t의 단위를 안다.", - "[4수03-11]실생활 문제 상황을 통하여 무게의 덧셈과 뺄셈을 이해한다.", - "[4수03-12]각의 크기의 단위인 1도(°)를 알고, 각도기를 이용하여 각의 크기를 측정하고 어림할 수 있다.", - "[4수03-13]주어진 각도와 크기가 같은 각을 그릴 수 있다.", - "[4수03-14]여러 가지 방법으로 삼각형과 사각형의 내각의 크기의 합을 추론하고, 자신의 추론과정을 설명할 수 있다.", - "[4수04-01]다양한 변화 규칙을 찾아 설명하고, 그 규칙을 수나 식으로 나타낼 수 있다.", - "[4수04-02]규칙적인 계산식의 배열에서 계산 결과의 규칙을 찾고, 계산 결과를 추측할 수 있다.", - "[4수05-01]실생활 자료를 수집하여 간단한 그림그래프나 막대그래프로 나타낼 수 있다.", - "[4수05-02]연속적인 변량에 대한 자료를 수집하여 꺾은선그래프로 나타낼 수 있다.", - "[4수05-03]여러 가지 자료를 수집, 분류, 정리하여 자료의 특성에 맞는 그래프로 나타내고, 그래프를 해석할 수 있다." 
- ], - "사회": [ - "[4사01-01]우리 마을 또는 고장의 모습을 자유롭게 그려 보고, 서로 비교하여 공통점과 차이점을 찾아 고장에 대한 서로 다른 장소감을 탐색한다.", - "[4사01-02]디지털 영상 지도 등을 활용하여 주요 지형지물들의 위치를 파악하고, 백지도에 다시 배치하는 활동을 통하여 마을 또는 고장의 실제 모습을 익힌다.", - "[4사01-03]고장과 관련된 옛이야기를 통하여 고장의 역사적인 유래와 특징을 설명한다.", - "[4사01-04]고장에 전해 내려오는 대표적인 문화유산을 살펴보고 고장에 대한 자긍심을 기른다.", - "[4사01-05]옛날과 오늘날의 교통수단에 관한 자료를 바탕으로 하여 교통수단의 발달에 따른 생활 모습의 변화를 설명한다.", - "[4사01-06]옛날과 오늘날의 통신수단에 관한 자료를 바탕으로 하여 통신수단의 발달에 따른 생활 모습의 변화를 설명한다.", - "[4사02-01]우리 고장의 지리적 특성을 조사하고, 이것이 고장 사람들의 생활 모습에 미치는 영향을 탐구한다.", - "[4사02-02]우리 고장과 다른 고장 사람들의 의식주 생활 모습을 비교하여, 환경의 차이에 따른 생활 모습의 다양성을 탐구한다.", - "[4사02-03]옛 사람들의 생활 도구나 주거 형태를 알아보고, 오늘날의 생활 모습과 비교하여 그 변화상을 탐색한다.", - "[4사02-04]옛날의 세시 풍속을 알아보고, 오늘날의 변화상을 탐색하여 공통점과 차이점을 분석한다.", - "[4사02-05]옛날과 오늘날의 혼인 풍습과 가족 구성을 비교하고, 시대별 가족의 모습과 가족 구성원의 역할 변화를 탐색한다.", - "[4사02-06]현대의 여러 가지 가족 형태를 조사하여 가족의 다양한 삶의 모습을 존중하는 태도를 기른다.", - "[4사03-01]지도의 기본 요소에 대한 이해를 바탕으로 하여 우리 지역 지도에 나타난 지리 정보를 실제 생활에 활용한다.", - "[4사03-02]고장 사람들의 생활과 밀접하게 관련이 있는 지역의 다양한 중심지(행정, 교통, 상업, 산업, 관광 등)를 조사하고, 각 중심지의 위치, 기능, 경관의 특성을 탐색한다.", - "[4사03-03]우리 지역을 대표하는 유・무형의 문화유산을 알아보고, 지역의 문화유산을 소중히 여기는 태도를 갖는다.", - "[4사03-04]우리 지역과 관련된 역사적 인물의 삶을 알아보고, 지역의 역사에 대해 자부심을 갖는다.", - "[4사03-05]우리 지역에 있는 공공 기관의 종류와 역할을 조사하고, 공공 기관이 지역 주민들의 생활에 주는 도움을 탐색한다.", - "[4사03-06]주민 참여를 통해 지역 문제를 해결하는 방안을 살펴보고, 지역 문제의 해결에 참여하는 태도를 기른다.", - "[4사04-01]촌락과 도시의 공통점과 차이점을 비교하고, 각각에서 나타나는 문제점과 해결 방안을 탐색한다.", - "[4사04-02]촌락과 도시 사이에 이루어지는 다양한 교류를 조사하고, 이들 사이의 상호 의존 관계를 탐구한다.", - "[4사04-03]자원의 희소성으로 경제활동에서 선택의 문제가 발생함을 파악하고, 시장을 중심으로 이루어지는 생산, 소비 등 경제활동을 설명한다.", - "[4사04-04]우리 지역과 다른 지역의 물자 교환 및 교류 사례를 조사하여, 지역 간 경제활동이 밀접하게 관련되어 있음을 탐구한다.", - "[4사04-05]사회 변화(저출산・고령화, 정보화, 세계화 등)로 나타난 일상생활의 모습을 조사하고, 그 특징을 분석한다.", - "[4사04-06]우리 사회에 다양한 문화가 확산되면서 생기는 문제(편견, 차별 등)및 해결 방안을 탐구하고, 다른 문화를 존중하는 태도를 기른다.", - "[4사01-01]우리 마을 또는 고장의 모습을 자유롭게 그려 보고, 서로 비교하여 공통점과 차이점을 찾아 고장에 대한 서로 다른 장소감을 탐색한다.", - "[4사01-02]디지털 영상 지도 등을 활용하여 주요 지형지물들의 위치를 파악하고, 백지도에 다시 배치하는 활동을 통하여 마을 또는 고장의 실제 모습을 익힌다.", - "[4사01-03]고장과 관련된 옛이야기를 통하여 고장의 역사적인 유래와 특징을 설명한다.", - "[4사01-04]고장에 전해 내려오는 대표적인 문화유산을 살펴보고 고장에 대한 자긍심을 기른다.", - "[4사01-05]옛날과 오늘날의 교통수단에 관한 자료를 바탕으로 하여 교통수단의 발달에 따른 생활 모습의 변화를 설명한다.", - "[4사01-06]옛날과 오늘날의 통신수단에 관한 자료를 바탕으로 하여 통신수단의 발달에 따른 생활 모습의 변화를 설명한다.", - "[4사02-01]우리 고장의 지리적 특성을 조사하고, 이것이 고장 사람들의 생활 모습에 미치는 영향을 탐구한다.", - "[4사02-02]우리 고장과 다른 고장 사람들의 의식주 생활 모습을 비교하여, 환경의 차이에 따른 생활 모습의 다양성을 탐구한다.", - "[4사02-03]옛 사람들의 생활 도구나 주거 형태를 알아보고, 오늘날의 생활 모습과 비교하여 그 변화상을 탐색한다.", - "[4사02-04]옛날의 세시 풍속을 알아보고, 오늘날의 변화상을 탐색하여 공통점과 차이점을 분석한다.", - "[4사02-05]옛날과 오늘날의 혼인 풍습과 가족 구성을 비교하고, 시대별 가족의 모습과 가족 구성원의 역할 변화를 탐색한다.", - "[4사02-06]현대의 여러 가지 가족 형태를 조사하여 가족의 다양한 삶의 모습을 존중하는 태도를 기른다.", - "[4사03-01]지도의 기본 요소에 대한 이해를 바탕으로 하여 우리 지역 지도에 나타난 지리 정보를 실제 생활에 활용한다.", - "[4사03-02]고장 사람들의 생활과 밀접하게 관련이 있는 지역의 다양한 중심지(행정, 교통, 상업, 산업, 관광 등)를 조사하고, 각 중심지의 위치, 기능, 경관의 특성을 탐색한다.", - "[4사03-03]우리 지역을 대표하는 유・무형의 문화유산을 알아보고, 지역의 문화유산을 소중히 여기는 태도를 갖는다.", - "[4사03-04]우리 지역과 관련된 역사적 인물의 삶을 알아보고, 지역의 역사에 대해 자부심을 갖는다.", - "[4사03-05]우리 지역에 있는 공공 기관의 종류와 역할을 조사하고, 공공 기관이 지역 주민들의 생활에 주는 도움을 탐색한다.", - "[4사03-06]주민 참여를 통해 지역 문제를 해결하는 방안을 살펴보고, 지역 문제의 해결에 참여하는 태도를 기른다.", - "[4사04-01]촌락과 도시의 공통점과 차이점을 비교하고, 각각에서 나타나는 문제점과 해결 방안을 탐색한다.", - "[4사04-02]촌락과 도시 사이에 이루어지는 다양한 교류를 조사하고, 이들 사이의 상호 의존 관계를 탐구한다.", - "[4사04-03]자원의 희소성으로 경제활동에서 선택의 문제가 발생함을 파악하고, 시장을 중심으로 이루어지는 생산, 소비 등 경제활동을 설명한다.", - "[4사04-04]우리 지역과 다른 지역의 물자 교환 및 교류 사례를 조사하여, 지역 간 경제활동이 밀접하게 관련되어 있음을 탐구한다.", - "[4사04-05]사회 변화(저출산・고령화, 정보화, 세계화 등)로 나타난 일상생활의 모습을 조사하고, 그 특징을 분석한다.", - "[4사04-06]우리 사회에 다양한 문화가 확산되면서 생기는 문제(편견, 차별 
등)및 해결 방안을 탐구하고, 다른 문화를 존중하는 태도를 기른다.", - "[4사01-01]우리 마을 또는 고장의 모습을 자유롭게 그려 보고, 서로 비교하여 공통점과 차이점을 찾아 고장에 대한 서로 다른 장소감을 탐색한다.", - "[4사01-02]디지털 영상 지도 등을 활용하여 주요 지형지물들의 위치를 파악하고, 백지도에 다시 배치하는 활동을 통하여 마을 또는 고장의 실제 모습을 익힌다.", - "[4사01-03]고장과 관련된 옛이야기를 통하여 고장의 역사적인 유래와 특징을 설명한다.", - "[4사01-04]고장에 전해 내려오는 대표적인 문화유산을 살펴보고 고장에 대한 자긍심을 기른다.", - "[4사01-05]옛날과 오늘날의 교통수단에 관한 자료를 바탕으로 하여 교통수단의 발달에 따른 생활 모습의 변화를 설명한다.", - "[4사01-06]옛날과 오늘날의 통신수단에 관한 자료를 바탕으로 하여 통신수단의 발달에 따른 생활 모습의 변화를 설명한다.", - "[4사02-01]우리 고장의 지리적 특성을 조사하고, 이것이 고장 사람들의 생활 모습에 미치는 영향을 탐구한다.", - "[4사02-02]우리 고장과 다른 고장 사람들의 의식주 생활 모습을 비교하여, 환경의 차이에 따른 생활 모습의 다양성을 탐구한다.", - "[4사02-03]옛 사람들의 생활 도구나 주거 형태를 알아보고, 오늘날의 생활 모습과 비교하여 그 변화상을 탐색한다.", - "[4사02-04]옛날의 세시 풍속을 알아보고, 오늘날의 변화상을 탐색하여 공통점과 차이점을 분석한다.", - "[4사02-05]옛날과 오늘날의 혼인 풍습과 가족 구성을 비교하고, 시대별 가족의 모습과 가족 구성원의 역할 변화를 탐색한다.", - "[4사02-06]현대의 여러 가지 가족 형태를 조사하여 가족의 다양한 삶의 모습을 존중하는 태도를 기른다.", - "[4사03-01]지도의 기본 요소에 대한 이해를 바탕으로 하여 우리 지역 지도에 나타난 지리 정보를 실제 생활에 활용한다.", - "[4사03-02]고장 사람들의 생활과 밀접하게 관련이 있는 지역의 다양한 중심지(행정, 교통, 상업, 산업, 관광 등)를 조사하고, 각 중심지의 위치, 기능, 경관의 특성을 탐색한다.", - "[4사03-03]우리 지역을 대표하는 유・무형의 문화유산을 알아보고, 지역의 문화유산을 소중히 여기는 태도를 갖는다.", - "[4사03-04]우리 지역과 관련된 역사적 인물의 삶을 알아보고, 지역의 역사에 대해 자부심을 갖는다.", - "[4사03-05]우리 지역에 있는 공공 기관의 종류와 역할을 조사하고, 공공 기관이 지역 주민들의 생활에 주는 도움을 탐색한다.", - "[4사03-06]주민 참여를 통해 지역 문제를 해결하는 방안을 살펴보고, 지역 문제의 해결에 참여하는 태도를 기른다.", - "[4사04-01]촌락과 도시의 공통점과 차이점을 비교하고, 각각에서 나타나는 문제점과 해결 방안을 탐색한다.", - "[4사04-02]촌락과 도시 사이에 이루어지는 다양한 교류를 조사하고, 이들 사이의 상호 의존 관계를 탐구한다.", - "[4사04-03]자원의 희소성으로 경제활동에서 선택의 문제가 발생함을 파악하고, 시장을 중심으로 이루어지는 생산, 소비 등 경제활동을 설명한다.", - "[4사04-04]우리 지역과 다른 지역의 물자 교환 및 교류 사례를 조사하여, 지역 간 경제활동이 밀접하게 관련되어 있음을 탐구한다.", - "[4사04-05]사회 변화(저출산・고령화, 정보화, 세계화 등)로 나타난 일상생활의 모습을 조사하고, 그 특징을 분석한다.", - "[4사04-06]우리 사회에 다양한 문화가 확산되면서 생기는 문제(편견, 차별 등)및 해결 방안을 탐구하고, 다른 문화를 존중하는 태도를 기른다.", - "[4사01-01]우리 마을 또는 고장의 모습을 자유롭게 그려 보고, 서로 비교하여 공통점과 차이점을 찾아 고장에 대한 서로 다른 장소감을 탐색한다.", - "[4사01-02]디지털 영상 지도 등을 활용하여 주요 지형지물들의 위치를 파악하고, 백지도에 다시 배치하는 활동을 통하여 마을 또는 고장의 실제 모습을 익힌다.", - "[4사01-03]고장과 관련된 옛이야기를 통하여 고장의 역사적인 유래와 특징을 설명한다.", - "[4사01-04]고장에 전해 내려오는 대표적인 문화유산을 살펴보고 고장에 대한 자긍심을 기른다.", - "[4사01-05]옛날과 오늘날의 교통수단에 관한 자료를 바탕으로 하여 교통수단의 발달에 따른 생활 모습의 변화를 설명한다.", - "[4사01-06]옛날과 오늘날의 통신수단에 관한 자료를 바탕으로 하여 통신수단의 발달에 따른 생활 모습의 변화를 설명한다.", - "[4사02-01]우리 고장의 지리적 특성을 조사하고, 이것이 고장 사람들의 생활 모습에 미치는 영향을 탐구한다.", - "[4사02-02]우리 고장과 다른 고장 사람들의 의식주 생활 모습을 비교하여, 환경의 차이에 따른 생활 모습의 다양성을 탐구한다.", - "[4사02-03]옛 사람들의 생활 도구나 주거 형태를 알아보고, 오늘날의 생활 모습과 비교하여 그 변화상을 탐색한다.", - "[4사02-04]옛날의 세시 풍속을 알아보고, 오늘날의 변화상을 탐색하여 공통점과 차이점을 분석한다.", - "[4사02-05]옛날과 오늘날의 혼인 풍습과 가족 구성을 비교하고, 시대별 가족의 모습과 가족 구성원의 역할 변화를 탐색한다.", - "[4사02-06]현대의 여러 가지 가족 형태를 조사하여 가족의 다양한 삶의 모습을 존중하는 태도를 기른다.", - "[4사03-01]지도의 기본 요소에 대한 이해를 바탕으로 하여 우리 지역 지도에 나타난 지리 정보를 실제 생활에 활용한다.", - "[4사03-02]고장 사람들의 생활과 밀접하게 관련이 있는 지역의 다양한 중심지(행정, 교통, 상업, 산업, 관광 등)를 조사하고, 각 중심지의 위치, 기능, 경관의 특성을 탐색한다.", - "[4사03-03]우리 지역을 대표하는 유・무형의 문화유산을 알아보고, 지역의 문화유산을 소중히 여기는 태도를 갖는다.", - "[4사03-04]우리 지역과 관련된 역사적 인물의 삶을 알아보고, 지역의 역사에 대해 자부심을 갖는다.", - "[4사03-05]우리 지역에 있는 공공 기관의 종류와 역할을 조사하고, 공공 기관이 지역 주민들의 생활에 주는 도움을 탐색한다.", - "[4사03-06]주민 참여를 통해 지역 문제를 해결하는 방안을 살펴보고, 지역 문제의 해결에 참여하는 태도를 기른다.", - "[4사04-01]촌락과 도시의 공통점과 차이점을 비교하고, 각각에서 나타나는 문제점과 해결 방안을 탐색한다.", - "[4사04-02]촌락과 도시 사이에 이루어지는 다양한 교류를 조사하고, 이들 사이의 상호 의존 관계를 탐구한다.", - "[4사04-03]자원의 희소성으로 경제활동에서 선택의 문제가 발생함을 파악하고, 시장을 중심으로 이루어지는 생산, 소비 등 경제활동을 설명한다.", - "[4사04-04]우리 지역과 다른 지역의 물자 교환 및 교류 사례를 조사하여, 지역 간 경제활동이 밀접하게 관련되어 있음을 탐구한다.", - "[4사04-05]사회 변화(저출산・고령화, 정보화, 세계화 등)로 나타난 일상생활의 모습을 조사하고, 그 특징을 분석한다.", - "[4사04-06]우리 사회에 
다양한 문화가 확산되면서 생기는 문제(편견, 차별 등)및 해결 방안을 탐구하고, 다른 문화를 존중하는 태도를 기른다." - ], - "과학": [ - "[4과01-01]서로 다른 물질로 만들어진 물체들을 비교하여 물체의 기능과 물질의 성질을 관련지을 수 있다.", - "[4과01-02]크기와 모양은 같지만 서로 다른 물질로 이루어진 물체들을 관찰하여 물질의 여러 가지 성질을 비교할 수 있다.", - "[4과01-03]서로 다른 물질을 섞었을 때 물질을 섞기 전과 후의 변화를 관찰하여 어떤 성질이 달라졌는지 설명할 수 있다.", - "[4과01-04]여러 가지 물질을 선택하여 다양한 물체를 설계하고 장단점을 토의할 수 있다.", - "[4과02-01]자석 사이에 밀거나 당기는 힘이 작용하는 현상을 관찰하고 두 종류의 극을 구별할 수 있다.", - "[4과02-02]나침반의 바늘이 일정한 방향을 가리키는 성질이 있음을 관찰을 통해 설명할 수 있다.", - "[4과02-03]일상생활에서 자석이 사용되는 예를 조사하고, 자석의 성질과 관련지어 그 기능을 설명할 수 있다.", - "[4과03-01]여러 가지 동물을 관찰하여 특징에 따라 동물을 분류할 수 있다.", - "[4과03-02]동물의 생김새나 생활 방식이 환경과 관련되어 있음을 설명할 수 있다.", - "[4과03-03]동물의 특징을 모방하여 생활 속에서 활용하고 있는 사례를 발표할 수 있다.", - "[4과04-01]여러 장소의 흙을 관찰하여 비교할 수 있다.", - "[4과04-02]흙의 생성 과정을 모형을 통해 설명할 수 있다.", - "[4과04-03]강과 바닷가 주변 지형의 특징을 흐르는 물과 바닷물의 작용과 관련지을 수 있다.", - "[4과05-01]여러 가지 식물을 관찰하여 특징에 따라 식물을 분류할 수 있다.", - "[4과05-02]식물의 생김새나 생활 방식이 환경과 관련되어 있음을 설명할 수 있다.", - "[4과05-03]식물의 특징을 모방하여 생활 속에서 활용하고 있는 사례를 발표할 수 있다.", - "[4과06-01]여러 가지 지층을 관찰하고 지층의 형성 과정을 모형을 통해 설명할 수 있다.", - "[4과06-02]퇴적암을 알갱이의 크기에 따라 구분하고 퇴적암이 만들어지는 과정을 모형을 통해 설명할 수 있다.", - "[4과06-03]화석의 생성 과정을 이해하고 화석을 관찰하여 지구의 과거 생물과 환경을 추리할 수 있다.", - "[4과07-01]고체와 액체의 성질을 용기에 따른 모양과 부피 변화를 관찰하여 설명할 수 있다.", - "[4과07-02]기체가 공간을 차지하고 있음을 알아보는 실험을 할 수 있다.", - "[4과07-03]기체가 무게가 있음을 알아보는 실험을 할 수 있다.", - "[4과07-04]우리 주변의 물질을 고체, 액체,기체로 분류할 수 있다.", - "[4과08-01]여러 가지 물체에서 소리가 나는 현상을 관찰하여 소리가 나는 물체는 떨림이 있음을 설명할 수 있다.", - "[4과08-02]소리의 세기와 높낮이를 비교할 수 있다.", - "[4과08-03]여러 가지 물체를 통하여 소리가 전달되거나 반사됨을 관찰하고 소음을 줄이는 방법을 토의할 수 있다.", - "[4과09-01]일상생활에서 물체의 무게를 측정하는 예를 조사하고, 무게 측정이 필요한 이유를 설명할 수 있다.", - "[4과09-02]수평 잡기 활동을 통해 물체의 무게를 비교할 수 있다.", - "[4과09-03]용수철에 매단 물체의 무게와 용수철의 늘어난 길이의 관계를 조사하고 물체의 무게를 재는 원리를 설명할 수 있다.", - "[4과09-04]간단한 저울을 설계하여 제작하고 그 결과물을 평가할 수 있다.", - "[4과10-01]동물의 암・수에 따른 특징을 동물별로 비교해보고, 번식 과정에서 암・수의 역할이 다양함을 설명할 수 있다.", - "[4과10-02]동물의 한살이 관찰 계획을 세우고, 동물을 기르면서 한살이를 관찰하며, 관찰한 내용을 글과 그림으로 표현할 수 있다.", - "[4과10-03]여러 가지 동물의 한살이 과정을 조사하여 동물에 따라 한살이의 유형이 다양함을 설명할 수있다.", - "[4과11-01]화산 활동으로 나오는 여러 가지 물질을 설명할 수 있다.", - "[4과11-02]화성암의 생성 과정을 이해하고 화강암과 현무암의 특징을 비교할 수 있다.", - "[4과11-03]화산 활동이 우리 생활에 미치는 영향을 발표할 수 있다.", - "[4과11-04]지진 발생의 원인을 이해하고 지진이 났을 때 안전하게 대처하는 방법을 토의할 수 있다.", - "[4과12-01]일상생활에서 혼합물의 예를 찾고 혼합물 분리의 필요성을 설명할 수 있다.", - "[4과12-02]알갱이의 크기와 자석에 붙는 성질을 이용하여 고체 혼합물을 분리할 수 있다.", - "[4과12-03]거름 장치를 꾸며 물에 녹는 물질과 녹지 않는 물질의 혼합물을 분리할 수 있다.", - "[4과12-04]물을 증발시켜 물에 녹아 있는 고체 물질을 분리할 수 있다.", - "[4과13-01]씨가 싹트거나 자라는데 필요한 조건을 설명할 수 있다.", - "[4과13-02]식물의 한살이 관찰계획을 세워 식물을 기르면서 한살이를 관찰할 수 있다.", - "[4과13-03]여러 가지식물의 한살이 과정을 조사하여 식물에 따라 한살이의 유형이 다양함을 설명할 수 있다.", - "[4과14-01]물이 수증기나 얼음으로 변할 수 있음을 알고, 물이 얼 때와 얼음이 녹을 때의 부피와 무게 변화를 관찰할 수 있다.", - "[4과14-02]물이 증발할 때와 끓을 때의 변화를 관찰하여 차이점을 알고, 이와 관련된 예를 우리 주변에서 찾을 수 있다.", - "[4과14-03]수증기가 응결하는 현상을 관찰하고, 이와 관련된 예를 우리 주변에서 찾을 수 있다.", - "[4과15-01]여러 가지 물체의 그림자를 관찰하여 그림자가 생기는 원리를 설명할 수 있다.", - "[4과15-02]전등과 물체 사이의 거리에 따른 그림자의 크기 변화를 관찰하여 서술할 수 있다.", - "[4과15-03]물체와 평면거울에 비친 모습을 비교하여 거울의 성질을 설명할 수 있다.", - "[4과15-04]일상생활에서 거울을 이용하는 예를 조사하고 거울의 성질과 관련지어 그 기능을 설명할 수 있다.", - "[4과16-01]지구와 관련된 자료를 조사하여 모양과 표면의 모습을 설명할 수 있다.", - "[4과16-02]육지와 비교하여 바다의 특징을 설명할 수 있다.", - "[4과16-03]지구 주위를 둘러싸고 있는 공기의 역할을 예를 들어 설명할 수 있다.", - "[4과16-04]달을 조사하여 모양, 표면, 환경을 이해하고 지구와 달을 비교할 수 있다.", - "[4과17-01]물이 이동하거나 상태가 변하면서 순환하는 과정을 생명체, 지표면, 공기 사이에서 일어나는 다양한 현상과 관련지어 설명할 수 있다.", - "[4과17-02]물의 중요성을 알고 물 부족 현상을 해결하기 위해 창의적 방법을 활용한 사례를 조사할 수 있다.", - "[4과01-01]서로 다른 물질로 만들어진 물체들을 비교하여 물체의 기능과 물질의 성질을 관련지을 수 있다.", - "[4과01-02]크기와 모양은 같지만 서로 다른 물질로 이루어진 물체들을 관찰하여 물질의 
여러 가지 성질을 비교할 수 있다.", - "[4과01-03]서로 다른 물질을 섞었을 때 물질을 섞기 전과 후의 변화를 관찰하여 어떤 성질이 달라졌는지 설명할 수 있다.", - "[4과01-04]여러 가지 물질을 선택하여 다양한 물체를 설계하고 장단점을 토의할 수 있다.", - "[4과02-01]자석 사이에 밀거나 당기는 힘이 작용하는 현상을 관찰하고 두 종류의 극을 구별할 수 있다.", - "[4과02-02]나침반의 바늘이 일정한 방향을 가리키는 성질이 있음을 관찰을 통해 설명할 수 있다.", - "[4과02-03]일상생활에서 자석이 사용되는 예를 조사하고, 자석의 성질과 관련지어 그 기능을 설명할 수 있다.", - "[4과03-01]여러 가지 동물을 관찰하여 특징에 따라 동물을 분류할 수 있다.", - "[4과03-02]동물의 생김새나 생활 방식이 환경과 관련되어 있음을 설명할 수 있다.", - "[4과03-03]동물의 특징을 모방하여 생활 속에서 활용하고 있는 사례를 발표할 수 있다.", - "[4과04-01]여러 장소의 흙을 관찰하여 비교할 수 있다.", - "[4과04-02]흙의 생성 과정을 모형을 통해 설명할 수 있다.", - "[4과04-03]강과 바닷가 주변 지형의 특징을 흐르는 물과 바닷물의 작용과 관련지을 수 있다.", - "[4과05-01]여러 가지 식물을 관찰하여 특징에 따라 식물을 분류할 수 있다.", - "[4과05-02]식물의 생김새나 생활 방식이 환경과 관련되어 있음을 설명할 수 있다.", - "[4과05-03]식물의 특징을 모방하여 생활 속에서 활용하고 있는 사례를 발표할 수 있다.", - "[4과06-01]여러 가지 지층을 관찰하고 지층의 형성 과정을 모형을 통해 설명할 수 있다.", - "[4과06-02]퇴적암을 알갱이의 크기에 따라 구분하고 퇴적암이 만들어지는 과정을 모형을 통해 설명할 수 있다.", - "[4과06-03]화석의 생성 과정을 이해하고 화석을 관찰하여 지구의 과거 생물과 환경을 추리할 수 있다.", - "[4과07-01]고체와 액체의 성질을 용기에 따른 모양과 부피 변화를 관찰하여 설명할 수 있다.", - "[4과07-02]기체가 공간을 차지하고 있음을 알아보는 실험을 할 수 있다.", - "[4과07-03]기체가 무게가 있음을 알아보는 실험을 할 수 있다.", - "[4과07-04]우리 주변의 물질을 고체, 액체,기체로 분류할 수 있다.", - "[4과08-01]여러 가지 물체에서 소리가 나는 현상을 관찰하여 소리가 나는 물체는 떨림이 있음을 설명할 수 있다.", - "[4과08-02]소리의 세기와 높낮이를 비교할 수 있다.", - "[4과08-03]여러 가지 물체를 통하여 소리가 전달되거나 반사됨을 관찰하고 소음을 줄이는 방법을 토의할 수 있다.", - "[4과09-01]일상생활에서 물체의 무게를 측정하는 예를 조사하고, 무게 측정이 필요한 이유를 설명할 수 있다.", - "[4과09-02]수평 잡기 활동을 통해 물체의 무게를 비교할 수 있다.", - "[4과09-03]용수철에 매단 물체의 무게와 용수철의 늘어난 길이의 관계를 조사하고 물체의 무게를 재는 원리를 설명할 수 있다.", - "[4과09-04]간단한 저울을 설계하여 제작하고 그 결과물을 평가할 수 있다.", - "[4과10-01]동물의 암・수에 따른 특징을 동물별로 비교해보고, 번식 과정에서 암・수의 역할이 다양함을 설명할 수 있다.", - "[4과10-02]동물의 한살이 관찰 계획을 세우고, 동물을 기르면서 한살이를 관찰하며, 관찰한 내용을 글과 그림으로 표현할 수 있다.", - "[4과10-03]여러 가지 동물의 한살이 과정을 조사하여 동물에 따라 한살이의 유형이 다양함을 설명할 수있다.", - "[4과11-01]화산 활동으로 나오는 여러 가지 물질을 설명할 수 있다.", - "[4과11-02]화성암의 생성 과정을 이해하고 화강암과 현무암의 특징을 비교할 수 있다.", - "[4과11-03]화산 활동이 우리 생활에 미치는 영향을 발표할 수 있다.", - "[4과11-04]지진 발생의 원인을 이해하고 지진이 났을 때 안전하게 대처하는 방법을 토의할 수 있다.", - "[4과12-01]일상생활에서 혼합물의 예를 찾고 혼합물 분리의 필요성을 설명할 수 있다.", - "[4과12-02]알갱이의 크기와 자석에 붙는 성질을 이용하여 고체 혼합물을 분리할 수 있다.", - "[4과12-03]거름 장치를 꾸며 물에 녹는 물질과 녹지 않는 물질의 혼합물을 분리할 수 있다.", - "[4과12-04]물을 증발시켜 물에 녹아 있는 고체 물질을 분리할 수 있다.", - "[4과13-01]씨가 싹트거나 자라는데 필요한 조건을 설명할 수 있다.", - "[4과13-02]식물의 한살이 관찰계획을 세워 식물을 기르면서 한살이를 관찰할 수 있다.", - "[4과13-03]여러 가지식물의 한살이 과정을 조사하여 식물에 따라 한살이의 유형이 다양함을 설명할 수 있다.", - "[4과14-01]물이 수증기나 얼음으로 변할 수 있음을 알고, 물이 얼 때와 얼음이 녹을 때의 부피와 무게 변화를 관찰할 수 있다.", - "[4과14-02]물이 증발할 때와 끓을 때의 변화를 관찰하여 차이점을 알고, 이와 관련된 예를 우리 주변에서 찾을 수 있다.", - "[4과14-03]수증기가 응결하는 현상을 관찰하고, 이와 관련된 예를 우리 주변에서 찾을 수 있다.", - "[4과15-01]여러 가지 물체의 그림자를 관찰하여 그림자가 생기는 원리를 설명할 수 있다.", - "[4과15-02]전등과 물체 사이의 거리에 따른 그림자의 크기 변화를 관찰하여 서술할 수 있다.", - "[4과15-03]물체와 평면거울에 비친 모습을 비교하여 거울의 성질을 설명할 수 있다.", - "[4과15-04]일상생활에서 거울을 이용하는 예를 조사하고 거울의 성질과 관련지어 그 기능을 설명할 수 있다.", - "[4과16-01]지구와 관련된 자료를 조사하여 모양과 표면의 모습을 설명할 수 있다.", - "[4과16-02]육지와 비교하여 바다의 특징을 설명할 수 있다.", - "[4과16-03]지구 주위를 둘러싸고 있는 공기의 역할을 예를 들어 설명할 수 있다.", - "[4과16-04]달을 조사하여 모양, 표면, 환경을 이해하고 지구와 달을 비교할 수 있다.", - "[4과17-01]물이 이동하거나 상태가 변하면서 순환하는 과정을 생명체, 지표면, 공기 사이에서 일어나는 다양한 현상과 관련지어 설명할 수 있다.", - "[4과17-02]물의 중요성을 알고 물 부족 현상을 해결하기 위해 창의적 방법을 활용한 사례를 조사할 수 있다.", - "[4과01-01]서로 다른 물질로 만들어진 물체들을 비교하여 물체의 기능과 물질의 성질을 관련지을 수 있다.", - "[4과01-02]크기와 모양은 같지만 서로 다른 물질로 이루어진 물체들을 관찰하여 물질의 여러 가지 성질을 비교할 수 있다.", - "[4과01-03]서로 다른 물질을 섞었을 때 물질을 섞기 전과 후의 변화를 관찰하여 어떤 성질이 달라졌는지 설명할 수 있다.", - "[4과01-04]여러 가지 물질을 선택하여 다양한 물체를 설계하고 장단점을 토의할 수 있다.", - "[4과02-01]자석 사이에 밀거나 당기는 힘이 작용하는 현상을 관찰하고 두 
종류의 극을 구별할 수 있다.", - "[4과02-02]나침반의 바늘이 일정한 방향을 가리키는 성질이 있음을 관찰을 통해 설명할 수 있다.", - "[4과02-03]일상생활에서 자석이 사용되는 예를 조사하고, 자석의 성질과 관련지어 그 기능을 설명할 수 있다.", - "[4과03-01]여러 가지 동물을 관찰하여 특징에 따라 동물을 분류할 수 있다.", - "[4과03-02]동물의 생김새나 생활 방식이 환경과 관련되어 있음을 설명할 수 있다.", - "[4과03-03]동물의 특징을 모방하여 생활 속에서 활용하고 있는 사례를 발표할 수 있다.", - "[4과04-01]여러 장소의 흙을 관찰하여 비교할 수 있다.", - "[4과04-02]흙의 생성 과정을 모형을 통해 설명할 수 있다.", - "[4과04-03]강과 바닷가 주변 지형의 특징을 흐르는 물과 바닷물의 작용과 관련지을 수 있다.", - "[4과05-01]여러 가지 식물을 관찰하여 특징에 따라 식물을 분류할 수 있다.", - "[4과05-02]식물의 생김새나 생활 방식이 환경과 관련되어 있음을 설명할 수 있다.", - "[4과05-03]식물의 특징을 모방하여 생활 속에서 활용하고 있는 사례를 발표할 수 있다.", - "[4과06-01]여러 가지 지층을 관찰하고 지층의 형성 과정을 모형을 통해 설명할 수 있다.", - "[4과06-02]퇴적암을 알갱이의 크기에 따라 구분하고 퇴적암이 만들어지는 과정을 모형을 통해 설명할 수 있다.", - "[4과06-03]화석의 생성 과정을 이해하고 화석을 관찰하여 지구의 과거 생물과 환경을 추리할 수 있다.", - "[4과07-01]고체와 액체의 성질을 용기에 따른 모양과 부피 변화를 관찰하여 설명할 수 있다.", - "[4과07-02]기체가 공간을 차지하고 있음을 알아보는 실험을 할 수 있다.", - "[4과07-03]기체가 무게가 있음을 알아보는 실험을 할 수 있다.", - "[4과07-04]우리 주변의 물질을 고체, 액체,기체로 분류할 수 있다.", - "[4과08-01]여러 가지 물체에서 소리가 나는 현상을 관찰하여 소리가 나는 물체는 떨림이 있음을 설명할 수 있다.", - "[4과08-02]소리의 세기와 높낮이를 비교할 수 있다.", - "[4과08-03]여러 가지 물체를 통하여 소리가 전달되거나 반사됨을 관찰하고 소음을 줄이는 방법을 토의할 수 있다.", - "[4과09-01]일상생활에서 물체의 무게를 측정하는 예를 조사하고, 무게 측정이 필요한 이유를 설명할 수 있다.", - "[4과09-02]수평 잡기 활동을 통해 물체의 무게를 비교할 수 있다.", - "[4과09-03]용수철에 매단 물체의 무게와 용수철의 늘어난 길이의 관계를 조사하고 물체의 무게를 재는 원리를 설명할 수 있다.", - "[4과09-04]간단한 저울을 설계하여 제작하고 그 결과물을 평가할 수 있다.", - "[4과10-01]동물의 암・수에 따른 특징을 동물별로 비교해보고, 번식 과정에서 암・수의 역할이 다양함을 설명할 수 있다.", - "[4과10-02]동물의 한살이 관찰 계획을 세우고, 동물을 기르면서 한살이를 관찰하며, 관찰한 내용을 글과 그림으로 표현할 수 있다.", - "[4과10-03]여러 가지 동물의 한살이 과정을 조사하여 동물에 따라 한살이의 유형이 다양함을 설명할 수있다.", - "[4과11-01]화산 활동으로 나오는 여러 가지 물질을 설명할 수 있다.", - "[4과11-02]화성암의 생성 과정을 이해하고 화강암과 현무암의 특징을 비교할 수 있다.", - "[4과11-03]화산 활동이 우리 생활에 미치는 영향을 발표할 수 있다.", - "[4과11-04]지진 발생의 원인을 이해하고 지진이 났을 때 안전하게 대처하는 방법을 토의할 수 있다.", - "[4과12-01]일상생활에서 혼합물의 예를 찾고 혼합물 분리의 필요성을 설명할 수 있다.", - "[4과12-02]알갱이의 크기와 자석에 붙는 성질을 이용하여 고체 혼합물을 분리할 수 있다.", - "[4과12-03]거름 장치를 꾸며 물에 녹는 물질과 녹지 않는 물질의 혼합물을 분리할 수 있다.", - "[4과12-04]물을 증발시켜 물에 녹아 있는 고체 물질을 분리할 수 있다.", - "[4과13-01]씨가 싹트거나 자라는데 필요한 조건을 설명할 수 있다.", - "[4과13-02]식물의 한살이 관찰계획을 세워 식물을 기르면서 한살이를 관찰할 수 있다.", - "[4과13-03]여러 가지식물의 한살이 과정을 조사하여 식물에 따라 한살이의 유형이 다양함을 설명할 수 있다.", - "[4과14-01]물이 수증기나 얼음으로 변할 수 있음을 알고, 물이 얼 때와 얼음이 녹을 때의 부피와 무게 변화를 관찰할 수 있다.", - "[4과14-02]물이 증발할 때와 끓을 때의 변화를 관찰하여 차이점을 알고, 이와 관련된 예를 우리 주변에서 찾을 수 있다.", - "[4과14-03]수증기가 응결하는 현상을 관찰하고, 이와 관련된 예를 우리 주변에서 찾을 수 있다.", - "[4과15-01]여러 가지 물체의 그림자를 관찰하여 그림자가 생기는 원리를 설명할 수 있다.", - "[4과15-02]전등과 물체 사이의 거리에 따른 그림자의 크기 변화를 관찰하여 서술할 수 있다.", - "[4과15-03]물체와 평면거울에 비친 모습을 비교하여 거울의 성질을 설명할 수 있다.", - "[4과15-04]일상생활에서 거울을 이용하는 예를 조사하고 거울의 성질과 관련지어 그 기능을 설명할 수 있다.", - "[4과16-01]지구와 관련된 자료를 조사하여 모양과 표면의 모습을 설명할 수 있다.", - "[4과16-02]육지와 비교하여 바다의 특징을 설명할 수 있다.", - "[4과16-03]지구 주위를 둘러싸고 있는 공기의 역할을 예를 들어 설명할 수 있다.", - "[4과16-04]달을 조사하여 모양, 표면, 환경을 이해하고 지구와 달을 비교할 수 있다.", - "[4과17-01]물이 이동하거나 상태가 변하면서 순환하는 과정을 생명체, 지표면, 공기 사이에서 일어나는 다양한 현상과 관련지어 설명할 수 있다.", - "[4과17-02]물의 중요성을 알고 물 부족 현상을 해결하기 위해 창의적 방법을 활용한 사례를 조사할 수 있다.", - "[4과01-01]서로 다른 물질로 만들어진 물체들을 비교하여 물체의 기능과 물질의 성질을 관련지을 수 있다.", - "[4과01-02]크기와 모양은 같지만 서로 다른 물질로 이루어진 물체들을 관찰하여 물질의 여러 가지 성질을 비교할 수 있다.", - "[4과01-03]서로 다른 물질을 섞었을 때 물질을 섞기 전과 후의 변화를 관찰하여 어떤 성질이 달라졌는지 설명할 수 있다.", - "[4과01-04]여러 가지 물질을 선택하여 다양한 물체를 설계하고 장단점을 토의할 수 있다.", - "[4과02-01]자석 사이에 밀거나 당기는 힘이 작용하는 현상을 관찰하고 두 종류의 극을 구별할 수 있다.", - "[4과02-02]나침반의 바늘이 일정한 방향을 가리키는 성질이 있음을 관찰을 통해 설명할 수 있다.", - "[4과02-03]일상생활에서 자석이 사용되는 예를 조사하고, 자석의 성질과 관련지어 그 기능을 설명할 수 있다.", - "[4과03-01]여러 가지 동물을 관찰하여 특징에 따라 동물을 분류할 수 있다.", - 
"[4과03-02]동물의 생김새나 생활 방식이 환경과 관련되어 있음을 설명할 수 있다.", - "[4과03-03]동물의 특징을 모방하여 생활 속에서 활용하고 있는 사례를 발표할 수 있다.", - "[4과04-01]여러 장소의 흙을 관찰하여 비교할 수 있다.", - "[4과04-02]흙의 생성 과정을 모형을 통해 설명할 수 있다.", - "[4과04-03]강과 바닷가 주변 지형의 특징을 흐르는 물과 바닷물의 작용과 관련지을 수 있다.", - "[4과05-01]여러 가지 식물을 관찰하여 특징에 따라 식물을 분류할 수 있다.", - "[4과05-02]식물의 생김새나 생활 방식이 환경과 관련되어 있음을 설명할 수 있다.", - "[4과05-03]식물의 특징을 모방하여 생활 속에서 활용하고 있는 사례를 발표할 수 있다.", - "[4과06-01]여러 가지 지층을 관찰하고 지층의 형성 과정을 모형을 통해 설명할 수 있다.", - "[4과06-02]퇴적암을 알갱이의 크기에 따라 구분하고 퇴적암이 만들어지는 과정을 모형을 통해 설명할 수 있다.", - "[4과06-03]화석의 생성 과정을 이해하고 화석을 관찰하여 지구의 과거 생물과 환경을 추리할 수 있다.", - "[4과07-01]고체와 액체의 성질을 용기에 따른 모양과 부피 변화를 관찰하여 설명할 수 있다.", - "[4과07-02]기체가 공간을 차지하고 있음을 알아보는 실험을 할 수 있다.", - "[4과07-03]기체가 무게가 있음을 알아보는 실험을 할 수 있다.", - "[4과07-04]우리 주변의 물질을 고체, 액체,기체로 분류할 수 있다.", - "[4과08-01]여러 가지 물체에서 소리가 나는 현상을 관찰하여 소리가 나는 물체는 떨림이 있음을 설명할 수 있다.", - "[4과08-02]소리의 세기와 높낮이를 비교할 수 있다.", - "[4과08-03]여러 가지 물체를 통하여 소리가 전달되거나 반사됨을 관찰하고 소음을 줄이는 방법을 토의할 수 있다.", - "[4과09-01]일상생활에서 물체의 무게를 측정하는 예를 조사하고, 무게 측정이 필요한 이유를 설명할 수 있다.", - "[4과09-02]수평 잡기 활동을 통해 물체의 무게를 비교할 수 있다.", - "[4과09-03]용수철에 매단 물체의 무게와 용수철의 늘어난 길이의 관계를 조사하고 물체의 무게를 재는 원리를 설명할 수 있다.", - "[4과09-04]간단한 저울을 설계하여 제작하고 그 결과물을 평가할 수 있다.", - "[4과10-01]동물의 암・수에 따른 특징을 동물별로 비교해보고, 번식 과정에서 암・수의 역할이 다양함을 설명할 수 있다.", - "[4과10-02]동물의 한살이 관찰 계획을 세우고, 동물을 기르면서 한살이를 관찰하며, 관찰한 내용을 글과 그림으로 표현할 수 있다.", - "[4과10-03]여러 가지 동물의 한살이 과정을 조사하여 동물에 따라 한살이의 유형이 다양함을 설명할 수있다.", - "[4과11-01]화산 활동으로 나오는 여러 가지 물질을 설명할 수 있다.", - "[4과11-02]화성암의 생성 과정을 이해하고 화강암과 현무암의 특징을 비교할 수 있다.", - "[4과11-03]화산 활동이 우리 생활에 미치는 영향을 발표할 수 있다.", - "[4과11-04]지진 발생의 원인을 이해하고 지진이 났을 때 안전하게 대처하는 방법을 토의할 수 있다.", - "[4과12-01]일상생활에서 혼합물의 예를 찾고 혼합물 분리의 필요성을 설명할 수 있다.", - "[4과12-02]알갱이의 크기와 자석에 붙는 성질을 이용하여 고체 혼합물을 분리할 수 있다.", - "[4과12-03]거름 장치를 꾸며 물에 녹는 물질과 녹지 않는 물질의 혼합물을 분리할 수 있다.", - "[4과12-04]물을 증발시켜 물에 녹아 있는 고체 물질을 분리할 수 있다.", - "[4과13-01]씨가 싹트거나 자라는데 필요한 조건을 설명할 수 있다.", - "[4과13-02]식물의 한살이 관찰계획을 세워 식물을 기르면서 한살이를 관찰할 수 있다.", - "[4과13-03]여러 가지식물의 한살이 과정을 조사하여 식물에 따라 한살이의 유형이 다양함을 설명할 수 있다.", - "[4과14-01]물이 수증기나 얼음으로 변할 수 있음을 알고, 물이 얼 때와 얼음이 녹을 때의 부피와 무게 변화를 관찰할 수 있다.", - "[4과14-02]물이 증발할 때와 끓을 때의 변화를 관찰하여 차이점을 알고, 이와 관련된 예를 우리 주변에서 찾을 수 있다.", - "[4과14-03]수증기가 응결하는 현상을 관찰하고, 이와 관련된 예를 우리 주변에서 찾을 수 있다.", - "[4과15-01]여러 가지 물체의 그림자를 관찰하여 그림자가 생기는 원리를 설명할 수 있다.", - "[4과15-02]전등과 물체 사이의 거리에 따른 그림자의 크기 변화를 관찰하여 서술할 수 있다.", - "[4과15-03]물체와 평면거울에 비친 모습을 비교하여 거울의 성질을 설명할 수 있다.", - "[4과15-04]일상생활에서 거울을 이용하는 예를 조사하고 거울의 성질과 관련지어 그 기능을 설명할 수 있다.", - "[4과16-01]지구와 관련된 자료를 조사하여 모양과 표면의 모습을 설명할 수 있다.", - "[4과16-02]육지와 비교하여 바다의 특징을 설명할 수 있다.", - "[4과16-03]지구 주위를 둘러싸고 있는 공기의 역할을 예를 들어 설명할 수 있다.", - "[4과16-04]달을 조사하여 모양, 표면, 환경을 이해하고 지구와 달을 비교할 수 있다.", - "[4과17-01]물이 이동하거나 상태가 변하면서 순환하는 과정을 생명체, 지표면, 공기 사이에서 일어나는 다양한 현상과 관련지어 설명할 수 있다.", - "[4과17-02]물의 중요성을 알고 물 부족 현상을 해결하기 위해 창의적 방법을 활용한 사례를 조사할 수 있다." 
- ], - "영어": [ - "[4영01-01]알파벳과 낱말의 소리를 듣고 식별할 수 있다.", - "[4영01-02]낱말, 어구, 문장을 듣고 강세, 리듬, 억양을 식별할 수 있다.", - "[4영01-03]기초적인 낱말, 어구, 문장을 듣고 의미를 이해할 수 있다.", - "[4영01-04]쉽고 친숙한 표현을 듣고 의미를 이해할 수 있다.", - "[4영01-05]한두 문장의 쉽고 간단한 지시나 설명을 듣고 이해할 수 있다.", - "[4영01-06]주변의 사물과 사람에 관한 쉽고 간단한 말이나 대화를 듣고 세부정보를 파악할 수 있다.", - "[4영01-07]일상생활 속의 친숙한 주제에 관한 쉽고 간단한 말이나 대화를 듣고 세부정보를 파악할 수 있다.", - "[4영02-01]알파벳과 낱말의 소리를 듣고 따라 말할 수 있다.", - "[4영02-02]영어의 강세, 리듬, 억양에 맞게 따라 말할 수 있다.", - "[4영02-03]그림, 실물, 동작에 관해 쉽고 간단한 낱말이나 어구, 문장으로 표현할 수 있다.", - "[4영02-04]한두 문장으로 자기소개를 할 수 있다.", - "[4영02-05]한두 문장으로 지시하거나 설명할 수 있다.", - "[4영02-06]쉽고 간단한 인사말을 주고받을 수 있다.", - "[4영02-07]일상생활 속의 친숙한 주제에 관해 쉽고 간단한 표현으로 묻거나 답할 수 있다.", - "[4영03-01]알파벳 대소문자를 식별하여 읽을 수 있다.", - "[4영03-02]소리와 철자의 관계를 이해하여 낱말을 읽을 수 있다.", - "[4영03-03]쉽고 간단한 낱말이나 어구, 문장을 따라 읽을 수 있다.", - "[4영03-04]쉽고 간단한 낱말이나 어구를 읽고 의미를 이해할 수 있다.", - "[4영03-05]쉽고 간단한 문장을 읽고 의미를 이해할 수 있다.", - "[4영04-01]알파벳 대소문자를 구별하여 쓸 수 있다.", - "[4영04-02]구두로 익힌 낱말이나 어구를 따라 쓰거나 보고 쓸 수 있다.", - "[4영04-03]실물이나 그림을 보고 쉽고 간단한 낱말이나 어구를 쓸 수 있다.", - "[4영01-01]알파벳과 낱말의 소리를 듣고 식별할 수 있다.", - "[4영01-02]낱말, 어구, 문장을 듣고 강세, 리듬, 억양을 식별할 수 있다.", - "[4영01-03]기초적인 낱말, 어구, 문장을 듣고 의미를 이해할 수 있다.", - "[4영01-04]쉽고 친숙한 표현을 듣고 의미를 이해할 수 있다.", - "[4영01-05]한두 문장의 쉽고 간단한 지시나 설명을 듣고 이해할 수 있다.", - "[4영01-06]주변의 사물과 사람에 관한 쉽고 간단한 말이나 대화를 듣고 세부정보를 파악할 수 있다.", - "[4영01-07]일상생활 속의 친숙한 주제에 관한 쉽고 간단한 말이나 대화를 듣고 세부정보를 파악할 수 있다.", - "[4영02-01]알파벳과 낱말의 소리를 듣고 따라 말할 수 있다.", - "[4영02-02]영어의 강세, 리듬, 억양에 맞게 따라 말할 수 있다.", - "[4영02-03]그림, 실물, 동작에 관해 쉽고 간단한 낱말이나 어구, 문장으로 표현할 수 있다.", - "[4영02-04]한두 문장으로 자기소개를 할 수 있다.", - "[4영02-05]한두 문장으로 지시하거나 설명할 수 있다.", - "[4영02-06]쉽고 간단한 인사말을 주고받을 수 있다.", - "[4영02-07]일상생활 속의 친숙한 주제에 관해 쉽고 간단한 표현으로 묻거나 답할 수 있다.", - "[4영03-01]알파벳 대소문자를 식별하여 읽을 수 있다.", - "[4영03-02]소리와 철자의 관계를 이해하여 낱말을 읽을 수 있다.", - "[4영03-03]쉽고 간단한 낱말이나 어구, 문장을 따라 읽을 수 있다.", - "[4영03-04]쉽고 간단한 낱말이나 어구를 읽고 의미를 이해할 수 있다.", - "[4영03-05]쉽고 간단한 문장을 읽고 의미를 이해할 수 있다.", - "[4영04-01]알파벳 대소문자를 구별하여 쓸 수 있다.", - "[4영04-02]구두로 익힌 낱말이나 어구를 따라 쓰거나 보고 쓸 수 있다.", - "[4영04-03]실물이나 그림을 보고 쉽고 간단한 낱말이나 어구를 쓸 수 있다.", - "[4영01-01]알파벳과 낱말의 소리를 듣고 식별할 수 있다.", - "[4영01-02]낱말, 어구, 문장을 듣고 강세, 리듬, 억양을 식별할 수 있다.", - "[4영01-03]기초적인 낱말, 어구, 문장을 듣고 의미를 이해할 수 있다.", - "[4영01-04]쉽고 친숙한 표현을 듣고 의미를 이해할 수 있다.", - "[4영01-05]한두 문장의 쉽고 간단한 지시나 설명을 듣고 이해할 수 있다.", - "[4영01-06]주변의 사물과 사람에 관한 쉽고 간단한 말이나 대화를 듣고 세부정보를 파악할 수 있다.", - "[4영01-07]일상생활 속의 친숙한 주제에 관한 쉽고 간단한 말이나 대화를 듣고 세부정보를 파악할 수 있다.", - "[4영02-01]알파벳과 낱말의 소리를 듣고 따라 말할 수 있다.", - "[4영02-02]영어의 강세, 리듬, 억양에 맞게 따라 말할 수 있다.", - "[4영02-03]그림, 실물, 동작에 관해 쉽고 간단한 낱말이나 어구, 문장으로 표현할 수 있다.", - "[4영02-04]한두 문장으로 자기소개를 할 수 있다.", - "[4영02-05]한두 문장으로 지시하거나 설명할 수 있다.", - "[4영02-06]쉽고 간단한 인사말을 주고받을 수 있다.", - "[4영02-07]일상생활 속의 친숙한 주제에 관해 쉽고 간단한 표현으로 묻거나 답할 수 있다.", - "[4영03-01]알파벳 대소문자를 식별하여 읽을 수 있다.", - "[4영03-02]소리와 철자의 관계를 이해하여 낱말을 읽을 수 있다.", - "[4영03-03]쉽고 간단한 낱말이나 어구, 문장을 따라 읽을 수 있다.", - "[4영03-04]쉽고 간단한 낱말이나 어구를 읽고 의미를 이해할 수 있다.", - "[4영03-05]쉽고 간단한 문장을 읽고 의미를 이해할 수 있다.", - "[4영04-01]알파벳 대소문자를 구별하여 쓸 수 있다.", - "[4영04-02]구두로 익힌 낱말이나 어구를 따라 쓰거나 보고 쓸 수 있다.", - "[4영04-03]실물이나 그림을 보고 쉽고 간단한 낱말이나 어구를 쓸 수 있다.", - "[4영01-01]알파벳과 낱말의 소리를 듣고 식별할 수 있다.", - "[4영01-02]낱말, 어구, 문장을 듣고 강세, 리듬, 억양을 식별할 수 있다.", - "[4영01-03]기초적인 낱말, 어구, 문장을 듣고 의미를 이해할 수 있다.", - "[4영01-04]쉽고 친숙한 표현을 듣고 의미를 이해할 수 있다.", - "[4영01-05]한두 문장의 쉽고 간단한 지시나 설명을 듣고 이해할 수 있다.", - "[4영01-06]주변의 사물과 사람에 관한 쉽고 간단한 말이나 대화를 듣고 세부정보를 파악할 수 있다.", - "[4영01-07]일상생활 속의 친숙한 주제에 관한 쉽고 간단한 말이나 대화를 듣고 세부정보를 파악할 수 있다.", - "[4영02-01]알파벳과 낱말의 소리를 듣고 따라 말할 수 있다.", - "[4영02-02]영어의 강세, 리듬, 억양에 맞게 따라 말할 수 있다.", - 
"[4영02-03]그림, 실물, 동작에 관해 쉽고 간단한 낱말이나 어구, 문장으로 표현할 수 있다.", - "[4영02-04]한두 문장으로 자기소개를 할 수 있다.", - "[4영02-05]한두 문장으로 지시하거나 설명할 수 있다.", - "[4영02-06]쉽고 간단한 인사말을 주고받을 수 있다.", - "[4영02-07]일상생활 속의 친숙한 주제에 관해 쉽고 간단한 표현으로 묻거나 답할 수 있다.", - "[4영03-01]알파벳 대소문자를 식별하여 읽을 수 있다.", - "[4영03-02]소리와 철자의 관계를 이해하여 낱말을 읽을 수 있다.", - "[4영03-03]쉽고 간단한 낱말이나 어구, 문장을 따라 읽을 수 있다.", - "[4영03-04]쉽고 간단한 낱말이나 어구를 읽고 의미를 이해할 수 있다.", - "[4영03-05]쉽고 간단한 문장을 읽고 의미를 이해할 수 있다.", - "[4영04-01]알파벳 대소문자를 구별하여 쓸 수 있다.", - "[4영04-02]구두로 익힌 낱말이나 어구를 따라 쓰거나 보고 쓸 수 있다.", - "[4영04-03]실물이나 그림을 보고 쉽고 간단한 낱말이나 어구를 쓸 수 있다." - ], - "음악": [ - "[4음01-01]악곡의 특징을 이해하며 노래 부르거나 악기로 연주한다.", - "[4음01-02]악곡에 어울리는 신체표현을 한다.", - "[4음01-03]제재곡의 노랫말을 바꾸거나 노랫말에 맞는 말붙임새로 만든다.", - "[4음01-04]제재곡의 리듬꼴이나 장단꼴을 바꾸어 표현한다.", - "[4음01-05]주변의 소리를 탐색하여 다양한 방법으로 표현한다.", - "[4음01-06]바른 자세로 노래 부르거나 바른 자세와 주법으로 악기를 연주한다.", - "[4음02-01]3∼4 학년 수준의 음악 요소와 개념을 구별하여 표현한다.", - "[4음02-02]상황이나 이야기 등을 표현한 음악을 듣고 느낌을 발표한다.", - "[4음03-01]음악을 활용하여 가정, 학교, 사회 등의 행사에 참여하고 느낌을 발표한다.", - "[4음03-02]음악을 놀이에 활용해 보고 느낌을 발표한다.", - "[4음03-03]생활 속에서 활용되고 있는 국악을 찾아 발표한다.", - "[4음01-01]악곡의 특징을 이해하며 노래 부르거나 악기로 연주한다.", - "[4음01-02]악곡에 어울리는 신체표현을 한다.", - "[4음01-03]제재곡의 노랫말을 바꾸거나 노랫말에 맞는 말붙임새로 만든다.", - "[4음01-04]제재곡의 리듬꼴이나 장단꼴을 바꾸어 표현한다.", - "[4음01-05]주변의 소리를 탐색하여 다양한 방법으로 표현한다.", - "[4음01-06]바른 자세로 노래 부르거나 바른 자세와 주법으로 악기를 연주한다.", - "[4음02-01]3∼4 학년 수준의 음악 요소와 개념을 구별하여 표현한다.", - "[4음02-02]상황이나 이야기 등을 표현한 음악을 듣고 느낌을 발표한다.", - "[4음03-01]음악을 활용하여 가정, 학교, 사회 등의 행사에 참여하고 느낌을 발표한다.", - "[4음03-02]음악을 놀이에 활용해 보고 느낌을 발표한다.", - "[4음03-03]생활 속에서 활용되고 있는 국악을 찾아 발표한다.", - "[4음01-01]악곡의 특징을 이해하며 노래 부르거나 악기로 연주한다.", - "[4음01-02]악곡에 어울리는 신체표현을 한다.", - "[4음01-03]제재곡의 노랫말을 바꾸거나 노랫말에 맞는 말붙임새로 만든다.", - "[4음01-04]제재곡의 리듬꼴이나 장단꼴을 바꾸어 표현한다.", - "[4음01-05]주변의 소리를 탐색하여 다양한 방법으로 표현한다.", - "[4음01-06]바른 자세로 노래 부르거나 바른 자세와 주법으로 악기를 연주한다.", - "[4음02-01]3∼4 학년 수준의 음악 요소와 개념을 구별하여 표현한다.", - "[4음02-02]상황이나 이야기 등을 표현한 음악을 듣고 느낌을 발표한다.", - "[4음03-01]음악을 활용하여 가정, 학교, 사회 등의 행사에 참여하고 느낌을 발표한다.", - "[4음03-02]음악을 놀이에 활용해 보고 느낌을 발표한다.", - "[4음03-03]생활 속에서 활용되고 있는 국악을 찾아 발표한다.", - "[4음01-01]악곡의 특징을 이해하며 노래 부르거나 악기로 연주한다.", - "[4음01-02]악곡에 어울리는 신체표현을 한다.", - "[4음01-03]제재곡의 노랫말을 바꾸거나 노랫말에 맞는 말붙임새로 만든다.", - "[4음01-04]제재곡의 리듬꼴이나 장단꼴을 바꾸어 표현한다.", - "[4음01-05]주변의 소리를 탐색하여 다양한 방법으로 표현한다.", - "[4음01-06]바른 자세로 노래 부르거나 바른 자세와 주법으로 악기를 연주한다.", - "[4음02-01]3∼4 학년 수준의 음악 요소와 개념을 구별하여 표현한다.", - "[4음02-02]상황이나 이야기 등을 표현한 음악을 듣고 느낌을 발표한다.", - "[4음03-01]음악을 활용하여 가정, 학교, 사회 등의 행사에 참여하고 느낌을 발표한다.", - "[4음03-02]음악을 놀이에 활용해 보고 느낌을 발표한다.", - "[4음03-03]생활 속에서 활용되고 있는 국악을 찾아 발표한다." 
- ], - "미술": [ - "[4미01-01]자연물과 인공물을 탐색하는 데 다양한 감각을 활용할 수 있다.", - "[4미01-02]주변 대상을 탐색하여 자신의 느낌과 생각을 다양한 방법으로 나타낼 수 있다.", - "[4미01-03]생활 속에서 다양하게 활용되고 있는 미술을 발견할 수 있다.", - "[4미01-04]미술을 자신의 생활과 관련지을 수 있다.", - "[4미02-01]미술의 다양한 표현 주제에 관심을 가질 수 있다.", - "[4미02-02]주제를 자유롭게 떠올릴 수 있다.", - "[4미02-03]연상, 상상하거나 대상을 관찰하여 주제를 탐색할 수 있다.", - "[4미02-04]표현 방법과 과정에 관심을 가지고 계획할 수 있다.", - "[4미02-05]조형 요소(점, 선, 면, 형・형태, 색, 질감, 양감 등)의 특징을 탐색하고, 표현 의도에 적합하게 적용할 수 있다.", - "[4미02-06]기본적인 표현 재료와 용구의 사용법을 익혀 안전하게 사용할 수 있다.", - "[4미03-01]다양한 분야의 미술 작품과 미술가들에 관심을 가질 수 있다.", - "[4미03-02]관심 있는 미술 작품과 미술가에 대하여 설명할 수 있다.", - "[4미03-03]미술 작품에 대한 자신의 느낌과 생각을 발표하고, 그 이유를 설명할 수 있다.", - "[4미03-04]미술 작품을 감상하는 올바른 태도를 알고 작품을 소중히 다룰 수 있다.", - "[4미01-01]자연물과 인공물을 탐색하는 데 다양한 감각을 활용할 수 있다.", - "[4미01-02]주변 대상을 탐색하여 자신의 느낌과 생각을 다양한 방법으로 나타낼 수 있다.", - "[4미01-03]생활 속에서 다양하게 활용되고 있는 미술을 발견할 수 있다.", - "[4미01-04]미술을 자신의 생활과 관련지을 수 있다.", - "[4미02-01]미술의 다양한 표현 주제에 관심을 가질 수 있다.", - "[4미02-02]주제를 자유롭게 떠올릴 수 있다.", - "[4미02-03]연상, 상상하거나 대상을 관찰하여 주제를 탐색할 수 있다.", - "[4미02-04]표현 방법과 과정에 관심을 가지고 계획할 수 있다.", - "[4미02-05]조형 요소(점, 선, 면, 형・형태, 색, 질감, 양감 등)의 특징을 탐색하고, 표현 의도에 적합하게 적용할 수 있다.", - "[4미02-06]기본적인 표현 재료와 용구의 사용법을 익혀 안전하게 사용할 수 있다.", - "[4미03-01]다양한 분야의 미술 작품과 미술가들에 관심을 가질 수 있다.", - "[4미03-02]관심 있는 미술 작품과 미술가에 대하여 설명할 수 있다.", - "[4미03-03]미술 작품에 대한 자신의 느낌과 생각을 발표하고, 그 이유를 설명할 수 있다.", - "[4미03-04]미술 작품을 감상하는 올바른 태도를 알고 작품을 소중히 다룰 수 있다.", - "[4미01-01]자연물과 인공물을 탐색하는 데 다양한 감각을 활용할 수 있다.", - "[4미01-02]주변 대상을 탐색하여 자신의 느낌과 생각을 다양한 방법으로 나타낼 수 있다.", - "[4미01-03]생활 속에서 다양하게 활용되고 있는 미술을 발견할 수 있다.", - "[4미01-04]미술을 자신의 생활과 관련지을 수 있다.", - "[4미02-01]미술의 다양한 표현 주제에 관심을 가질 수 있다.", - "[4미02-02]주제를 자유롭게 떠올릴 수 있다.", - "[4미02-03]연상, 상상하거나 대상을 관찰하여 주제를 탐색할 수 있다.", - "[4미02-04]표현 방법과 과정에 관심을 가지고 계획할 수 있다.", - "[4미02-05]조형 요소(점, 선, 면, 형・형태, 색, 질감, 양감 등)의 특징을 탐색하고, 표현 의도에 적합하게 적용할 수 있다.", - "[4미02-06]기본적인 표현 재료와 용구의 사용법을 익혀 안전하게 사용할 수 있다.", - "[4미03-01]다양한 분야의 미술 작품과 미술가들에 관심을 가질 수 있다.", - "[4미03-02]관심 있는 미술 작품과 미술가에 대하여 설명할 수 있다.", - "[4미03-03]미술 작품에 대한 자신의 느낌과 생각을 발표하고, 그 이유를 설명할 수 있다.", - "[4미03-04]미술 작품을 감상하는 올바른 태도를 알고 작품을 소중히 다룰 수 있다.", - "[4미01-01]자연물과 인공물을 탐색하는 데 다양한 감각을 활용할 수 있다.", - "[4미01-02]주변 대상을 탐색하여 자신의 느낌과 생각을 다양한 방법으로 나타낼 수 있다.", - "[4미01-03]생활 속에서 다양하게 활용되고 있는 미술을 발견할 수 있다.", - "[4미01-04]미술을 자신의 생활과 관련지을 수 있다.", - "[4미02-01]미술의 다양한 표현 주제에 관심을 가질 수 있다.", - "[4미02-02]주제를 자유롭게 떠올릴 수 있다.", - "[4미02-03]연상, 상상하거나 대상을 관찰하여 주제를 탐색할 수 있다.", - "[4미02-04]표현 방법과 과정에 관심을 가지고 계획할 수 있다.", - "[4미02-05]조형 요소(점, 선, 면, 형・형태, 색, 질감, 양감 등)의 특징을 탐색하고, 표현 의도에 적합하게 적용할 수 있다.", - "[4미02-06]기본적인 표현 재료와 용구의 사용법을 익혀 안전하게 사용할 수 있다.", - "[4미03-01]다양한 분야의 미술 작품과 미술가들에 관심을 가질 수 있다.", - "[4미03-02]관심 있는 미술 작품과 미술가에 대하여 설명할 수 있다.", - "[4미03-03]미술 작품에 대한 자신의 느낌과 생각을 발표하고, 그 이유를 설명할 수 있다.", - "[4미03-04]미술 작품을 감상하는 올바른 태도를 알고 작품을 소중히 다룰 수 있다." 
- ], - "체육": [ - "[4체01-01]건강한 생활 습관(몸의 바른 자세, 개인 위생, 비만 예방)을 알고 생활 속에서 규칙적으로 실천한다.", - "[4체01-02]다양한 운동 수행을 통해 체력의 향상과 건강한 생활을 경험한다.", - "[4체01-03]신체활동을 통해 다른 사람과 구별되는 자신의 신체적․정신적 특징 등을 인식한다.", - "[4체01-04]여가 활동 경험을 바탕으로 여가 활동의 의미와 건강과의 관계를 탐색한다.", - "[4체01-05]체격 및 체력의 특성을 이해하고 자신에게 맞는 체력 운동 계획을 세워 올바른 방법으로 수행한다.", - "[4체01-06]건강을 유지․증진하기 위한 체력 운동 및 여가 생활을 실천한다.", - "[4체02-01]속도를 향상시켜 자신의 기록을 단축하려는 속도 도전의 개념과 특성을 탐색한다.", - "[4체02-02]속도 도전과 관련된 여러 유형의 활동에 참여해 자신의 기록을 향상할 수 있는 기본자세와 동작을 찾아 도전 상황에 적용한다.", - "[4체02-03]자신의 속도 도전 결과를 시기별로 측정하여 그 과정의 장단점을 분석하고 기록을 향상할 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[4체02-04]수련을 통해 힘든 상황에서도 포기하지 않고 목표 달성을 위해 정진하며 속도에 도전한다.", - "[4체02-05]자신이 수행할 수 있는 최상의 자세와 동작을 수행하는 동작 도전의 개념과 특성을 탐색한다.", - "[4체02-06]동작 도전과 관련된 여러 유형의 활동에 참여해 수행의 성공에 도움이 되는 기본자세와 동작을 찾아 도전 상황에 적용한다.", - "[4체02-07]자신의 동작 도전 결과를 시기별로 측정하여 그 과정의 장단점을 분석하고 성공률을 높일 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[4체02-08]수련을 통해 동작 수행이 어렵거나 두려운 상황을 극복하며 동작에 도전한다.", - "[4체03-01]단순한 규칙으로 이루어진 게임을 종합적으로 체험함으로써 공통의 목표 달성을 위해 정해진 규칙을 지키며 상대와 실력을 겨루는 경쟁의 의미를 탐색한다.", - "[4체03-02]단순한 규칙으로 이루어진 게임을 수행하며 경쟁에 필요한 기본 기능을 탐색한다.", - "[4체03-03]게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색한다.", - "[4체03-04]경쟁의 과정에서 규칙의 필요성을 알고 합의된 규칙을 준수하며 게임을 수행한다.", - "[4체03-05]영역형 게임을 다양하게 체험함으로써 상대 영역으로 이동하여 정해진 지점으로 공을 보내 득점하는 영역형 경쟁의 개념과 특성을 탐색한다.", - "[4체03-06]영역형 게임의 기본 기능을 탐색하고 게임 상황에 맞게 적용한다.", - "[4체03-07]영역형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[4체03-08]공동의 목표 달성을 위해 협동의 필요성을 알고 팀원과 협력하며 게임을 수행한다.", - "[4체04-01]움직임 언어(이동 움직임, 비이동 움직임, 조작 움직임)와 표현 요소(신체, 공간, 노력, 관계)를 탐색한다.", - "[4체04-02]느낌이나 생각을 창의적인 움직임으로 표현하는 데 적합한 기본 동작을 다양한 표현 상황에 적용한다.", - "[4체04-03]개인 또는 모둠별로 움직임 언어나 표현 요소를 활용하여 구성한 작품을 발표하고 이를 감상한다.", - "[4체04-04]움직임 표현 활동을 수행하며 움직임 표현에 따른 자신의 신체 움직임과 신체의 변화 등을 인식한다.", - "[4체04-05]신체활동(체조, 줄넘기 등)에 나타나는 리듬의 유형과 요소를 탐색한다.", - "[4체04-06]음악(동요, 민요 등)에 맞추어 신체 또는 여러 가지 도구(공, 줄, 후프 등)를 활용한 다양한 동작을 표현 상황에 적용한다.", - "[4체04-07]개인 또는 모둠별로 리듬에 따른 다양한 동작을 구성하여 작품을 만들어 발표하고 이를 감상한다.", - "[4체04-08]리듬 표현 활동을 수행하며 리듬의 특징과 변화를 빠르게 수용하고 이를 신체 움직임에 반영하여 표현한다.", - "[4체05-01]신체활동에서 자주 발생하는 안전사고의 종류와 원인을 탐색한다.", - "[4체05-02]수상활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[4체05-03]신체활동 시 발생할 수 있는 위험 상황을 인지하며 안전하게 신체활동을 수행한다.", - "[4체05-04]운동 장비 사용 시 발생할 수 있는 안전사고의 종류와 원인을 탐색한다.", - "[4체05-05]게임 활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[4체05-06]신체활동 시 행동에 주의를 기울이며 안전하게 활동한다.", - "[4체01-01]건강한 생활 습관(몸의 바른 자세, 개인 위생, 비만 예방)을 알고 생활 속에서 규칙적으로 실천한다.", - "[4체01-02]다양한 운동 수행을 통해 체력의 향상과 건강한 생활을 경험한다.", - "[4체01-03]신체활동을 통해 다른 사람과 구별되는 자신의 신체적․정신적 특징 등을 인식한다.", - "[4체01-04]여가 활동 경험을 바탕으로 여가 활동의 의미와 건강과의 관계를 탐색한다.", - "[4체01-05]체격 및 체력의 특성을 이해하고 자신에게 맞는 체력 운동 계획을 세워 올바른 방법으로 수행한다.", - "[4체01-06]건강을 유지․증진하기 위한 체력 운동 및 여가 생활을 실천한다.", - "[4체02-01]속도를 향상시켜 자신의 기록을 단축하려는 속도 도전의 개념과 특성을 탐색한다.", - "[4체02-02]속도 도전과 관련된 여러 유형의 활동에 참여해 자신의 기록을 향상할 수 있는 기본자세와 동작을 찾아 도전 상황에 적용한다.", - "[4체02-03]자신의 속도 도전 결과를 시기별로 측정하여 그 과정의 장단점을 분석하고 기록을 향상할 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[4체02-04]수련을 통해 힘든 상황에서도 포기하지 않고 목표 달성을 위해 정진하며 속도에 도전한다.", - "[4체02-05]자신이 수행할 수 있는 최상의 자세와 동작을 수행하는 동작 도전의 개념과 특성을 탐색한다.", - "[4체02-06]동작 도전과 관련된 여러 유형의 활동에 참여해 수행의 성공에 도움이 되는 기본자세와 동작을 찾아 도전 상황에 적용한다.", - "[4체02-07]자신의 동작 도전 결과를 시기별로 측정하여 그 과정의 장단점을 분석하고 성공률을 높일 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[4체02-08]수련을 통해 동작 수행이 어렵거나 두려운 상황을 극복하며 동작에 도전한다.", - "[4체03-01]단순한 규칙으로 이루어진 게임을 종합적으로 체험함으로써 공통의 목표 달성을 위해 정해진 규칙을 지키며 상대와 실력을 겨루는 경쟁의 의미를 탐색한다.", - "[4체03-02]단순한 규칙으로 이루어진 게임을 수행하며 경쟁에 필요한 기본 기능을 탐색한다.", - "[4체03-03]게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색한다.", - 
"[4체03-04]경쟁의 과정에서 규칙의 필요성을 알고 합의된 규칙을 준수하며 게임을 수행한다.", - "[4체03-05]영역형 게임을 다양하게 체험함으로써 상대 영역으로 이동하여 정해진 지점으로 공을 보내 득점하는 영역형 경쟁의 개념과 특성을 탐색한다.", - "[4체03-06]영역형 게임의 기본 기능을 탐색하고 게임 상황에 맞게 적용한다.", - "[4체03-07]영역형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[4체03-08]공동의 목표 달성을 위해 협동의 필요성을 알고 팀원과 협력하며 게임을 수행한다.", - "[4체04-01]움직임 언어(이동 움직임, 비이동 움직임, 조작 움직임)와 표현 요소(신체, 공간, 노력, 관계)를 탐색한다.", - "[4체04-02]느낌이나 생각을 창의적인 움직임으로 표현하는 데 적합한 기본 동작을 다양한 표현 상황에 적용한다.", - "[4체04-03]개인 또는 모둠별로 움직임 언어나 표현 요소를 활용하여 구성한 작품을 발표하고 이를 감상한다.", - "[4체04-04]움직임 표현 활동을 수행하며 움직임 표현에 따른 자신의 신체 움직임과 신체의 변화 등을 인식한다.", - "[4체04-05]신체활동(체조, 줄넘기 등)에 나타나는 리듬의 유형과 요소를 탐색한다.", - "[4체04-06]음악(동요, 민요 등)에 맞추어 신체 또는 여러 가지 도구(공, 줄, 후프 등)를 활용한 다양한 동작을 표현 상황에 적용한다.", - "[4체04-07]개인 또는 모둠별로 리듬에 따른 다양한 동작을 구성하여 작품을 만들어 발표하고 이를 감상한다.", - "[4체04-08]리듬 표현 활동을 수행하며 리듬의 특징과 변화를 빠르게 수용하고 이를 신체 움직임에 반영하여 표현한다.", - "[4체05-01]신체활동에서 자주 발생하는 안전사고의 종류와 원인을 탐색한다.", - "[4체05-02]수상활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[4체05-03]신체활동 시 발생할 수 있는 위험 상황을 인지하며 안전하게 신체활동을 수행한다.", - "[4체05-04]운동 장비 사용 시 발생할 수 있는 안전사고의 종류와 원인을 탐색한다.", - "[4체05-05]게임 활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[4체05-06]신체활동 시 행동에 주의를 기울이며 안전하게 활동한다.", - "[4체01-01]건강한 생활 습관(몸의 바른 자세, 개인 위생, 비만 예방)을 알고 생활 속에서 규칙적으로 실천한다.", - "[4체01-02]다양한 운동 수행을 통해 체력의 향상과 건강한 생활을 경험한다.", - "[4체01-03]신체활동을 통해 다른 사람과 구별되는 자신의 신체적․정신적 특징 등을 인식한다.", - "[4체01-04]여가 활동 경험을 바탕으로 여가 활동의 의미와 건강과의 관계를 탐색한다.", - "[4체01-05]체격 및 체력의 특성을 이해하고 자신에게 맞는 체력 운동 계획을 세워 올바른 방법으로 수행한다.", - "[4체01-06]건강을 유지․증진하기 위한 체력 운동 및 여가 생활을 실천한다.", - "[4체02-01]속도를 향상시켜 자신의 기록을 단축하려는 속도 도전의 개념과 특성을 탐색한다.", - "[4체02-02]속도 도전과 관련된 여러 유형의 활동에 참여해 자신의 기록을 향상할 수 있는 기본자세와 동작을 찾아 도전 상황에 적용한다.", - "[4체02-03]자신의 속도 도전 결과를 시기별로 측정하여 그 과정의 장단점을 분석하고 기록을 향상할 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[4체02-04]수련을 통해 힘든 상황에서도 포기하지 않고 목표 달성을 위해 정진하며 속도에 도전한다.", - "[4체02-05]자신이 수행할 수 있는 최상의 자세와 동작을 수행하는 동작 도전의 개념과 특성을 탐색한다.", - "[4체02-06]동작 도전과 관련된 여러 유형의 활동에 참여해 수행의 성공에 도움이 되는 기본자세와 동작을 찾아 도전 상황에 적용한다.", - "[4체02-07]자신의 동작 도전 결과를 시기별로 측정하여 그 과정의 장단점을 분석하고 성공률을 높일 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[4체02-08]수련을 통해 동작 수행이 어렵거나 두려운 상황을 극복하며 동작에 도전한다.", - "[4체03-01]단순한 규칙으로 이루어진 게임을 종합적으로 체험함으로써 공통의 목표 달성을 위해 정해진 규칙을 지키며 상대와 실력을 겨루는 경쟁의 의미를 탐색한다.", - "[4체03-02]단순한 규칙으로 이루어진 게임을 수행하며 경쟁에 필요한 기본 기능을 탐색한다.", - "[4체03-03]게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색한다.", - "[4체03-04]경쟁의 과정에서 규칙의 필요성을 알고 합의된 규칙을 준수하며 게임을 수행한다.", - "[4체03-05]영역형 게임을 다양하게 체험함으로써 상대 영역으로 이동하여 정해진 지점으로 공을 보내 득점하는 영역형 경쟁의 개념과 특성을 탐색한다.", - "[4체03-06]영역형 게임의 기본 기능을 탐색하고 게임 상황에 맞게 적용한다.", - "[4체03-07]영역형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[4체03-08]공동의 목표 달성을 위해 협동의 필요성을 알고 팀원과 협력하며 게임을 수행한다.", - "[4체04-01]움직임 언어(이동 움직임, 비이동 움직임, 조작 움직임)와 표현 요소(신체, 공간, 노력, 관계)를 탐색한다.", - "[4체04-02]느낌이나 생각을 창의적인 움직임으로 표현하는 데 적합한 기본 동작을 다양한 표현 상황에 적용한다.", - "[4체04-03]개인 또는 모둠별로 움직임 언어나 표현 요소를 활용하여 구성한 작품을 발표하고 이를 감상한다.", - "[4체04-04]움직임 표현 활동을 수행하며 움직임 표현에 따른 자신의 신체 움직임과 신체의 변화 등을 인식한다.", - "[4체04-05]신체활동(체조, 줄넘기 등)에 나타나는 리듬의 유형과 요소를 탐색한다.", - "[4체04-06]음악(동요, 민요 등)에 맞추어 신체 또는 여러 가지 도구(공, 줄, 후프 등)를 활용한 다양한 동작을 표현 상황에 적용한다.", - "[4체04-07]개인 또는 모둠별로 리듬에 따른 다양한 동작을 구성하여 작품을 만들어 발표하고 이를 감상한다.", - "[4체04-08]리듬 표현 활동을 수행하며 리듬의 특징과 변화를 빠르게 수용하고 이를 신체 움직임에 반영하여 표현한다.", - "[4체05-01]신체활동에서 자주 발생하는 안전사고의 종류와 원인을 탐색한다.", - "[4체05-02]수상활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[4체05-03]신체활동 시 발생할 수 있는 위험 상황을 인지하며 안전하게 신체활동을 수행한다.", - "[4체05-04]운동 장비 사용 시 발생할 수 있는 안전사고의 종류와 원인을 탐색한다.", - "[4체05-05]게임 활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 
상황에 대처한다.", - "[4체05-06]신체활동 시 행동에 주의를 기울이며 안전하게 활동한다.", - "[4체01-01]건강한 생활 습관(몸의 바른 자세, 개인 위생, 비만 예방)을 알고 생활 속에서 규칙적으로 실천한다.", - "[4체01-02]다양한 운동 수행을 통해 체력의 향상과 건강한 생활을 경험한다.", - "[4체01-03]신체활동을 통해 다른 사람과 구별되는 자신의 신체적․정신적 특징 등을 인식한다.", - "[4체01-04]여가 활동 경험을 바탕으로 여가 활동의 의미와 건강과의 관계를 탐색한다.", - "[4체01-05]체격 및 체력의 특성을 이해하고 자신에게 맞는 체력 운동 계획을 세워 올바른 방법으로 수행한다.", - "[4체01-06]건강을 유지․증진하기 위한 체력 운동 및 여가 생활을 실천한다.", - "[4체02-01]속도를 향상시켜 자신의 기록을 단축하려는 속도 도전의 개념과 특성을 탐색한다.", - "[4체02-02]속도 도전과 관련된 여러 유형의 활동에 참여해 자신의 기록을 향상할 수 있는 기본자세와 동작을 찾아 도전 상황에 적용한다.", - "[4체02-03]자신의 속도 도전 결과를 시기별로 측정하여 그 과정의 장단점을 분석하고 기록을 향상할 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[4체02-04]수련을 통해 힘든 상황에서도 포기하지 않고 목표 달성을 위해 정진하며 속도에 도전한다.", - "[4체02-05]자신이 수행할 수 있는 최상의 자세와 동작을 수행하는 동작 도전의 개념과 특성을 탐색한다.", - "[4체02-06]동작 도전과 관련된 여러 유형의 활동에 참여해 수행의 성공에 도움이 되는 기본자세와 동작을 찾아 도전 상황에 적용한다.", - "[4체02-07]자신의 동작 도전 결과를 시기별로 측정하여 그 과정의 장단점을 분석하고 성공률을 높일 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[4체02-08]수련을 통해 동작 수행이 어렵거나 두려운 상황을 극복하며 동작에 도전한다.", - "[4체03-01]단순한 규칙으로 이루어진 게임을 종합적으로 체험함으로써 공통의 목표 달성을 위해 정해진 규칙을 지키며 상대와 실력을 겨루는 경쟁의 의미를 탐색한다.", - "[4체03-02]단순한 규칙으로 이루어진 게임을 수행하며 경쟁에 필요한 기본 기능을 탐색한다.", - "[4체03-03]게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색한다.", - "[4체03-04]경쟁의 과정에서 규칙의 필요성을 알고 합의된 규칙을 준수하며 게임을 수행한다.", - "[4체03-05]영역형 게임을 다양하게 체험함으로써 상대 영역으로 이동하여 정해진 지점으로 공을 보내 득점하는 영역형 경쟁의 개념과 특성을 탐색한다.", - "[4체03-06]영역형 게임의 기본 기능을 탐색하고 게임 상황에 맞게 적용한다.", - "[4체03-07]영역형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[4체03-08]공동의 목표 달성을 위해 협동의 필요성을 알고 팀원과 협력하며 게임을 수행한다.", - "[4체04-01]움직임 언어(이동 움직임, 비이동 움직임, 조작 움직임)와 표현 요소(신체, 공간, 노력, 관계)를 탐색한다.", - "[4체04-02]느낌이나 생각을 창의적인 움직임으로 표현하는 데 적합한 기본 동작을 다양한 표현 상황에 적용한다.", - "[4체04-03]개인 또는 모둠별로 움직임 언어나 표현 요소를 활용하여 구성한 작품을 발표하고 이를 감상한다.", - "[4체04-04]움직임 표현 활동을 수행하며 움직임 표현에 따른 자신의 신체 움직임과 신체의 변화 등을 인식한다.", - "[4체04-05]신체활동(체조, 줄넘기 등)에 나타나는 리듬의 유형과 요소를 탐색한다.", - "[4체04-06]음악(동요, 민요 등)에 맞추어 신체 또는 여러 가지 도구(공, 줄, 후프 등)를 활용한 다양한 동작을 표현 상황에 적용한다.", - "[4체04-07]개인 또는 모둠별로 리듬에 따른 다양한 동작을 구성하여 작품을 만들어 발표하고 이를 감상한다.", - "[4체04-08]리듬 표현 활동을 수행하며 리듬의 특징과 변화를 빠르게 수용하고 이를 신체 움직임에 반영하여 표현한다.", - "[4체05-01]신체활동에서 자주 발생하는 안전사고의 종류와 원인을 탐색한다.", - "[4체05-02]수상활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[4체05-03]신체활동 시 발생할 수 있는 위험 상황을 인지하며 안전하게 신체활동을 수행한다.", - "[4체05-04]운동 장비 사용 시 발생할 수 있는 안전사고의 종류와 원인을 탐색한다.", - "[4체05-05]게임 활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[4체05-06]신체활동 시 행동에 주의를 기울이며 안전하게 활동한다." 
- ], - "도덕": [ - "[4도01-01]도덕 시간에 무엇을 배우며 도덕 공부가 왜 필요한지를 알고 공부하는 사람으로서 지켜야 할 규칙을 모범 사례를 통해 습관화한다.", - "[4도01-02]시간과 물건의 소중함을 알고 자신이 시간과 물건을 아껴 쓰고 있는지 반성해 보며 그 모범 사례를 따라 습관화한다.", - "[4도01-03]최선을 다하는 삶을 위해 정성과 인내가 필요한 이유를 탐구하고 생활 계획을 세워본다.", - "[4도02-01]가족을 사랑하고 감사해야 하는 이유를 찾아보고, 가족 간에 지켜야 할 도리와 해야 할 일을 약속으로 정해 실천한다.", - "[4도02-02]친구의 소중함을 알고 친구와 사이좋게 지내며, 서로의 입장을 이해하고 인정한다.", - "[4도02-03]예절의 중요성을 이해하고, 대상과 상황에 따른 예절이 다름을 탐구하여 이를 습관화한다.", - "[4도02-04]협동의 의미와 중요성을 알고, 경청․도덕적 대화하기․도덕적 민감성을 통해 협동할 수 있는 능력을 기른다.", - "[4도03-01]공공장소에서 지켜야 할 규칙과 공익의 중요성을 알고, 공익에 기여하고자 하는 실천 의지를 기른다.", - "[4도03-02]다문화 사회에서 다양성을 수용해야 하는 이유를 탐구하고, 올바른 의사 결정 과정을 통해 다른 사람과 문화를 공정하게 대하는 태도를 지닌다.", - "[4도03-03]남북 분단 과정과 민족의 아픔을 통해 통일의 필요성을 알고, 통일에 대한 관심과 통일 의지를 기른다.", - "[4도04-01]생명의 소중함을 이해하고 인간 생명과 환경 문제에 관심을 가지며 인간 생명과 자연을 보호하려는 태도를 가진다.", - "[4도04-02]참된 아름다움을 올바르게 이해하고 느껴 생활 속에서 이를 실천한다.", - "[4도01-01]도덕 시간에 무엇을 배우며 도덕 공부가 왜 필요한지를 알고 공부하는 사람으로서 지켜야 할 규칙을 모범 사례를 통해 습관화한다.", - "[4도01-02]시간과 물건의 소중함을 알고 자신이 시간과 물건을 아껴 쓰고 있는지 반성해 보며 그 모범 사례를 따라 습관화한다.", - "[4도01-03]최선을 다하는 삶을 위해 정성과 인내가 필요한 이유를 탐구하고 생활 계획을 세워본다.", - "[4도02-01]가족을 사랑하고 감사해야 하는 이유를 찾아보고, 가족 간에 지켜야 할 도리와 해야 할 일을 약속으로 정해 실천한다.", - "[4도02-02]친구의 소중함을 알고 친구와 사이좋게 지내며, 서로의 입장을 이해하고 인정한다.", - "[4도02-03]예절의 중요성을 이해하고, 대상과 상황에 따른 예절이 다름을 탐구하여 이를 습관화한다.", - "[4도02-04]협동의 의미와 중요성을 알고, 경청․도덕적 대화하기․도덕적 민감성을 통해 협동할 수 있는 능력을 기른다.", - "[4도03-01]공공장소에서 지켜야 할 규칙과 공익의 중요성을 알고, 공익에 기여하고자 하는 실천 의지를 기른다.", - "[4도03-02]다문화 사회에서 다양성을 수용해야 하는 이유를 탐구하고, 올바른 의사 결정 과정을 통해 다른 사람과 문화를 공정하게 대하는 태도를 지닌다.", - "[4도03-03]남북 분단 과정과 민족의 아픔을 통해 통일의 필요성을 알고, 통일에 대한 관심과 통일 의지를 기른다.", - "[4도04-01]생명의 소중함을 이해하고 인간 생명과 환경 문제에 관심을 가지며 인간 생명과 자연을 보호하려는 태도를 가진다.", - "[4도04-02]참된 아름다움을 올바르게 이해하고 느껴 생활 속에서 이를 실천한다.", - "[4도01-01]도덕 시간에 무엇을 배우며 도덕 공부가 왜 필요한지를 알고 공부하는 사람으로서 지켜야 할 규칙을 모범 사례를 통해 습관화한다.", - "[4도01-02]시간과 물건의 소중함을 알고 자신이 시간과 물건을 아껴 쓰고 있는지 반성해 보며 그 모범 사례를 따라 습관화한다.", - "[4도01-03]최선을 다하는 삶을 위해 정성과 인내가 필요한 이유를 탐구하고 생활 계획을 세워본다.", - "[4도02-01]가족을 사랑하고 감사해야 하는 이유를 찾아보고, 가족 간에 지켜야 할 도리와 해야 할 일을 약속으로 정해 실천한다.", - "[4도02-02]친구의 소중함을 알고 친구와 사이좋게 지내며, 서로의 입장을 이해하고 인정한다.", - "[4도02-03]예절의 중요성을 이해하고, 대상과 상황에 따른 예절이 다름을 탐구하여 이를 습관화한다.", - "[4도02-04]협동의 의미와 중요성을 알고, 경청․도덕적 대화하기․도덕적 민감성을 통해 협동할 수 있는 능력을 기른다.", - "[4도03-01]공공장소에서 지켜야 할 규칙과 공익의 중요성을 알고, 공익에 기여하고자 하는 실천 의지를 기른다.", - "[4도03-02]다문화 사회에서 다양성을 수용해야 하는 이유를 탐구하고, 올바른 의사 결정 과정을 통해 다른 사람과 문화를 공정하게 대하는 태도를 지닌다.", - "[4도03-03]남북 분단 과정과 민족의 아픔을 통해 통일의 필요성을 알고, 통일에 대한 관심과 통일 의지를 기른다.", - "[4도04-01]생명의 소중함을 이해하고 인간 생명과 환경 문제에 관심을 가지며 인간 생명과 자연을 보호하려는 태도를 가진다.", - "[4도04-02]참된 아름다움을 올바르게 이해하고 느껴 생활 속에서 이를 실천한다.", - "[4도01-01]도덕 시간에 무엇을 배우며 도덕 공부가 왜 필요한지를 알고 공부하는 사람으로서 지켜야 할 규칙을 모범 사례를 통해 습관화한다.", - "[4도01-02]시간과 물건의 소중함을 알고 자신이 시간과 물건을 아껴 쓰고 있는지 반성해 보며 그 모범 사례를 따라 습관화한다.", - "[4도01-03]최선을 다하는 삶을 위해 정성과 인내가 필요한 이유를 탐구하고 생활 계획을 세워본다.", - "[4도02-01]가족을 사랑하고 감사해야 하는 이유를 찾아보고, 가족 간에 지켜야 할 도리와 해야 할 일을 약속으로 정해 실천한다.", - "[4도02-02]친구의 소중함을 알고 친구와 사이좋게 지내며, 서로의 입장을 이해하고 인정한다.", - "[4도02-03]예절의 중요성을 이해하고, 대상과 상황에 따른 예절이 다름을 탐구하여 이를 습관화한다.", - "[4도02-04]협동의 의미와 중요성을 알고, 경청․도덕적 대화하기․도덕적 민감성을 통해 협동할 수 있는 능력을 기른다.", - "[4도03-01]공공장소에서 지켜야 할 규칙과 공익의 중요성을 알고, 공익에 기여하고자 하는 실천 의지를 기른다.", - "[4도03-02]다문화 사회에서 다양성을 수용해야 하는 이유를 탐구하고, 올바른 의사 결정 과정을 통해 다른 사람과 문화를 공정하게 대하는 태도를 지닌다.", - "[4도03-03]남북 분단 과정과 민족의 아픔을 통해 통일의 필요성을 알고, 통일에 대한 관심과 통일 의지를 기른다.", - "[4도04-01]생명의 소중함을 이해하고 인간 생명과 환경 문제에 관심을 가지며 인간 생명과 자연을 보호하려는 태도를 가진다.", - "[4도04-02]참된 아름다움을 올바르게 이해하고 느껴 생활 속에서 이를 실천한다." 
- ]
- },
- "5~6학년군": {
- "국어": [
- "[6국01-01]구어 의사소통의 특성을 바탕으로 하여 듣기․말하기 활동을 한다.",
- "[6국01-02]의견을 제시하고 함께 조정하며 토의한다.",
- "[6국01-03]절차와 규칙을 지키고 근거를 제시하며 토론한다.",
- "[6국01-04]자료를 정리하여 말할 내용을 체계적으로 구성한다.",
- "[6국01-05]매체 자료를 활용하여 내용을 효과적으로 발표한다.",
- "[6국01-06]드러나지 않거나 생략된 내용을 추론하며 듣는다.",
- "[6국01-07]상대가 처한 상황을 이해하고 공감하며 듣는 태도를 지닌다.",
- "[6국02-02]글의 구조를 고려하여 글 전체의 내용을 요약한다.",
- "[6국02-03]글을 읽고 글쓴이가 말하고자 하는 주장이나 주제를 파악한다.",
- "[6국02-04]글을 읽고 내용의 타당성과 표현의 적절성을 판단한다.",
- "[6국02-05]매체에 따른 다양한 읽기 방법을 이해하고 적절하게 적용하며 읽는다.",
- "[6국02-06]자신의 읽기 습관을 점검하며 스스로 글을 찾아 읽는 태도를 지닌다.",
- "[6국03-01]쓰기는 절차에 따라 의미를 구성하고 표현하는 과정임을 이해하고 글을 쓴다.",
- "[6국03-02]목적이나 주제에 따라 알맞은 내용과 매체를 선정하여 글을 쓴다.",
- "[6국03-03]목적이나 대상에 따라 알맞은 형식과 자료를 사용하여 설명하는 글을 쓴다.",
- "[6국03-04]적절한 근거와 알맞은 표현을 사용하여 주장하는 글을 쓴다.",
- "[6국03-05]체험한 일에 대한 감상이 드러나게 글을 쓴다.",
- "[6국03-06]독자를 존중하고 배려하며 글을 쓰는 태도를 지닌다.",
- "[6국04-01]언어는 생각을 표현하며 다른 사람과 관계를 맺는 수단임을 이해하고 국어생활을 한다.",
- "[6국04-02]국어의 낱말 확장 방법을 탐구하고 어휘력을 높이는 데에 적용한다.",
- "[6국04-03]낱말이 상황에 따라 다양하게 해석됨을 탐구한다.",
- "[6국04-04]관용 표현을 이해하고 적절하게 활용한다.",
- "[6국04-05]국어의 문장 성분을 이해하고 호응 관계가 올바른 문장을 구성한다.",
- "[6국04-06]일상생활에서 국어를 바르게 사용하는 태도를 지닌다.",
- "[6국05-01]문학은 가치 있는 내용을 언어로 표현하여 아름다움을 느끼게 하는 활동임을 이해하고 문학 활동을 한다.",
- "[6국05-02]작품 속 세계와 현실 세계를 비교하며 작품을 감상한다.",
- "[6국05-03]비유적 표현의 특성과 효과를 살려 생각과 느낌을 다양하게 표현한다.",
- "[6국05-04]일상생활의 경험을 이야기나 극의 형식으로 표현한다.",
- "[6국05-05]작품에 대한 이해와 감상을 바탕으로 하여 다른 사람과 적극적으로 소통한다.",
- "[6국05-06]작품에서 얻은 깨달음을 바탕으로 하여 바람직한 삶의 가치를 내면화하는 태도를 지닌다."
- ], - "수학": [ - "[6수01-01]덧셈, 뺄셈, 곱셈, 나눗셈의 혼합 계산에서 계산하는 순서를 알고, 혼합 계산을 할 수 있다.", - "[6수01-02]약수, 공약수, 최대공약수의 의미를 알고 구할 수 있다.", - "[6수01-03]배수, 공배수, 최소공배수의 의미를 알고 구할 수 있다.", - "[6수01-04]약수와 배수의 관계를 이해한다.", - "[6수01-05]분수의 성질을 이용하여 크기가 같은 분수를 만들 수 있다.", - "[6수01-06]분수를 약분, 통분할 수 있다.", - "[6수01-07]분모가 다른 분수의 크기를 비교할 수 있다.", - "[6수01-08]분모가 다른 분수의 덧셈과 뺄셈의 계산 원리를 이해하고, 그 계산을 할 수 있다.", - "[6수01-09]분수의 곱셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[6수01-10]‘(자연수)÷(자연수)’에서 나눗셈의 몫을 분수로 나타낼 수 있다.", - "[6수01-11]분수의 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[6수01-12]분수와 소수의 관계를 이해하고 크기를 비교할 수 있다.", - "[6수01-13]소수의 곱셈의 계산 원리를 이해한다.", - "[6수01-14]‘(자연수)÷(자연수)’, ‘(소수)÷(자연수)’에서 나눗셈의 몫을 소수로 나타낼 수 있다.", - "[6수01-15]나누는 수가 소수인 나눗셈의 계산 원리를 이해한다.", - "[6수01-16]소수의 곱셈과 나눗셈의 계산 결과를 어림할 수 있다.", - "[6수02-01]구체적인 조작 활동을 통하여 도형의 합동의 의미를 알고, 합동인 도형을 찾을 수 있다.", - "[6수02-02]합동인 두 도형에서 대응점, 대응변, 대응각을 각각 찾고, 그 성질을 이해한다.", - "[6수02-03]선대칭도형과 점대칭도형을 이해하고 그릴 수 있다.", - "[6수02-04]직육면체와 정육면체를 알고, 구성 요소와 성질을 이해한다.", - "[6수02-05]직육면체와 정육면체의 겨냥도와 전개도를 그릴 수 있다.", - "[6수02-06]각기둥과 각뿔을 알고, 구성 요소와 성질을 이해한다.", - "[6수02-07]각기둥의 전개도를 그릴 수 있다.", - "[6수02-08]원기둥을 알고, 구성 요소, 성질, 전개도를 이해한다.", - "[6수02-09]원뿔과 구를 알고, 구성 요소와 성질을 이해한다.", - "[6수02-10]쌓기나무로 만든 입체도형을 보고 사용된 쌓기나무의 개수를 구할 수 있다.", - "[6수02-11]쌓기나무로 만든 입체도형의 위, 앞, 옆에서 본 모양을 표현할 수 있고, 이러한 표현을 보고 입체도형의 모양을 추측할 수 있다.", - "[6수03-01]실생활 장면에서 이상, 이하, 초과, 미만의 의미와 쓰임을 알고, 이를 활용하여 수의 범위를 나타낼 수 있다.", - "[6수03-02]어림값을 구하기 위한 방법으로 올림, 버림, 반올림의 의미와 필요성을 알고, 이를 실생활에 활용할 수 있다.", - "[6수03-03]평면도형의 둘레를 재어보는 활동을 통하여 둘레를 이해하고, 기본적인 평면도형의 둘레의 길이를 구할 수 있다.", - "[6수03-04]넓이를 나타내는 표준 단위의 필요성을 인식하여 1cm², 1m², 1km²의 단위를 알며, 그 관계를 이해한다.", - "[6수03-05]직사각형의 넓이를 구하는 방법을 이해하고, 이를 통하여 직사각형과 정사각형의 넓이를 구할 수 있다.", - "[6수03-06]평행사변형, 삼각형, 사다리꼴, 마름모의 넓이를 구하는 방법을 다양하게 추론하고, 이와 관련된 문제를 해결할 수 있다.", - "[6수03-07]여러 가지 둥근 물체의 원주와 지름을 측정하는 활동을 통하여 원주율을 이해한다.", - "[6수03-08]원주와 원의 넓이를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수03-09]직육면체와 정육면체의 겉넓이를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수03-10]부피를 이해하고, 1cm³, 1m³의 단위를 알며, 그 관계를 이해한다.", - "[6수03-11]직육면체와 정육면체의 부피를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수04-01]한 양이 변할 때 다른 양이 그에 종속하여 변하는 대응 관계를 나타낸 표에서 규칙을 찾아 설명하고, □, △ 등을 사용하여 식으로 나타낼 수 있다.", - "[6수04-02]두 양의 크기를 비교하는 상황을 통해 비의 개념을 이해하고, 그 관계를 비로 나타낼 수 있다.", - "[6수04-03]비율을 이해하고, 비율을 분수, 소수, 백분율로 나타낼 수 있다.", - "[6수04-04]비례식을 알고, 그 성질을 이해하며, 이를 활용하여 간단한 비례식을 풀 수 있다.", - "[6수04-05]비례배분을 알고, 주어진 양을 비례배분 할 수 있다.", - "[6수05-01]평균의 의미를 알고, 주어진 자료의 평균을 구할 수 있으며, 이를 활용할 수 있다.", - "[6수05-02]실생활 자료를 그림그래프로 나타내고, 이를 활용할 수 있다.", - "[6수05-03]주어진 자료를 띠그래프와 원그래프로 나타낼 수 있다.", - "[6수05-04]자료를 수집, 분류, 정리하여 목적에 맞는 그래프로 나타내고, 그래프를 해석할 수 있다.", - "[6수05-05]실생활에서 가능성과 관련된 상황을 ‘불가능하다’, ‘~아닐 것 같다’, ‘반반이다’, ‘~일 것 같다’, ‘확실하다’ 등으로 나타낼 수 있다.", - "[6수05-06]가능성을 수나 말로 나타낸 예를 찾아보고, 가능성을 비교할 수 있다.", - "[6수05-07]사건이 일어날 가능성을 수로 표현할 수 있다.", - "[6수01-01]덧셈, 뺄셈, 곱셈, 나눗셈의 혼합 계산에서 계산하는 순서를 알고, 혼합 계산을 할 수 있다.", - "[6수01-02]약수, 공약수, 최대공약수의 의미를 알고 구할 수 있다.", - "[6수01-03]배수, 공배수, 최소공배수의 의미를 알고 구할 수 있다.", - "[6수01-04]약수와 배수의 관계를 이해한다.", - "[6수01-05]분수의 성질을 이용하여 크기가 같은 분수를 만들 수 있다.", - "[6수01-06]분수를 약분, 통분할 수 있다.", - "[6수01-07]분모가 다른 분수의 크기를 비교할 수 있다.", - "[6수01-08]분모가 다른 분수의 덧셈과 뺄셈의 계산 원리를 이해하고, 그 계산을 할 수 있다.", - "[6수01-09]분수의 곱셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[6수01-10]‘(자연수)÷(자연수)’에서 나눗셈의 몫을 분수로 나타낼 수 있다.", - "[6수01-11]분수의 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[6수01-12]분수와 소수의 관계를 이해하고 크기를 비교할 수 있다.", - "[6수01-13]소수의 곱셈의 계산 원리를 이해한다.", - "[6수01-14]‘(자연수)÷(자연수)’, ‘(소수)÷(자연수)’에서 나눗셈의 몫을 소수로 나타낼 수 있다.", - "[6수01-15]나누는 수가 소수인 나눗셈의 계산 원리를 이해한다.", - "[6수01-16]소수의 곱셈과 나눗셈의 계산 결과를 어림할 수 있다.", - "[6수02-01]구체적인 조작 
활동을 통하여 도형의 합동의 의미를 알고, 합동인 도형을 찾을 수 있다.", - "[6수02-02]합동인 두 도형에서 대응점, 대응변, 대응각을 각각 찾고, 그 성질을 이해한다.", - "[6수02-03]선대칭도형과 점대칭도형을 이해하고 그릴 수 있다.", - "[6수02-04]직육면체와 정육면체를 알고, 구성 요소와 성질을 이해한다.", - "[6수02-05]직육면체와 정육면체의 겨냥도와 전개도를 그릴 수 있다.", - "[6수02-06]각기둥과 각뿔을 알고, 구성 요소와 성질을 이해한다.", - "[6수02-07]각기둥의 전개도를 그릴 수 있다.", - "[6수02-08]원기둥을 알고, 구성 요소, 성질, 전개도를 이해한다.", - "[6수02-09]원뿔과 구를 알고, 구성 요소와 성질을 이해한다.", - "[6수02-10]쌓기나무로 만든 입체도형을 보고 사용된 쌓기나무의 개수를 구할 수 있다.", - "[6수02-11]쌓기나무로 만든 입체도형의 위, 앞, 옆에서 본 모양을 표현할 수 있고, 이러한 표현을 보고 입체도형의 모양을 추측할 수 있다.", - "[6수03-01]실생활 장면에서 이상, 이하, 초과, 미만의 의미와 쓰임을 알고, 이를 활용하여 수의 범위를 나타낼 수 있다.", - "[6수03-02]어림값을 구하기 위한 방법으로 올림, 버림, 반올림의 의미와 필요성을 알고, 이를 실생활에 활용할 수 있다.", - "[6수03-03]평면도형의 둘레를 재어보는 활동을 통하여 둘레를 이해하고, 기본적인 평면도형의 둘레의 길이를 구할 수 있다.", - "[6수03-04]넓이를 나타내는 표준 단위의 필요성을 인식하여 1cm², 1m², 1km²의 단위를 알며, 그 관계를 이해한다.", - "[6수03-05]직사각형의 넓이를 구하는 방법을 이해하고, 이를 통하여 직사각형과 정사각형의 넓이를 구할 수 있다.", - "[6수03-06]평행사변형, 삼각형, 사다리꼴, 마름모의 넓이를 구하는 방법을 다양하게 추론하고, 이와 관련된 문제를 해결할 수 있다.", - "[6수03-07]여러 가지 둥근 물체의 원주와 지름을 측정하는 활동을 통하여 원주율을 이해한다.", - "[6수03-08]원주와 원의 넓이를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수03-09]직육면체와 정육면체의 겉넓이를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수03-10]부피를 이해하고, 1cm³, 1m³의 단위를 알며, 그 관계를 이해한다.", - "[6수03-11]직육면체와 정육면체의 부피를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수04-01]한 양이 변할 때 다른 양이 그에 종속하여 변하는 대응 관계를 나타낸 표에서 규칙을 찾아 설명하고, □, △ 등을 사용하여 식으로 나타낼 수 있다.", - "[6수04-02]두 양의 크기를 비교하는 상황을 통해 비의 개념을 이해하고, 그 관계를 비로 나타낼 수 있다.", - "[6수04-03]비율을 이해하고, 비율을 분수, 소수, 백분율로 나타낼 수 있다.", - "[6수04-04]비례식을 알고, 그 성질을 이해하며, 이를 활용하여 간단한 비례식을 풀 수 있다.", - "[6수04-05]비례배분을 알고, 주어진 양을 비례배분 할 수 있다.", - "[6수05-01]평균의 의미를 알고, 주어진 자료의 평균을 구할 수 있으며, 이를 활용할 수 있다.", - "[6수05-02]실생활 자료를 그림그래프로 나타내고, 이를 활용할 수 있다.", - "[6수05-03]주어진 자료를 띠그래프와 원그래프로 나타낼 수 있다.", - "[6수05-04]자료를 수집, 분류, 정리하여 목적에 맞는 그래프로 나타내고, 그래프를 해석할 수 있다.", - "[6수05-05]실생활에서 가능성과 관련된 상황을 ‘불가능하다’, ‘~아닐 것 같다’, ‘반반이다’, ‘~일 것 같다’, ‘확실하다’ 등으로 나타낼 수 있다.", - "[6수05-06]가능성을 수나 말로 나타낸 예를 찾아보고, 가능성을 비교할 수 있다.", - "[6수05-07]사건이 일어날 가능성을 수로 표현할 수 있다.", - "[6수01-01]덧셈, 뺄셈, 곱셈, 나눗셈의 혼합 계산에서 계산하는 순서를 알고, 혼합 계산을 할 수 있다.", - "[6수01-02]약수, 공약수, 최대공약수의 의미를 알고 구할 수 있다.", - "[6수01-03]배수, 공배수, 최소공배수의 의미를 알고 구할 수 있다.", - "[6수01-04]약수와 배수의 관계를 이해한다.", - "[6수01-05]분수의 성질을 이용하여 크기가 같은 분수를 만들 수 있다.", - "[6수01-06]분수를 약분, 통분할 수 있다.", - "[6수01-07]분모가 다른 분수의 크기를 비교할 수 있다.", - "[6수01-08]분모가 다른 분수의 덧셈과 뺄셈의 계산 원리를 이해하고, 그 계산을 할 수 있다.", - "[6수01-09]분수의 곱셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[6수01-10]‘(자연수)÷(자연수)’에서 나눗셈의 몫을 분수로 나타낼 수 있다.", - "[6수01-11]분수의 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[6수01-12]분수와 소수의 관계를 이해하고 크기를 비교할 수 있다.", - "[6수01-13]소수의 곱셈의 계산 원리를 이해한다.", - "[6수01-14]‘(자연수)÷(자연수)’, ‘(소수)÷(자연수)’에서 나눗셈의 몫을 소수로 나타낼 수 있다.", - "[6수01-15]나누는 수가 소수인 나눗셈의 계산 원리를 이해한다.", - "[6수01-16]소수의 곱셈과 나눗셈의 계산 결과를 어림할 수 있다.", - "[6수02-01]구체적인 조작 활동을 통하여 도형의 합동의 의미를 알고, 합동인 도형을 찾을 수 있다.", - "[6수02-02]합동인 두 도형에서 대응점, 대응변, 대응각을 각각 찾고, 그 성질을 이해한다.", - "[6수02-03]선대칭도형과 점대칭도형을 이해하고 그릴 수 있다.", - "[6수02-04]직육면체와 정육면체를 알고, 구성 요소와 성질을 이해한다.", - "[6수02-05]직육면체와 정육면체의 겨냥도와 전개도를 그릴 수 있다.", - "[6수02-06]각기둥과 각뿔을 알고, 구성 요소와 성질을 이해한다.", - "[6수02-07]각기둥의 전개도를 그릴 수 있다.", - "[6수02-08]원기둥을 알고, 구성 요소, 성질, 전개도를 이해한다.", - "[6수02-09]원뿔과 구를 알고, 구성 요소와 성질을 이해한다.", - "[6수02-10]쌓기나무로 만든 입체도형을 보고 사용된 쌓기나무의 개수를 구할 수 있다.", - "[6수02-11]쌓기나무로 만든 입체도형의 위, 앞, 옆에서 본 모양을 표현할 수 있고, 이러한 표현을 보고 입체도형의 모양을 추측할 수 있다.", - "[6수03-01]실생활 장면에서 이상, 이하, 초과, 미만의 의미와 쓰임을 알고, 이를 활용하여 수의 범위를 나타낼 수 있다.", - "[6수03-02]어림값을 구하기 위한 방법으로 올림, 버림, 반올림의 의미와 필요성을 알고, 이를 실생활에 활용할 수 있다.", - "[6수03-03]평면도형의 둘레를 재어보는 활동을 통하여 둘레를 이해하고, 기본적인 평면도형의 둘레의 길이를 구할 수 있다.", - "[6수03-04]넓이를 나타내는 
표준 단위의 필요성을 인식하여 1cm², 1m², 1km²의 단위를 알며, 그 관계를 이해한다.", - "[6수03-05]직사각형의 넓이를 구하는 방법을 이해하고, 이를 통하여 직사각형과 정사각형의 넓이를 구할 수 있다.", - "[6수03-06]평행사변형, 삼각형, 사다리꼴, 마름모의 넓이를 구하는 방법을 다양하게 추론하고, 이와 관련된 문제를 해결할 수 있다.", - "[6수03-07]여러 가지 둥근 물체의 원주와 지름을 측정하는 활동을 통하여 원주율을 이해한다.", - "[6수03-08]원주와 원의 넓이를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수03-09]직육면체와 정육면체의 겉넓이를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수03-10]부피를 이해하고, 1cm³, 1m³의 단위를 알며, 그 관계를 이해한다.", - "[6수03-11]직육면체와 정육면체의 부피를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수04-01]한 양이 변할 때 다른 양이 그에 종속하여 변하는 대응 관계를 나타낸 표에서 규칙을 찾아 설명하고, □, △ 등을 사용하여 식으로 나타낼 수 있다.", - "[6수04-02]두 양의 크기를 비교하는 상황을 통해 비의 개념을 이해하고, 그 관계를 비로 나타낼 수 있다.", - "[6수04-03]비율을 이해하고, 비율을 분수, 소수, 백분율로 나타낼 수 있다.", - "[6수04-04]비례식을 알고, 그 성질을 이해하며, 이를 활용하여 간단한 비례식을 풀 수 있다.", - "[6수04-05]비례배분을 알고, 주어진 양을 비례배분 할 수 있다.", - "[6수05-01]평균의 의미를 알고, 주어진 자료의 평균을 구할 수 있으며, 이를 활용할 수 있다.", - "[6수05-02]실생활 자료를 그림그래프로 나타내고, 이를 활용할 수 있다.", - "[6수05-03]주어진 자료를 띠그래프와 원그래프로 나타낼 수 있다.", - "[6수05-04]자료를 수집, 분류, 정리하여 목적에 맞는 그래프로 나타내고, 그래프를 해석할 수 있다.", - "[6수05-05]실생활에서 가능성과 관련된 상황을 ‘불가능하다’, ‘~아닐 것 같다’, ‘반반이다’, ‘~일 것 같다’, ‘확실하다’ 등으로 나타낼 수 있다.", - "[6수05-06]가능성을 수나 말로 나타낸 예를 찾아보고, 가능성을 비교할 수 있다.", - "[6수05-07]사건이 일어날 가능성을 수로 표현할 수 있다.", - "[6수01-01]덧셈, 뺄셈, 곱셈, 나눗셈의 혼합 계산에서 계산하는 순서를 알고, 혼합 계산을 할 수 있다.", - "[6수01-02]약수, 공약수, 최대공약수의 의미를 알고 구할 수 있다.", - "[6수01-03]배수, 공배수, 최소공배수의 의미를 알고 구할 수 있다.", - "[6수01-04]약수와 배수의 관계를 이해한다.", - "[6수01-05]분수의 성질을 이용하여 크기가 같은 분수를 만들 수 있다.", - "[6수01-06]분수를 약분, 통분할 수 있다.", - "[6수01-07]분모가 다른 분수의 크기를 비교할 수 있다.", - "[6수01-08]분모가 다른 분수의 덧셈과 뺄셈의 계산 원리를 이해하고, 그 계산을 할 수 있다.", - "[6수01-09]분수의 곱셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[6수01-10]‘(자연수)÷(자연수)’에서 나눗셈의 몫을 분수로 나타낼 수 있다.", - "[6수01-11]분수의 나눗셈의 계산 원리를 이해하고 그 계산을 할 수 있다.", - "[6수01-12]분수와 소수의 관계를 이해하고 크기를 비교할 수 있다.", - "[6수01-13]소수의 곱셈의 계산 원리를 이해한다.", - "[6수01-14]‘(자연수)÷(자연수)’, ‘(소수)÷(자연수)’에서 나눗셈의 몫을 소수로 나타낼 수 있다.", - "[6수01-15]나누는 수가 소수인 나눗셈의 계산 원리를 이해한다.", - "[6수01-16]소수의 곱셈과 나눗셈의 계산 결과를 어림할 수 있다.", - "[6수02-01]구체적인 조작 활동을 통하여 도형의 합동의 의미를 알고, 합동인 도형을 찾을 수 있다.", - "[6수02-02]합동인 두 도형에서 대응점, 대응변, 대응각을 각각 찾고, 그 성질을 이해한다.", - "[6수02-03]선대칭도형과 점대칭도형을 이해하고 그릴 수 있다.", - "[6수02-04]직육면체와 정육면체를 알고, 구성 요소와 성질을 이해한다.", - "[6수02-05]직육면체와 정육면체의 겨냥도와 전개도를 그릴 수 있다.", - "[6수02-06]각기둥과 각뿔을 알고, 구성 요소와 성질을 이해한다.", - "[6수02-07]각기둥의 전개도를 그릴 수 있다.", - "[6수02-08]원기둥을 알고, 구성 요소, 성질, 전개도를 이해한다.", - "[6수02-09]원뿔과 구를 알고, 구성 요소와 성질을 이해한다.", - "[6수02-10]쌓기나무로 만든 입체도형을 보고 사용된 쌓기나무의 개수를 구할 수 있다.", - "[6수02-11]쌓기나무로 만든 입체도형의 위, 앞, 옆에서 본 모양을 표현할 수 있고, 이러한 표현을 보고 입체도형의 모양을 추측할 수 있다.", - "[6수03-01]실생활 장면에서 이상, 이하, 초과, 미만의 의미와 쓰임을 알고, 이를 활용하여 수의 범위를 나타낼 수 있다.", - "[6수03-02]어림값을 구하기 위한 방법으로 올림, 버림, 반올림의 의미와 필요성을 알고, 이를 실생활에 활용할 수 있다.", - "[6수03-03]평면도형의 둘레를 재어보는 활동을 통하여 둘레를 이해하고, 기본적인 평면도형의 둘레의 길이를 구할 수 있다.", - "[6수03-04]넓이를 나타내는 표준 단위의 필요성을 인식하여 1cm², 1m², 1km²의 단위를 알며, 그 관계를 이해한다.", - "[6수03-05]직사각형의 넓이를 구하는 방법을 이해하고, 이를 통하여 직사각형과 정사각형의 넓이를 구할 수 있다.", - "[6수03-06]평행사변형, 삼각형, 사다리꼴, 마름모의 넓이를 구하는 방법을 다양하게 추론하고, 이와 관련된 문제를 해결할 수 있다.", - "[6수03-07]여러 가지 둥근 물체의 원주와 지름을 측정하는 활동을 통하여 원주율을 이해한다.", - "[6수03-08]원주와 원의 넓이를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수03-09]직육면체와 정육면체의 겉넓이를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수03-10]부피를 이해하고, 1cm³, 1m³의 단위를 알며, 그 관계를 이해한다.", - "[6수03-11]직육면체와 정육면체의 부피를 구하는 방법을 이해하고, 이를 구할 수 있다.", - "[6수04-01]한 양이 변할 때 다른 양이 그에 종속하여 변하는 대응 관계를 나타낸 표에서 규칙을 찾아 설명하고, □, △ 등을 사용하여 식으로 나타낼 수 있다.", - "[6수04-02]두 양의 크기를 비교하는 상황을 통해 비의 개념을 이해하고, 그 관계를 비로 나타낼 수 있다.", - "[6수04-03]비율을 이해하고, 비율을 분수, 소수, 백분율로 나타낼 수 있다.", - "[6수04-04]비례식을 알고, 그 성질을 이해하며, 이를 활용하여 간단한 비례식을 풀 수 있다.", - "[6수04-05]비례배분을 알고, 주어진 
양을 비례배분 할 수 있다.", - "[6수05-01]평균의 의미를 알고, 주어진 자료의 평균을 구할 수 있으며, 이를 활용할 수 있다.", - "[6수05-02]실생활 자료를 그림그래프로 나타내고, 이를 활용할 수 있다.", - "[6수05-03]주어진 자료를 띠그래프와 원그래프로 나타낼 수 있다.", - "[6수05-04]자료를 수집, 분류, 정리하여 목적에 맞는 그래프로 나타내고, 그래프를 해석할 수 있다.", - "[6수05-05]실생활에서 가능성과 관련된 상황을 ‘불가능하다’, ‘~아닐 것 같다’, ‘반반이다’, ‘~일 것 같다’, ‘확실하다’ 등으로 나타낼 수 있다.", - "[6수05-06]가능성을 수나 말로 나타낸 예를 찾아보고, 가능성을 비교할 수 있다.", - "[6수05-07]사건이 일어날 가능성을 수로 표현할 수 있다." - ], - "사회": [ - "[6사01-01]우리나라의 위치와 영역이 지니는 특성을 설명하고, 이를 바탕으로 하여 국토 사랑의 태도를 기른다.", - "[6사01-02]우리 국토를 구분하는 기준들을 살펴보고, 시・도 단위 행정구역 및 주요 도시들의 위치 특성을 파악한다.", - "[6사01-03]우리나라의 기후 환경 및 지형 환경에서 나타나는 특성을 탐구한다.", - "[6사01-04]우리나라 자연재해의 종류 및 대책을 탐색하고, 그와 관련된 생활 안전 수칙을 실천하는 태도를 지닌다.", - "[6사01-05]우리나라의 인구 분포 및 구조에서 나타난 변화와 도시 발달 과정에서 나타난 특징을 탐구한다.", - "[6사01-06]우리나라의 산업구조의 변화와 교통 발달 과정에서 나타난 특징을 탐구한다.", - "[6사02-01]인권의 중요성을 인식하고 인권 신장을 위해 노력했던 옛 사람들의 활동을 탐구한다.", - "[6사02-02]생활 속에서 인권 보장이 필요한 사례를 탐구하여 인권의 중요성을 인식하고, 인권 보호를 실천하는 태도를 기른다.", - "[6사02-03]인권 보장 측면에서 헌법의 의미와 역할을 탐구하고, 그 중요성을 설명한다.", - "[6사02-04]헌법에서 규정하는 기본권과 의무가 일상생활에 적용된 사례를 조사하고, 권리와 의무의 조화를 추구하는 자세를 기른다.", - "[6사02-05]우리 생활 속에서 법이 적용되는 다양한 사례를 제시하고, 법의 의미와 성격을 설명한다.", - "[6사02-06]법의 역할을 권리 보호와 질서 유지의 측면에서 설명하고, 법을 준수하는 태도를 기른다.", - "[6사03-01]고조선의 등장과 관련된 건국 이야기를 살펴보고, 고대 시기 나라의 발전에 기여한 인물(근초고왕, 광개토대왕, 김유신과 김춘추, 대조영 등)의 활동을 통하여 여러 나라가 성장하는 모습을 탐색한다.", - "[6사03-02]불국사와 석굴암, 미륵사 등 대표적인 문화유산을 통하여 고대 사람들이 이룩한 문화의 우수성을 탐색한다.", - "[6사03-03]고려를 세우고 외침을 막는 데 힘쓴 인물(왕건, 서희, 강감찬 등)의 업적을 통하여 고려의 개창과 외침 극복 과정을 탐색한다.", - "[6사03-04]고려청자와 금속 활자, 팔만대장경 등의 문화유산을 통하여 고려 시대 과학 기술과 문화의 우수성을 탐색한다.", - "[6사03-05]조선을 세우거나 문화 발전에 기여한 인물(이성계, 세종대왕, 신사임당 등)의 업적을 통해 조선 전기 정치와 민족문화의 발전상을 탐색한다.", - "[6사03-06]대표적인 유적지(행주산성, 남한산성 등)와 인물들(이순신과 곽재우, 김상헌과 최명길 등)의 활동을 통하여 임진왜란, 병자호란 등과 같은 국가적 위기의 극복 과정을 탐색한다.", - "[6사04-01]영・정조 시기의 개혁 정치와 서민 문화의 발달을 중심으로 조선 후기 사회와 문화의 변화 모습을 탐색한다.", - "[6사04-02]조선 사회의 모순을 극복하기 위해 개혁을 시도한 인물(정약용, 흥선 대원군, 김옥균과 전봉준 등)의 활동을 중심으로 사회 변화를 위한 옛 사람들의 노력을 탐색한다.", - "[6사04-03]일제의 침략에 맞서 나라를 지키고자 노력한 인물(명성황후, 안중근, 신돌석 등)의 활동에 대해 조사한다.", - "[6사04-04]광복을 위하여 힘쓴 인물(이회영, 김구, 유관순, 신채호 등)의 활동을 파악하고, 나라를 되찾기 위한 노력을 소중히 여기는 태도를 기른다.", - "[6사04-05]광복 이후 대한민국의 수립 과정을 살펴보고, 대한민국 수립의 의의를 파악한다.", - "[6사04-06]6․25 전쟁의 원인과 과정을 이해하고, 그 피해상과 영향을 탐구한다.", - "[6사05-01]4·19 혁명, 5·18 민주화 운동, 6월 민주 항쟁 등을 통해 자유민주주의가 발전해 온 과정을 파악한다.", - "[6사05-02]광복 이후 시민의 정치 참여 활동이 확대되는 과정을 중심으로 오늘날 우리 사회의 발전상을 살펴본다.", - "[6사05-03]일상생활에서 경험하는 민주주의 실천 사례를 탐구하여 민주주의의 의미와 중요성을 파악하고, 생활 속에서 민주주의를 실천하는 태도를 기른다.", - "[6사05-04]민주적 의사 결정 원리(다수결, 대화와 타협, 소수 의견 존중 등)의 의미와 필요성을 이해하고, 이를 실제 생활 속에서 실천하는 자세를 지닌다.", - "[6사05-05]민주정치의 기본원리(국민주권, 권력분립 등)을 이해하고, 그것이 적용된 다양한 사례를 탐구한다.", - "[6사05-06]국회, 행정부, 법원의 기능을 이해하고, 그것이 국민 생활에 미치는 영향을 다양한 사례를 통해 탐구한다.", - "[6사06-01]다양한 경제활동 사례를 통해 가계와 기업의 경제적 역할을 파악하고, 가계와 기업의 합리적 선택 방법을 탐색한다.", - "[6사06-02]여러 경제활동의 사례를 통하여 자유경쟁과 경제 정의의 조화를 추구하는 우리나라 경제체제의 특징을 설명한다.", - "[6사06-03]농업 중심 경제에서 공업・서비스업 중심 경제로 변화하는 모습을 중심으로 우리나라 경제성장 과정을 파악한다.", - "[6사06-04]광복 이후 경제성장 과정에서 우리 사회가 겪은 사회 변동의 특징과 다양한 문제를 살펴보고, 더 나은 사회를 만들기 위하여 해결해야 할 과제를 탐구한다.", - "[6사06-05]세계 여러 나라와의 경제 교류 활동으로 나타난 우리 경제생활의 변화 모습을 탐구한다.", - "[6사06-06]다양한 경제 교류 사례를 통해 우리나라 경제가 다른 나라와 상호 의존 및 경쟁 관계에 있음을 파악한다.", - "[6사07-01]세계지도, 지구본을 비롯한 다양한 형태의 공간 자료에 대한 기초적인 내용과 활용 방법을 알고, 이를 실제 생활에 활용한다.", - "[6사07-02]여러 시각 및 공간 자료를 활용하여 세계 주요 대륙과 대양의 위치 및 범위, 대륙별 주요 나라의 위치와 영토의 특징을 탐색한다.", - "[6사07-03]세계 주요 기후의 분포와 특성을 파악하고, 이를 바탕으로 하여 기후 환경과 인간 생활 간의 관계를 탐색한다.", - "[6사07-04]의식주 생활에 특색이 있는 나라나 지역의 사례를 조사하고, 이를 바탕으로 하여 인간 생활에 영향을 미치는 여러 자연적, 인문적 요인을 탐구한다.", - "[6사07-05]우리나라와 관계 깊은 나라들의 기초적인 지리 정보를 조사하고, 
정치・경제・문화면에서 맺고 있는 상호 의존 관계를 탐구한다.", - "[6사07-06]이웃 나라들(중국, 일본, 러시아)의 자연적, 인문적 특성과 교류 현황을 조사하고, 이를 바탕으로 하여 상호 이해와 협력의 태도를 기른다.", - "[6사08-01]독도를 지키려는 조상들의 노력을 역사적 자료를 통하여 살펴보고, 독도의 위치 등 지리적 특성에 대한 이해를 바탕으로 하여 영토주권 의식을 기른다.", - "[6사08-02]남북통일을 위한 노력을 살펴보고, 지구촌 평화에 기여하는 통일 한국의 미래상을 그려 본다.", - "[6사08-03]지구촌의 평화와 발전을 위협하는 다양한 갈등 사례를 조사하고 그 해결 방안을 탐색한다.", - "[6사08-04]지구촌의 평화와 발전을 위해 노력하는 다양한 행위 주체(개인, 국가, 국제기구, 비정부 기구 등)의 활동 사례를 조사한다.", - "[6사08-05]지구촌의 주요 환경문제를 조사하여 해결 방안을 탐색하고, 환경문제 해결에 협력하는 세계시민의 자세를 기른다.", - "[6사08-06]지속가능한 미래를 건설하기 위한 과제(친환경적 생산과 소비 방식 확산, 빈곤과 기아 퇴치, 문화적 편견과 차별 해소 등)를 조사하고, 세계시민으로서 이에 적극 참여하는 방안을 모색한다.", - "[6사01-01]우리나라의 위치와 영역이 지니는 특성을 설명하고, 이를 바탕으로 하여 국토 사랑의 태도를 기른다.", - "[6사01-02]우리 국토를 구분하는 기준들을 살펴보고, 시・도 단위 행정구역 및 주요 도시들의 위치 특성을 파악한다.", - "[6사01-03]우리나라의 기후 환경 및 지형 환경에서 나타나는 특성을 탐구한다.", - "[6사01-04]우리나라 자연재해의 종류 및 대책을 탐색하고, 그와 관련된 생활 안전 수칙을 실천하는 태도를 지닌다.", - "[6사01-05]우리나라의 인구 분포 및 구조에서 나타난 변화와 도시 발달 과정에서 나타난 특징을 탐구한다.", - "[6사01-06]우리나라의 산업구조의 변화와 교통 발달 과정에서 나타난 특징을 탐구한다.", - "[6사02-01]인권의 중요성을 인식하고 인권 신장을 위해 노력했던 옛 사람들의 활동을 탐구한다.", - "[6사02-02]생활 속에서 인권 보장이 필요한 사례를 탐구하여 인권의 중요성을 인식하고, 인권 보호를 실천하는 태도를 기른다.", - "[6사02-03]인권 보장 측면에서 헌법의 의미와 역할을 탐구하고, 그 중요성을 설명한다.", - "[6사02-04]헌법에서 규정하는 기본권과 의무가 일상생활에 적용된 사례를 조사하고, 권리와 의무의 조화를 추구하는 자세를 기른다.", - "[6사02-05]우리 생활 속에서 법이 적용되는 다양한 사례를 제시하고, 법의 의미와 성격을 설명한다.", - "[6사02-06]법의 역할을 권리 보호와 질서 유지의 측면에서 설명하고, 법을 준수하는 태도를 기른다.", - "[6사03-01]고조선의 등장과 관련된 건국 이야기를 살펴보고, 고대 시기 나라의 발전에 기여한 인물(근초고왕, 광개토대왕, 김유신과 김춘추, 대조영 등)의 활동을 통하여 여러 나라가 성장하는 모습을 탐색한다.", - "[6사03-02]불국사와 석굴암, 미륵사 등 대표적인 문화유산을 통하여 고대 사람들이 이룩한 문화의 우수성을 탐색한다.", - "[6사03-03]고려를 세우고 외침을 막는 데 힘쓴 인물(왕건, 서희, 강감찬 등)의 업적을 통하여 고려의 개창과 외침 극복 과정을 탐색한다.", - "[6사03-04]고려청자와 금속 활자, 팔만대장경 등의 문화유산을 통하여 고려 시대 과학 기술과 문화의 우수성을 탐색한다.", - "[6사03-05]조선을 세우거나 문화 발전에 기여한 인물(이성계, 세종대왕, 신사임당 등)의 업적을 통해 조선 전기 정치와 민족문화의 발전상을 탐색한다.", - "[6사03-06]대표적인 유적지(행주산성, 남한산성 등)와 인물들(이순신과 곽재우, 김상헌과 최명길 등)의 활동을 통하여 임진왜란, 병자호란 등과 같은 국가적 위기의 극복 과정을 탐색한다.", - "[6사04-01]영・정조 시기의 개혁 정치와 서민 문화의 발달을 중심으로 조선 후기 사회와 문화의 변화 모습을 탐색한다.", - "[6사04-02]조선 사회의 모순을 극복하기 위해 개혁을 시도한 인물(정약용, 흥선 대원군, 김옥균과 전봉준 등)의 활동을 중심으로 사회 변화를 위한 옛 사람들의 노력을 탐색한다.", - "[6사04-03]일제의 침략에 맞서 나라를 지키고자 노력한 인물(명성황후, 안중근, 신돌석 등)의 활동에 대해 조사한다.", - "[6사04-04]광복을 위하여 힘쓴 인물(이회영, 김구, 유관순, 신채호 등)의 활동을 파악하고, 나라를 되찾기 위한 노력을 소중히 여기는 태도를 기른다.", - "[6사04-05]광복 이후 대한민국의 수립 과정을 살펴보고, 대한민국 수립의 의의를 파악한다.", - "[6사04-06]6․25 전쟁의 원인과 과정을 이해하고, 그 피해상과 영향을 탐구한다.", - "[6사05-01]4·19 혁명, 5·18 민주화 운동, 6월 민주 항쟁 등을 통해 자유민주주의가 발전해 온 과정을 파악한다.", - "[6사05-02]광복 이후 시민의 정치 참여 활동이 확대되는 과정을 중심으로 오늘날 우리 사회의 발전상을 살펴본다.", - "[6사05-03]일상생활에서 경험하는 민주주의 실천 사례를 탐구하여 민주주의의 의미와 중요성을 파악하고, 생활 속에서 민주주의를 실천하는 태도를 기른다.", - "[6사05-04]민주적 의사 결정 원리(다수결, 대화와 타협, 소수 의견 존중 등)의 의미와 필요성을 이해하고, 이를 실제 생활 속에서 실천하는 자세를 지닌다.", - "[6사05-05]민주정치의 기본원리(국민주권, 권력분립 등)을 이해하고, 그것이 적용된 다양한 사례를 탐구한다.", - "[6사05-06]국회, 행정부, 법원의 기능을 이해하고, 그것이 국민 생활에 미치는 영향을 다양한 사례를 통해 탐구한다.", - "[6사06-01]다양한 경제활동 사례를 통해 가계와 기업의 경제적 역할을 파악하고, 가계와 기업의 합리적 선택 방법을 탐색한다.", - "[6사06-02]여러 경제활동의 사례를 통하여 자유경쟁과 경제 정의의 조화를 추구하는 우리나라 경제체제의 특징을 설명한다.", - "[6사06-03]농업 중심 경제에서 공업・서비스업 중심 경제로 변화하는 모습을 중심으로 우리나라 경제성장 과정을 파악한다.", - "[6사06-04]광복 이후 경제성장 과정에서 우리 사회가 겪은 사회 변동의 특징과 다양한 문제를 살펴보고, 더 나은 사회를 만들기 위하여 해결해야 할 과제를 탐구한다.", - "[6사06-05]세계 여러 나라와의 경제 교류 활동으로 나타난 우리 경제생활의 변화 모습을 탐구한다.", - "[6사06-06]다양한 경제 교류 사례를 통해 우리나라 경제가 다른 나라와 상호 의존 및 경쟁 관계에 있음을 파악한다.", - "[6사07-01]세계지도, 지구본을 비롯한 다양한 형태의 공간 자료에 대한 기초적인 내용과 활용 방법을 알고, 이를 실제 생활에 활용한다.", - "[6사07-02]여러 시각 및 공간 자료를 활용하여 세계 주요 대륙과 대양의 위치 및 범위, 대륙별 주요 나라의 위치와 영토의 특징을 탐색한다.", - "[6사07-03]세계 주요 기후의 분포와 특성을 
파악하고, 이를 바탕으로 하여 기후 환경과 인간 생활 간의 관계를 탐색한다.", - "[6사07-04]의식주 생활에 특색이 있는 나라나 지역의 사례를 조사하고, 이를 바탕으로 하여 인간 생활에 영향을 미치는 여러 자연적, 인문적 요인을 탐구한다.", - "[6사07-05]우리나라와 관계 깊은 나라들의 기초적인 지리 정보를 조사하고, 정치・경제・문화면에서 맺고 있는 상호 의존 관계를 탐구한다.", - "[6사07-06]이웃 나라들(중국, 일본, 러시아)의 자연적, 인문적 특성과 교류 현황을 조사하고, 이를 바탕으로 하여 상호 이해와 협력의 태도를 기른다.", - "[6사08-01]독도를 지키려는 조상들의 노력을 역사적 자료를 통하여 살펴보고, 독도의 위치 등 지리적 특성에 대한 이해를 바탕으로 하여 영토주권 의식을 기른다.", - "[6사08-02]남북통일을 위한 노력을 살펴보고, 지구촌 평화에 기여하는 통일 한국의 미래상을 그려 본다.", - "[6사08-03]지구촌의 평화와 발전을 위협하는 다양한 갈등 사례를 조사하고 그 해결 방안을 탐색한다.", - "[6사08-04]지구촌의 평화와 발전을 위해 노력하는 다양한 행위 주체(개인, 국가, 국제기구, 비정부 기구 등)의 활동 사례를 조사한다.", - "[6사08-05]지구촌의 주요 환경문제를 조사하여 해결 방안을 탐색하고, 환경문제 해결에 협력하는 세계시민의 자세를 기른다.", - "[6사08-06]지속가능한 미래를 건설하기 위한 과제(친환경적 생산과 소비 방식 확산, 빈곤과 기아 퇴치, 문화적 편견과 차별 해소 등)를 조사하고, 세계시민으로서 이에 적극 참여하는 방안을 모색한다.", - "[6사01-01]우리나라의 위치와 영역이 지니는 특성을 설명하고, 이를 바탕으로 하여 국토 사랑의 태도를 기른다.", - "[6사01-02]우리 국토를 구분하는 기준들을 살펴보고, 시・도 단위 행정구역 및 주요 도시들의 위치 특성을 파악한다.", - "[6사01-03]우리나라의 기후 환경 및 지형 환경에서 나타나는 특성을 탐구한다.", - "[6사01-04]우리나라 자연재해의 종류 및 대책을 탐색하고, 그와 관련된 생활 안전 수칙을 실천하는 태도를 지닌다.", - "[6사01-05]우리나라의 인구 분포 및 구조에서 나타난 변화와 도시 발달 과정에서 나타난 특징을 탐구한다.", - "[6사01-06]우리나라의 산업구조의 변화와 교통 발달 과정에서 나타난 특징을 탐구한다.", - "[6사02-01]인권의 중요성을 인식하고 인권 신장을 위해 노력했던 옛 사람들의 활동을 탐구한다.", - "[6사02-02]생활 속에서 인권 보장이 필요한 사례를 탐구하여 인권의 중요성을 인식하고, 인권 보호를 실천하는 태도를 기른다.", - "[6사02-03]인권 보장 측면에서 헌법의 의미와 역할을 탐구하고, 그 중요성을 설명한다.", - "[6사02-04]헌법에서 규정하는 기본권과 의무가 일상생활에 적용된 사례를 조사하고, 권리와 의무의 조화를 추구하는 자세를 기른다.", - "[6사02-05]우리 생활 속에서 법이 적용되는 다양한 사례를 제시하고, 법의 의미와 성격을 설명한다.", - "[6사02-06]법의 역할을 권리 보호와 질서 유지의 측면에서 설명하고, 법을 준수하는 태도를 기른다.", - "[6사03-01]고조선의 등장과 관련된 건국 이야기를 살펴보고, 고대 시기 나라의 발전에 기여한 인물(근초고왕, 광개토대왕, 김유신과 김춘추, 대조영 등)의 활동을 통하여 여러 나라가 성장하는 모습을 탐색한다.", - "[6사03-02]불국사와 석굴암, 미륵사 등 대표적인 문화유산을 통하여 고대 사람들이 이룩한 문화의 우수성을 탐색한다.", - "[6사03-03]고려를 세우고 외침을 막는 데 힘쓴 인물(왕건, 서희, 강감찬 등)의 업적을 통하여 고려의 개창과 외침 극복 과정을 탐색한다.", - "[6사03-04]고려청자와 금속 활자, 팔만대장경 등의 문화유산을 통하여 고려 시대 과학 기술과 문화의 우수성을 탐색한다.", - "[6사03-05]조선을 세우거나 문화 발전에 기여한 인물(이성계, 세종대왕, 신사임당 등)의 업적을 통해 조선 전기 정치와 민족문화의 발전상을 탐색한다.", - "[6사03-06]대표적인 유적지(행주산성, 남한산성 등)와 인물들(이순신과 곽재우, 김상헌과 최명길 등)의 활동을 통하여 임진왜란, 병자호란 등과 같은 국가적 위기의 극복 과정을 탐색한다.", - "[6사04-01]영・정조 시기의 개혁 정치와 서민 문화의 발달을 중심으로 조선 후기 사회와 문화의 변화 모습을 탐색한다.", - "[6사04-02]조선 사회의 모순을 극복하기 위해 개혁을 시도한 인물(정약용, 흥선 대원군, 김옥균과 전봉준 등)의 활동을 중심으로 사회 변화를 위한 옛 사람들의 노력을 탐색한다.", - "[6사04-03]일제의 침략에 맞서 나라를 지키고자 노력한 인물(명성황후, 안중근, 신돌석 등)의 활동에 대해 조사한다.", - "[6사04-04]광복을 위하여 힘쓴 인물(이회영, 김구, 유관순, 신채호 등)의 활동을 파악하고, 나라를 되찾기 위한 노력을 소중히 여기는 태도를 기른다.", - "[6사04-05]광복 이후 대한민국의 수립 과정을 살펴보고, 대한민국 수립의 의의를 파악한다.", - "[6사04-06]6․25 전쟁의 원인과 과정을 이해하고, 그 피해상과 영향을 탐구한다.", - "[6사05-01]4·19 혁명, 5·18 민주화 운동, 6월 민주 항쟁 등을 통해 자유민주주의가 발전해 온 과정을 파악한다.", - "[6사05-02]광복 이후 시민의 정치 참여 활동이 확대되는 과정을 중심으로 오늘날 우리 사회의 발전상을 살펴본다.", - "[6사05-03]일상생활에서 경험하는 민주주의 실천 사례를 탐구하여 민주주의의 의미와 중요성을 파악하고, 생활 속에서 민주주의를 실천하는 태도를 기른다.", - "[6사05-04]민주적 의사 결정 원리(다수결, 대화와 타협, 소수 의견 존중 등)의 의미와 필요성을 이해하고, 이를 실제 생활 속에서 실천하는 자세를 지닌다.", - "[6사05-05]민주정치의 기본원리(국민주권, 권력분립 등)을 이해하고, 그것이 적용된 다양한 사례를 탐구한다.", - "[6사05-06]국회, 행정부, 법원의 기능을 이해하고, 그것이 국민 생활에 미치는 영향을 다양한 사례를 통해 탐구한다.", - "[6사06-01]다양한 경제활동 사례를 통해 가계와 기업의 경제적 역할을 파악하고, 가계와 기업의 합리적 선택 방법을 탐색한다.", - "[6사06-02]여러 경제활동의 사례를 통하여 자유경쟁과 경제 정의의 조화를 추구하는 우리나라 경제체제의 특징을 설명한다.", - "[6사06-03]농업 중심 경제에서 공업・서비스업 중심 경제로 변화하는 모습을 중심으로 우리나라 경제성장 과정을 파악한다.", - "[6사06-04]광복 이후 경제성장 과정에서 우리 사회가 겪은 사회 변동의 특징과 다양한 문제를 살펴보고, 더 나은 사회를 만들기 위하여 해결해야 할 과제를 탐구한다.", - "[6사06-05]세계 여러 나라와의 경제 교류 활동으로 나타난 우리 경제생활의 변화 모습을 탐구한다.", - "[6사06-06]다양한 경제 교류 사례를 통해 우리나라 경제가 다른 나라와 상호 의존 및 경쟁 관계에 있음을 파악한다.", - 
"[6사07-01]세계지도, 지구본을 비롯한 다양한 형태의 공간 자료에 대한 기초적인 내용과 활용 방법을 알고, 이를 실제 생활에 활용한다.", - "[6사07-02]여러 시각 및 공간 자료를 활용하여 세계 주요 대륙과 대양의 위치 및 범위, 대륙별 주요 나라의 위치와 영토의 특징을 탐색한다.", - "[6사07-03]세계 주요 기후의 분포와 특성을 파악하고, 이를 바탕으로 하여 기후 환경과 인간 생활 간의 관계를 탐색한다.", - "[6사07-04]의식주 생활에 특색이 있는 나라나 지역의 사례를 조사하고, 이를 바탕으로 하여 인간 생활에 영향을 미치는 여러 자연적, 인문적 요인을 탐구한다.", - "[6사07-05]우리나라와 관계 깊은 나라들의 기초적인 지리 정보를 조사하고, 정치・경제・문화면에서 맺고 있는 상호 의존 관계를 탐구한다.", - "[6사07-06]이웃 나라들(중국, 일본, 러시아)의 자연적, 인문적 특성과 교류 현황을 조사하고, 이를 바탕으로 하여 상호 이해와 협력의 태도를 기른다.", - "[6사08-01]독도를 지키려는 조상들의 노력을 역사적 자료를 통하여 살펴보고, 독도의 위치 등 지리적 특성에 대한 이해를 바탕으로 하여 영토주권 의식을 기른다.", - "[6사08-02]남북통일을 위한 노력을 살펴보고, 지구촌 평화에 기여하는 통일 한국의 미래상을 그려 본다.", - "[6사08-03]지구촌의 평화와 발전을 위협하는 다양한 갈등 사례를 조사하고 그 해결 방안을 탐색한다.", - "[6사08-04]지구촌의 평화와 발전을 위해 노력하는 다양한 행위 주체(개인, 국가, 국제기구, 비정부 기구 등)의 활동 사례를 조사한다.", - "[6사08-05]지구촌의 주요 환경문제를 조사하여 해결 방안을 탐색하고, 환경문제 해결에 협력하는 세계시민의 자세를 기른다.", - "[6사08-06]지속가능한 미래를 건설하기 위한 과제(친환경적 생산과 소비 방식 확산, 빈곤과 기아 퇴치, 문화적 편견과 차별 해소 등)를 조사하고, 세계시민으로서 이에 적극 참여하는 방안을 모색한다.", - "[6사01-01]우리나라의 위치와 영역이 지니는 특성을 설명하고, 이를 바탕으로 하여 국토 사랑의 태도를 기른다.", - "[6사01-02]우리 국토를 구분하는 기준들을 살펴보고, 시・도 단위 행정구역 및 주요 도시들의 위치 특성을 파악한다.", - "[6사01-03]우리나라의 기후 환경 및 지형 환경에서 나타나는 특성을 탐구한다.", - "[6사01-04]우리나라 자연재해의 종류 및 대책을 탐색하고, 그와 관련된 생활 안전 수칙을 실천하는 태도를 지닌다.", - "[6사01-05]우리나라의 인구 분포 및 구조에서 나타난 변화와 도시 발달 과정에서 나타난 특징을 탐구한다.", - "[6사01-06]우리나라의 산업구조의 변화와 교통 발달 과정에서 나타난 특징을 탐구한다.", - "[6사02-01]인권의 중요성을 인식하고 인권 신장을 위해 노력했던 옛 사람들의 활동을 탐구한다.", - "[6사02-02]생활 속에서 인권 보장이 필요한 사례를 탐구하여 인권의 중요성을 인식하고, 인권 보호를 실천하는 태도를 기른다.", - "[6사02-03]인권 보장 측면에서 헌법의 의미와 역할을 탐구하고, 그 중요성을 설명한다.", - "[6사02-04]헌법에서 규정하는 기본권과 의무가 일상생활에 적용된 사례를 조사하고, 권리와 의무의 조화를 추구하는 자세를 기른다.", - "[6사02-05]우리 생활 속에서 법이 적용되는 다양한 사례를 제시하고, 법의 의미와 성격을 설명한다.", - "[6사02-06]법의 역할을 권리 보호와 질서 유지의 측면에서 설명하고, 법을 준수하는 태도를 기른다.", - "[6사03-01]고조선의 등장과 관련된 건국 이야기를 살펴보고, 고대 시기 나라의 발전에 기여한 인물(근초고왕, 광개토대왕, 김유신과 김춘추, 대조영 등)의 활동을 통하여 여러 나라가 성장하는 모습을 탐색한다.", - "[6사03-02]불국사와 석굴암, 미륵사 등 대표적인 문화유산을 통하여 고대 사람들이 이룩한 문화의 우수성을 탐색한다.", - "[6사03-03]고려를 세우고 외침을 막는 데 힘쓴 인물(왕건, 서희, 강감찬 등)의 업적을 통하여 고려의 개창과 외침 극복 과정을 탐색한다.", - "[6사03-04]고려청자와 금속 활자, 팔만대장경 등의 문화유산을 통하여 고려 시대 과학 기술과 문화의 우수성을 탐색한다.", - "[6사03-05]조선을 세우거나 문화 발전에 기여한 인물(이성계, 세종대왕, 신사임당 등)의 업적을 통해 조선 전기 정치와 민족문화의 발전상을 탐색한다.", - "[6사03-06]대표적인 유적지(행주산성, 남한산성 등)와 인물들(이순신과 곽재우, 김상헌과 최명길 등)의 활동을 통하여 임진왜란, 병자호란 등과 같은 국가적 위기의 극복 과정을 탐색한다.", - "[6사04-01]영・정조 시기의 개혁 정치와 서민 문화의 발달을 중심으로 조선 후기 사회와 문화의 변화 모습을 탐색한다.", - "[6사04-02]조선 사회의 모순을 극복하기 위해 개혁을 시도한 인물(정약용, 흥선 대원군, 김옥균과 전봉준 등)의 활동을 중심으로 사회 변화를 위한 옛 사람들의 노력을 탐색한다.", - "[6사04-03]일제의 침략에 맞서 나라를 지키고자 노력한 인물(명성황후, 안중근, 신돌석 등)의 활동에 대해 조사한다.", - "[6사04-04]광복을 위하여 힘쓴 인물(이회영, 김구, 유관순, 신채호 등)의 활동을 파악하고, 나라를 되찾기 위한 노력을 소중히 여기는 태도를 기른다.", - "[6사04-05]광복 이후 대한민국의 수립 과정을 살펴보고, 대한민국 수립의 의의를 파악한다.", - "[6사04-06]6․25 전쟁의 원인과 과정을 이해하고, 그 피해상과 영향을 탐구한다.", - "[6사05-01]4·19 혁명, 5·18 민주화 운동, 6월 민주 항쟁 등을 통해 자유민주주의가 발전해 온 과정을 파악한다.", - "[6사05-02]광복 이후 시민의 정치 참여 활동이 확대되는 과정을 중심으로 오늘날 우리 사회의 발전상을 살펴본다.", - "[6사05-03]일상생활에서 경험하는 민주주의 실천 사례를 탐구하여 민주주의의 의미와 중요성을 파악하고, 생활 속에서 민주주의를 실천하는 태도를 기른다.", - "[6사05-04]민주적 의사 결정 원리(다수결, 대화와 타협, 소수 의견 존중 등)의 의미와 필요성을 이해하고, 이를 실제 생활 속에서 실천하는 자세를 지닌다.", - "[6사05-05]민주정치의 기본원리(국민주권, 권력분립 등)을 이해하고, 그것이 적용된 다양한 사례를 탐구한다.", - "[6사05-06]국회, 행정부, 법원의 기능을 이해하고, 그것이 국민 생활에 미치는 영향을 다양한 사례를 통해 탐구한다.", - "[6사06-01]다양한 경제활동 사례를 통해 가계와 기업의 경제적 역할을 파악하고, 가계와 기업의 합리적 선택 방법을 탐색한다.", - "[6사06-02]여러 경제활동의 사례를 통하여 자유경쟁과 경제 정의의 조화를 추구하는 우리나라 경제체제의 특징을 설명한다.", - "[6사06-03]농업 중심 경제에서 공업・서비스업 중심 경제로 변화하는 모습을 중심으로 우리나라 경제성장 과정을 파악한다.", - "[6사06-04]광복 이후 경제성장 과정에서 우리 사회가 겪은 사회 변동의 특징과 
다양한 문제를 살펴보고, 더 나은 사회를 만들기 위하여 해결해야 할 과제를 탐구한다.", - "[6사06-05]세계 여러 나라와의 경제 교류 활동으로 나타난 우리 경제생활의 변화 모습을 탐구한다.", - "[6사06-06]다양한 경제 교류 사례를 통해 우리나라 경제가 다른 나라와 상호 의존 및 경쟁 관계에 있음을 파악한다.", - "[6사07-01]세계지도, 지구본을 비롯한 다양한 형태의 공간 자료에 대한 기초적인 내용과 활용 방법을 알고, 이를 실제 생활에 활용한다.", - "[6사07-02]여러 시각 및 공간 자료를 활용하여 세계 주요 대륙과 대양의 위치 및 범위, 대륙별 주요 나라의 위치와 영토의 특징을 탐색한다.", - "[6사07-03]세계 주요 기후의 분포와 특성을 파악하고, 이를 바탕으로 하여 기후 환경과 인간 생활 간의 관계를 탐색한다.", - "[6사07-04]의식주 생활에 특색이 있는 나라나 지역의 사례를 조사하고, 이를 바탕으로 하여 인간 생활에 영향을 미치는 여러 자연적, 인문적 요인을 탐구한다.", - "[6사07-05]우리나라와 관계 깊은 나라들의 기초적인 지리 정보를 조사하고, 정치・경제・문화면에서 맺고 있는 상호 의존 관계를 탐구한다.", - "[6사07-06]이웃 나라들(중국, 일본, 러시아)의 자연적, 인문적 특성과 교류 현황을 조사하고, 이를 바탕으로 하여 상호 이해와 협력의 태도를 기른다.", - "[6사08-01]독도를 지키려는 조상들의 노력을 역사적 자료를 통하여 살펴보고, 독도의 위치 등 지리적 특성에 대한 이해를 바탕으로 하여 영토주권 의식을 기른다.", - "[6사08-02]남북통일을 위한 노력을 살펴보고, 지구촌 평화에 기여하는 통일 한국의 미래상을 그려 본다.", - "[6사08-03]지구촌의 평화와 발전을 위협하는 다양한 갈등 사례를 조사하고 그 해결 방안을 탐색한다.", - "[6사08-04]지구촌의 평화와 발전을 위해 노력하는 다양한 행위 주체(개인, 국가, 국제기구, 비정부 기구 등)의 활동 사례를 조사한다.", - "[6사08-05]지구촌의 주요 환경문제를 조사하여 해결 방안을 탐색하고, 환경문제 해결에 협력하는 세계시민의 자세를 기른다.", - "[6사08-06]지속가능한 미래를 건설하기 위한 과제(친환경적 생산과 소비 방식 확산, 빈곤과 기아 퇴치, 문화적 편견과 차별 해소 등)를 조사하고, 세계시민으로서 이에 적극 참여하는 방안을 모색한다." - ], - "과학": [ - "[6과01-01]일상생활에서 온도를 어림하거나 측정하는 사례를 조사하고 정확한 온도 측정이 필요한 이유를 설명할 수 있다.", - "[6과01-02]온도가 다른 두 물체를 접촉하여 온도가 같아지는 현상을 관찰하고 물체의 온도 변화를 열의 이동으로 설명할 수 있다.", - "[6과01-03]고체 물질의 종류에 따라 열이 전도되는 빠르기를 관찰을 통해 비교하고 일상생활에서 단열을 이용하는 예를 조사할 수 있다.", - "[6과01-04]액체나 기체에서 대류 현상을 관찰하고 대류 현상에서 열의 이동을 설명할 수 있다.", - "[6과02-01]태양이 지구의 에너지원임을 이해하고 태양계를 구성하는 태양과 행성을 조사할 수 있다.", - "[6과02-02]별의 의미를 알고 대표적인 별자리를 조사할 수 있다.", - "[6과02-03]북쪽 하늘의 별자리를 이용하여 북극성을 찾을 수 있다.", - "[6과03-01]물질이 물에 녹는 현상을 관찰하고 용액을 설명할 수 있다.", - "[6과03-02]용질의 종류에 따라 물에 녹는 양이 달라짐을 비교할 수 있다.", - "[6과03-03]물의 온도에 따라 용질의 녹는 양이 달라짐을 실험할 수 있다.", - "[6과03-04]용액의 진하기를 상대적으로 비교하는 방법을 고안할 수 있다.", - "[6과04-01]동물과 식물 이외의 생물을 조사하여 생물의 종류와 특징을 말할 수 있다.", - "[6과04-03]우리 생활에 첨단 생명과학이 이용된 사례를 조사하여 발표할 수 있다.", - "[6과05-01]생태계가 생물 요소와 비생물 요소로 이루어져 있음을 알고 생태계 구성 요소들이 서로 영향을 주고받음을 설명할 수 있다.", - "[6과05-03]생태계 보전의 필요성을 인식하고 생태계 보전을 위해 우리가 할 수 있는 일에 대해 토의할 수 있다.", - "[6과06-01]습도를 측정하고 습도가 우리 생활에 영향을 주는 사례를 조사할 수 있다.", - "[6과06-02]이슬, 안개, 구름의 공통점과 차이점을 이해하고 비와 눈이 내리는 과정을 설명할 수 있다.", - "[6과06-03]저기압과 고기압이 무엇인지 알고 바람이 부는 이유를 설명할 수 있다.", - "[6과06-04]계절별 날씨의 특징을 우리나라에 영향을 주는 공기의 성질과 관련지을 수 있다.", - "[6과07-01]일상생활에서 물체의 운동을 관찰하여 속력을 정성적으로 비교할 수 있다.", - "[6과07-02]물체의 이동 거리와 걸린 시간을 조사하여 속력을 구할 수 있다.", - "[6과07-03]일상생활에서 속력과 관련된 안전 사항과 안전장치의 예를 찾아 발표할 수 있다.", - "[6과08-01]우리 주변에서 볼 수 있는 여러 가지 용액을 다양한 기준으로 분류할 수 있다.", - "[6과08-02]지시약을 이용하여 여러 가지 용액을 산성 용액과 염기성 용액으로 분류할 수 있다.", - "[6과08-03]산성 용액과 염기성 용액의 여러 가지 성질을 비교하고, 산성 용액과 염기성 용액을 섞었을 때의 변화를 관찰할 수 있다.", - "[6과08-04]우리 생활에서 산성 용액과 염기성 용액을 이용하는 예를 찾아 발표할 수 있다.", - "[6과09-01]하루 동안 태양과 달의 위치가 달라지는 것을 지구의 자전으로 설명할 수 있다.", - "[6과09-02]계절에 따라 별자리가 달라진다는 것을 지구의 공전으로 설명할 수 있다.", - "[6과09-03]달의 모양과 위치가 주기적으로 바뀌는 것을 관찰할 수 있다.", - "[6과10-01]산소, 이산화 탄소를 실험을 통해 발생시키고 성질을 확인한 후, 각 기체의 성질을 설명할 수 있다.", - "[6과10-02]온도와 압력에 따라 기체의 부피가 달라지는 현상을 관찰하고, 일상생활에서 이와 관련된 사례를 찾을 수 있다.", - "[6과10-03]공기를 이루는 여러 가지 기체를 조사하여 발표할 수 있다.", - "[6과11-01]햇빛이 프리즘에서 다양한 색으로 나타나는 현상을 관찰하여, 햇빛이 여러 가지 색의 빛으로 되어 있음을 설명할 수 있다.", - "[6과11-02]빛이 유리나 물, 볼록 렌즈를 통과하면서 굴절되는 현상을 관찰하고 관찰한 내용을 그림으로 표현할 수 있다.", - "[6과11-03]볼록 렌즈를 이용하여 물체의 모습을 관찰하고 볼록 렌즈의 쓰임새를 조사할 수 있다.", - "[6과12-01]생물체를 이루고 있는 기본 단위인 세포를 현미경으로 관찰할 수 있다.", - "[6과12-02]식물의 전체적인 구조 관찰과 실험을 통해 뿌리, 줄기, 잎, 꽃의 구조와 기능을 설명할 수 있다.", - "[6과12-03]여러 가지 식물의 씨가 퍼지는 방법을 조사하고, 씨가 퍼지는 방법이 다양함을 설명할 수 있다.", - "[6과13-01]전지와 전구, 전선을 연결하여 전구에 불이 
켜지는 조건을 찾아 설명할 수 있다.", - "[6과13-02]전구를 직렬연결 할 때와 병렬연결 할 때 전구의 밝기 차이를 비교할 수 있다.", - "[6과13-03]전기를 절약하고 안전하게 사용하는 방법을 토의할 수 있다.", - "[6과13-04]전자석을 만들어 영구 자석과 전자석을 비교하고, 일상생활에서 전자석이 사용되는 예를 조사할 수 있다.", - "[6과14-01]하루 동안 태양의 고도, 그림자 길이, 기온을 측정하여 이들 사이의 관계를 찾을 수 있다.", - "[6과14-02]계절에 따른 태양의 남중 고도, 낮과 밤의 길이, 기온 변화를 설명할 수 있다.", - "[6과14-03]계절 변화의 원인은 지구 자전축이 기울어진 채 공전하기 때문임을 모형실험을 통해 설명할 수 있다.", - "[6과15-01]물질이 탈 때 나타나는 공통적인 현상을 관찰하고, 연소의 조건을 찾을 수 있다.", - "[6과15-02]실험을 통해 연소 후에 생성되는 물질을 찾을 수 있다.", - "[6과15-03]연소의 조건과 관련지어 소화 방법을 제안하고 화재 안전 대책에 대해 토의할 수 있다.", - "[6과16-01]뼈와 근육의 생김새와 기능을 이해하여 몸이 움직이는 원리를 설명할 수 있다.", - "[6과16-02]소화, 순환, 호흡, 배설 기관의 종류, 위치, 생김새, 기능을 설명할 수 있다.", - "[6과16-03]감각 기관의 종류, 위치, 생김새, 기능을 알고 자극이 전달되는 과정을 설명할 수 있다.", - "[6과16-04]운동할 때 우리 몸에서 나타나는 변화를 관찰하여 우리 몸의 여러 기관이 서로 관련되어 있음을 알 수 있다.", - "[6과17-01]생물이 살아가거나 기계를 움직이는 데 에너지가 필요함을 알고, 이때 이용하는 에너지의 형태를 조사할 수 있다.", - "[6과17-02]자연 현상이나 일상생활의 예를 통해 에너지의 형태가 전환됨을 알고, 에너지를 효율적으로 사용하는 방법을 토의할 수 있다.", - "[6과01-01]일상생활에서 온도를 어림하거나 측정하는 사례를 조사하고 정확한 온도 측정이 필요한 이유를 설명할 수 있다.", - "[6과01-02]온도가 다른 두 물체를 접촉하여 온도가 같아지는 현상을 관찰하고 물체의 온도 변화를 열의 이동으로 설명할 수 있다.", - "[6과01-03]고체 물질의 종류에 따라 열이 전도되는 빠르기를 관찰을 통해 비교하고 일상생활에서 단열을 이용하는 예를 조사할 수 있다.", - "[6과01-04]액체나 기체에서 대류 현상을 관찰하고 대류 현상에서 열의 이동을 설명할 수 있다.", - "[6과02-01]태양이 지구의 에너지원임을 이해하고 태양계를 구성하는 태양과 행성을 조사할 수 있다.", - "[6과02-02]별의 의미를 알고 대표적인 별자리를 조사할 수 있다.", - "[6과02-03]북쪽 하늘의 별자리를 이용하여 북극성을 찾을 수 있다.", - "[6과03-01]물질이 물에 녹는 현상을 관찰하고 용액을 설명할 수 있다.", - "[6과03-02]용질의 종류에 따라 물에 녹는 양이 달라짐을 비교할 수 있다.", - "[6과03-03]물의 온도에 따라 용질의 녹는 양이 달라짐을 실험할 수 있다.", - "[6과03-04]용액의 진하기를 상대적으로 비교하는 방법을 고안할 수 있다.", - "[6과04-01]동물과 식물 이외의 생물을 조사하여 생물의 종류와 특징을 말할 수 있다.", - "[6과04-03]우리 생활에 첨단 생명과학이 이용된 사례를 조사하여 발표할 수 있다.", - "[6과05-01]생태계가 생물 요소와 비생물 요소로 이루어져 있음을 알고 생태계 구성 요소들이 서로 영향을 주고받음을 설명할 수 있다.", - "[6과05-03]생태계 보전의 필요성을 인식하고 생태계 보전을 위해 우리가 할 수 있는 일에 대해 토의할 수 있다.", - "[6과06-01]습도를 측정하고 습도가 우리 생활에 영향을 주는 사례를 조사할 수 있다.", - "[6과06-02]이슬, 안개, 구름의 공통점과 차이점을 이해하고 비와 눈이 내리는 과정을 설명할 수 있다.", - "[6과06-03]저기압과 고기압이 무엇인지 알고 바람이 부는 이유를 설명할 수 있다.", - "[6과06-04]계절별 날씨의 특징을 우리나라에 영향을 주는 공기의 성질과 관련지을 수 있다.", - "[6과07-01]일상생활에서 물체의 운동을 관찰하여 속력을 정성적으로 비교할 수 있다.", - "[6과07-02]물체의 이동 거리와 걸린 시간을 조사하여 속력을 구할 수 있다.", - "[6과07-03]일상생활에서 속력과 관련된 안전 사항과 안전장치의 예를 찾아 발표할 수 있다.", - "[6과08-01]우리 주변에서 볼 수 있는 여러 가지 용액을 다양한 기준으로 분류할 수 있다.", - "[6과08-02]지시약을 이용하여 여러 가지 용액을 산성 용액과 염기성 용액으로 분류할 수 있다.", - "[6과08-03]산성 용액과 염기성 용액의 여러 가지 성질을 비교하고, 산성 용액과 염기성 용액을 섞었을 때의 변화를 관찰할 수 있다.", - "[6과08-04]우리 생활에서 산성 용액과 염기성 용액을 이용하는 예를 찾아 발표할 수 있다.", - "[6과09-01]하루 동안 태양과 달의 위치가 달라지는 것을 지구의 자전으로 설명할 수 있다.", - "[6과09-02]계절에 따라 별자리가 달라진다는 것을 지구의 공전으로 설명할 수 있다.", - "[6과09-03]달의 모양과 위치가 주기적으로 바뀌는 것을 관찰할 수 있다.", - "[6과10-01]산소, 이산화 탄소를 실험을 통해 발생시키고 성질을 확인한 후, 각 기체의 성질을 설명할 수 있다.", - "[6과10-02]온도와 압력에 따라 기체의 부피가 달라지는 현상을 관찰하고, 일상생활에서 이와 관련된 사례를 찾을 수 있다.", - "[6과10-03]공기를 이루는 여러 가지 기체를 조사하여 발표할 수 있다.", - "[6과11-01]햇빛이 프리즘에서 다양한 색으로 나타나는 현상을 관찰하여, 햇빛이 여러 가지 색의 빛으로 되어 있음을 설명할 수 있다.", - "[6과11-02]빛이 유리나 물, 볼록 렌즈를 통과하면서 굴절되는 현상을 관찰하고 관찰한 내용을 그림으로 표현할 수 있다.", - "[6과11-03]볼록 렌즈를 이용하여 물체의 모습을 관찰하고 볼록 렌즈의 쓰임새를 조사할 수 있다.", - "[6과12-01]생물체를 이루고 있는 기본 단위인 세포를 현미경으로 관찰할 수 있다.", - "[6과12-02]식물의 전체적인 구조 관찰과 실험을 통해 뿌리, 줄기, 잎, 꽃의 구조와 기능을 설명할 수 있다.", - "[6과12-03]여러 가지 식물의 씨가 퍼지는 방법을 조사하고, 씨가 퍼지는 방법이 다양함을 설명할 수 있다.", - "[6과13-01]전지와 전구, 전선을 연결하여 전구에 불이 켜지는 조건을 찾아 설명할 수 있다.", - "[6과13-02]전구를 직렬연결 할 때와 병렬연결 할 때 전구의 밝기 차이를 비교할 수 있다.", - "[6과13-03]전기를 절약하고 안전하게 사용하는 방법을 토의할 수 있다.", - "[6과13-04]전자석을 만들어 영구 자석과 전자석을 비교하고, 일상생활에서 전자석이 사용되는 예를 조사할 수 있다.", - "[6과14-01]하루 동안 태양의 고도, 그림자 길이, 기온을 측정하여 
이들 사이의 관계를 찾을 수 있다.", - "[6과14-02]계절에 따른 태양의 남중 고도, 낮과 밤의 길이, 기온 변화를 설명할 수 있다.", - "[6과14-03]계절 변화의 원인은 지구 자전축이 기울어진 채 공전하기 때문임을 모형실험을 통해 설명할 수 있다.", - "[6과15-01]물질이 탈 때 나타나는 공통적인 현상을 관찰하고, 연소의 조건을 찾을 수 있다.", - "[6과15-02]실험을 통해 연소 후에 생성되는 물질을 찾을 수 있다.", - "[6과15-03]연소의 조건과 관련지어 소화 방법을 제안하고 화재 안전 대책에 대해 토의할 수 있다.", - "[6과16-01]뼈와 근육의 생김새와 기능을 이해하여 몸이 움직이는 원리를 설명할 수 있다.", - "[6과16-02]소화, 순환, 호흡, 배설 기관의 종류, 위치, 생김새, 기능을 설명할 수 있다.", - "[6과16-03]감각 기관의 종류, 위치, 생김새, 기능을 알고 자극이 전달되는 과정을 설명할 수 있다.", - "[6과16-04]운동할 때 우리 몸에서 나타나는 변화를 관찰하여 우리 몸의 여러 기관이 서로 관련되어 있음을 알 수 있다.", - "[6과17-01]생물이 살아가거나 기계를 움직이는 데 에너지가 필요함을 알고, 이때 이용하는 에너지의 형태를 조사할 수 있다.", - "[6과17-02]자연 현상이나 일상생활의 예를 통해 에너지의 형태가 전환됨을 알고, 에너지를 효율적으로 사용하는 방법을 토의할 수 있다.", - "[6과01-01]일상생활에서 온도를 어림하거나 측정하는 사례를 조사하고 정확한 온도 측정이 필요한 이유를 설명할 수 있다.", - "[6과01-02]온도가 다른 두 물체를 접촉하여 온도가 같아지는 현상을 관찰하고 물체의 온도 변화를 열의 이동으로 설명할 수 있다.", - "[6과01-03]고체 물질의 종류에 따라 열이 전도되는 빠르기를 관찰을 통해 비교하고 일상생활에서 단열을 이용하는 예를 조사할 수 있다.", - "[6과01-04]액체나 기체에서 대류 현상을 관찰하고 대류 현상에서 열의 이동을 설명할 수 있다.", - "[6과02-01]태양이 지구의 에너지원임을 이해하고 태양계를 구성하는 태양과 행성을 조사할 수 있다.", - "[6과02-02]별의 의미를 알고 대표적인 별자리를 조사할 수 있다.", - "[6과02-03]북쪽 하늘의 별자리를 이용하여 북극성을 찾을 수 있다.", - "[6과03-01]물질이 물에 녹는 현상을 관찰하고 용액을 설명할 수 있다.", - "[6과03-02]용질의 종류에 따라 물에 녹는 양이 달라짐을 비교할 수 있다.", - "[6과03-03]물의 온도에 따라 용질의 녹는 양이 달라짐을 실험할 수 있다.", - "[6과03-04]용액의 진하기를 상대적으로 비교하는 방법을 고안할 수 있다.", - "[6과04-01]동물과 식물 이외의 생물을 조사하여 생물의 종류와 특징을 말할 수 있다.", - "[6과04-03]우리 생활에 첨단 생명과학이 이용된 사례를 조사하여 발표할 수 있다.", - "[6과05-01]생태계가 생물 요소와 비생물 요소로 이루어져 있음을 알고 생태계 구성 요소들이 서로 영향을 주고받음을 설명할 수 있다.", - "[6과05-03]생태계 보전의 필요성을 인식하고 생태계 보전을 위해 우리가 할 수 있는 일에 대해 토의할 수 있다.", - "[6과06-01]습도를 측정하고 습도가 우리 생활에 영향을 주는 사례를 조사할 수 있다.", - "[6과06-02]이슬, 안개, 구름의 공통점과 차이점을 이해하고 비와 눈이 내리는 과정을 설명할 수 있다.", - "[6과06-03]저기압과 고기압이 무엇인지 알고 바람이 부는 이유를 설명할 수 있다.", - "[6과06-04]계절별 날씨의 특징을 우리나라에 영향을 주는 공기의 성질과 관련지을 수 있다.", - "[6과07-01]일상생활에서 물체의 운동을 관찰하여 속력을 정성적으로 비교할 수 있다.", - "[6과07-02]물체의 이동 거리와 걸린 시간을 조사하여 속력을 구할 수 있다.", - "[6과07-03]일상생활에서 속력과 관련된 안전 사항과 안전장치의 예를 찾아 발표할 수 있다.", - "[6과08-01]우리 주변에서 볼 수 있는 여러 가지 용액을 다양한 기준으로 분류할 수 있다.", - "[6과08-02]지시약을 이용하여 여러 가지 용액을 산성 용액과 염기성 용액으로 분류할 수 있다.", - "[6과08-03]산성 용액과 염기성 용액의 여러 가지 성질을 비교하고, 산성 용액과 염기성 용액을 섞었을 때의 변화를 관찰할 수 있다.", - "[6과08-04]우리 생활에서 산성 용액과 염기성 용액을 이용하는 예를 찾아 발표할 수 있다.", - "[6과09-01]하루 동안 태양과 달의 위치가 달라지는 것을 지구의 자전으로 설명할 수 있다.", - "[6과09-02]계절에 따라 별자리가 달라진다는 것을 지구의 공전으로 설명할 수 있다.", - "[6과09-03]달의 모양과 위치가 주기적으로 바뀌는 것을 관찰할 수 있다.", - "[6과10-01]산소, 이산화 탄소를 실험을 통해 발생시키고 성질을 확인한 후, 각 기체의 성질을 설명할 수 있다.", - "[6과10-02]온도와 압력에 따라 기체의 부피가 달라지는 현상을 관찰하고, 일상생활에서 이와 관련된 사례를 찾을 수 있다.", - "[6과10-03]공기를 이루는 여러 가지 기체를 조사하여 발표할 수 있다.", - "[6과11-01]햇빛이 프리즘에서 다양한 색으로 나타나는 현상을 관찰하여, 햇빛이 여러 가지 색의 빛으로 되어 있음을 설명할 수 있다.", - "[6과11-02]빛이 유리나 물, 볼록 렌즈를 통과하면서 굴절되는 현상을 관찰하고 관찰한 내용을 그림으로 표현할 수 있다.", - "[6과11-03]볼록 렌즈를 이용하여 물체의 모습을 관찰하고 볼록 렌즈의 쓰임새를 조사할 수 있다.", - "[6과12-01]생물체를 이루고 있는 기본 단위인 세포를 현미경으로 관찰할 수 있다.", - "[6과12-02]식물의 전체적인 구조 관찰과 실험을 통해 뿌리, 줄기, 잎, 꽃의 구조와 기능을 설명할 수 있다.", - "[6과12-03]여러 가지 식물의 씨가 퍼지는 방법을 조사하고, 씨가 퍼지는 방법이 다양함을 설명할 수 있다.", - "[6과13-01]전지와 전구, 전선을 연결하여 전구에 불이 켜지는 조건을 찾아 설명할 수 있다.", - "[6과13-02]전구를 직렬연결 할 때와 병렬연결 할 때 전구의 밝기 차이를 비교할 수 있다.", - "[6과13-03]전기를 절약하고 안전하게 사용하는 방법을 토의할 수 있다.", - "[6과13-04]전자석을 만들어 영구 자석과 전자석을 비교하고, 일상생활에서 전자석이 사용되는 예를 조사할 수 있다.", - "[6과14-01]하루 동안 태양의 고도, 그림자 길이, 기온을 측정하여 이들 사이의 관계를 찾을 수 있다.", - "[6과14-02]계절에 따른 태양의 남중 고도, 낮과 밤의 길이, 기온 변화를 설명할 수 있다.", - "[6과14-03]계절 변화의 원인은 지구 자전축이 기울어진 채 공전하기 때문임을 모형실험을 통해 설명할 수 있다.", - "[6과15-01]물질이 탈 때 나타나는 공통적인 현상을 관찰하고, 연소의 조건을 찾을 수 있다.", - "[6과15-02]실험을 통해 연소 후에 생성되는 물질을 
찾을 수 있다.", - "[6과15-03]연소의 조건과 관련지어 소화 방법을 제안하고 화재 안전 대책에 대해 토의할 수 있다.", - "[6과16-01]뼈와 근육의 생김새와 기능을 이해하여 몸이 움직이는 원리를 설명할 수 있다.", - "[6과16-02]소화, 순환, 호흡, 배설 기관의 종류, 위치, 생김새, 기능을 설명할 수 있다.", - "[6과16-03]감각 기관의 종류, 위치, 생김새, 기능을 알고 자극이 전달되는 과정을 설명할 수 있다.", - "[6과16-04]운동할 때 우리 몸에서 나타나는 변화를 관찰하여 우리 몸의 여러 기관이 서로 관련되어 있음을 알 수 있다.", - "[6과17-01]생물이 살아가거나 기계를 움직이는 데 에너지가 필요함을 알고, 이때 이용하는 에너지의 형태를 조사할 수 있다.", - "[6과17-02]자연 현상이나 일상생활의 예를 통해 에너지의 형태가 전환됨을 알고, 에너지를 효율적으로 사용하는 방법을 토의할 수 있다.", - "[6과01-01]일상생활에서 온도를 어림하거나 측정하는 사례를 조사하고 정확한 온도 측정이 필요한 이유를 설명할 수 있다.", - "[6과01-02]온도가 다른 두 물체를 접촉하여 온도가 같아지는 현상을 관찰하고 물체의 온도 변화를 열의 이동으로 설명할 수 있다.", - "[6과01-03]고체 물질의 종류에 따라 열이 전도되는 빠르기를 관찰을 통해 비교하고 일상생활에서 단열을 이용하는 예를 조사할 수 있다.", - "[6과01-04]액체나 기체에서 대류 현상을 관찰하고 대류 현상에서 열의 이동을 설명할 수 있다.", - "[6과02-01]태양이 지구의 에너지원임을 이해하고 태양계를 구성하는 태양과 행성을 조사할 수 있다.", - "[6과02-02]별의 의미를 알고 대표적인 별자리를 조사할 수 있다.", - "[6과02-03]북쪽 하늘의 별자리를 이용하여 북극성을 찾을 수 있다.", - "[6과03-01]물질이 물에 녹는 현상을 관찰하고 용액을 설명할 수 있다.", - "[6과03-02]용질의 종류에 따라 물에 녹는 양이 달라짐을 비교할 수 있다.", - "[6과03-03]물의 온도에 따라 용질의 녹는 양이 달라짐을 실험할 수 있다.", - "[6과03-04]용액의 진하기를 상대적으로 비교하는 방법을 고안할 수 있다.", - "[6과04-01]동물과 식물 이외의 생물을 조사하여 생물의 종류와 특징을 말할 수 있다.", - "[6과04-03]우리 생활에 첨단 생명과학이 이용된 사례를 조사하여 발표할 수 있다.", - "[6과05-01]생태계가 생물 요소와 비생물 요소로 이루어져 있음을 알고 생태계 구성 요소들이 서로 영향을 주고받음을 설명할 수 있다.", - "[6과05-03]생태계 보전의 필요성을 인식하고 생태계 보전을 위해 우리가 할 수 있는 일에 대해 토의할 수 있다.", - "[6과06-01]습도를 측정하고 습도가 우리 생활에 영향을 주는 사례를 조사할 수 있다.", - "[6과06-02]이슬, 안개, 구름의 공통점과 차이점을 이해하고 비와 눈이 내리는 과정을 설명할 수 있다.", - "[6과06-03]저기압과 고기압이 무엇인지 알고 바람이 부는 이유를 설명할 수 있다.", - "[6과06-04]계절별 날씨의 특징을 우리나라에 영향을 주는 공기의 성질과 관련지을 수 있다.", - "[6과07-01]일상생활에서 물체의 운동을 관찰하여 속력을 정성적으로 비교할 수 있다.", - "[6과07-02]물체의 이동 거리와 걸린 시간을 조사하여 속력을 구할 수 있다.", - "[6과07-03]일상생활에서 속력과 관련된 안전 사항과 안전장치의 예를 찾아 발표할 수 있다.", - "[6과08-01]우리 주변에서 볼 수 있는 여러 가지 용액을 다양한 기준으로 분류할 수 있다.", - "[6과08-02]지시약을 이용하여 여러 가지 용액을 산성 용액과 염기성 용액으로 분류할 수 있다.", - "[6과08-03]산성 용액과 염기성 용액의 여러 가지 성질을 비교하고, 산성 용액과 염기성 용액을 섞었을 때의 변화를 관찰할 수 있다.", - "[6과08-04]우리 생활에서 산성 용액과 염기성 용액을 이용하는 예를 찾아 발표할 수 있다.", - "[6과09-01]하루 동안 태양과 달의 위치가 달라지는 것을 지구의 자전으로 설명할 수 있다.", - "[6과09-02]계절에 따라 별자리가 달라진다는 것을 지구의 공전으로 설명할 수 있다.", - "[6과09-03]달의 모양과 위치가 주기적으로 바뀌는 것을 관찰할 수 있다.", - "[6과10-01]산소, 이산화 탄소를 실험을 통해 발생시키고 성질을 확인한 후, 각 기체의 성질을 설명할 수 있다.", - "[6과10-02]온도와 압력에 따라 기체의 부피가 달라지는 현상을 관찰하고, 일상생활에서 이와 관련된 사례를 찾을 수 있다.", - "[6과10-03]공기를 이루는 여러 가지 기체를 조사하여 발표할 수 있다.", - "[6과11-01]햇빛이 프리즘에서 다양한 색으로 나타나는 현상을 관찰하여, 햇빛이 여러 가지 색의 빛으로 되어 있음을 설명할 수 있다.", - "[6과11-02]빛이 유리나 물, 볼록 렌즈를 통과하면서 굴절되는 현상을 관찰하고 관찰한 내용을 그림으로 표현할 수 있다.", - "[6과11-03]볼록 렌즈를 이용하여 물체의 모습을 관찰하고 볼록 렌즈의 쓰임새를 조사할 수 있다.", - "[6과12-01]생물체를 이루고 있는 기본 단위인 세포를 현미경으로 관찰할 수 있다.", - "[6과12-02]식물의 전체적인 구조 관찰과 실험을 통해 뿌리, 줄기, 잎, 꽃의 구조와 기능을 설명할 수 있다.", - "[6과12-03]여러 가지 식물의 씨가 퍼지는 방법을 조사하고, 씨가 퍼지는 방법이 다양함을 설명할 수 있다.", - "[6과13-01]전지와 전구, 전선을 연결하여 전구에 불이 켜지는 조건을 찾아 설명할 수 있다.", - "[6과13-02]전구를 직렬연결 할 때와 병렬연결 할 때 전구의 밝기 차이를 비교할 수 있다.", - "[6과13-03]전기를 절약하고 안전하게 사용하는 방법을 토의할 수 있다.", - "[6과13-04]전자석을 만들어 영구 자석과 전자석을 비교하고, 일상생활에서 전자석이 사용되는 예를 조사할 수 있다.", - "[6과14-01]하루 동안 태양의 고도, 그림자 길이, 기온을 측정하여 이들 사이의 관계를 찾을 수 있다.", - "[6과14-02]계절에 따른 태양의 남중 고도, 낮과 밤의 길이, 기온 변화를 설명할 수 있다.", - "[6과14-03]계절 변화의 원인은 지구 자전축이 기울어진 채 공전하기 때문임을 모형실험을 통해 설명할 수 있다.", - "[6과15-01]물질이 탈 때 나타나는 공통적인 현상을 관찰하고, 연소의 조건을 찾을 수 있다.", - "[6과15-02]실험을 통해 연소 후에 생성되는 물질을 찾을 수 있다.", - "[6과15-03]연소의 조건과 관련지어 소화 방법을 제안하고 화재 안전 대책에 대해 토의할 수 있다.", - "[6과16-01]뼈와 근육의 생김새와 기능을 이해하여 몸이 움직이는 원리를 설명할 수 있다.", - "[6과16-02]소화, 순환, 호흡, 배설 기관의 종류, 위치, 생김새, 기능을 설명할 수 있다.", - "[6과16-03]감각 기관의 종류, 위치, 생김새, 기능을 알고 자극이 전달되는 과정을 
설명할 수 있다.", - "[6과16-04]운동할 때 우리 몸에서 나타나는 변화를 관찰하여 우리 몸의 여러 기관이 서로 관련되어 있음을 알 수 있다.", - "[6과17-01]생물이 살아가거나 기계를 움직이는 데 에너지가 필요함을 알고, 이때 이용하는 에너지의 형태를 조사할 수 있다.", - "[6과17-02]자연 현상이나 일상생활의 예를 통해 에너지의 형태가 전환됨을 알고, 에너지를 효율적으로 사용하는 방법을 토의할 수 있다." - ], - "영어": [ - "[6영01-01]두세 개의 연속된 지시나 설명을 듣고 이해할 수 있다.", - "[6영01-02]일상생활 속의 친숙한 주제에 관한 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-03]그림이나 도표에 대한 쉽고 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-04]대상을 비교하는 쉽고 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-05]쉽고 간단한 말이나 대화를 듣고 줄거리를 파악할 수 있다.", - "[6영01-06]쉽고 간단한 말이나 대화를 듣고 목적을 파악할 수 있다.", - "[6영01-07]쉽고 간단한 말이나 대화를 듣고 일의 순서를 파악할 수 있다.", - "[6영02-01]그림, 실물, 동작에 관해 한두 문장으로 표현할 수 있다.", - "[6영02-02]주변 사람에 관해 쉽고 간단한 문장으로 소개할 수 있다.", - "[6영02-03]주변 사람과 사물에 관해 쉽고 간단한 문장으로 묘사할 수 있다.", - "[6영02-04]주변 위치나 장소에 관해 쉽고 간단한 문장으로 설명할 수 있다.", - "[6영02-05]간단한 그림이나 도표의 세부 정보에 대해 묻거나 답할 수 있다.", - "[6영02-06]자신의 경험이나 계획에 대해 간단히 묻거나 답할 수 있다.", - "[6영02-07]일상생활 속의 친숙한 주제에 관해 간단히 묻거나 답할 수 있다.", - "[6영03-01]쉽고 간단한 문장을 강세, 리듬, 억양에 맞게 소리 내어 읽을 수 있다.", - "[6영03-02]그림이나 도표에 대한 쉽고 짧은 글을 읽고 세부 정보를 파악할 수 있다.", - "[6영03-03]일상생활 속의 친숙한 주제에 관한 쉽고 짧은 글을 읽고 세부 정보를 파악할 수 있다.", - "[6영03-04]쉽고 짧은 글을 읽고 줄거리나 목적 등 중심 내용을 파악할 수 있다.", - "[6영04-01]소리와 철자의 관계를 바탕으로 쉽고 간단한 낱말이나 어구를 듣고 쓸 수 있다.", - "[6영04-02]알파벳 대소문자와 문장부호를 문장에서 바르게 사용할 수 있다.", - "[6영04-03]구두로 익힌 문장을 쓸 수 있다.", - "[6영04-04]실물이나 그림을 보고 한두 문장으로 표현할 수 있다.", - "[6영04-05]예시문을 참고하여 간단한 초대, 감사, 축하 등의 글을 쓸 수 있다.", - "[6영01-01]두세 개의 연속된 지시나 설명을 듣고 이해할 수 있다.", - "[6영01-02]일상생활 속의 친숙한 주제에 관한 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-03]그림이나 도표에 대한 쉽고 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-04]대상을 비교하는 쉽고 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-05]쉽고 간단한 말이나 대화를 듣고 줄거리를 파악할 수 있다.", - "[6영01-06]쉽고 간단한 말이나 대화를 듣고 목적을 파악할 수 있다.", - "[6영01-07]쉽고 간단한 말이나 대화를 듣고 일의 순서를 파악할 수 있다.", - "[6영02-01]그림, 실물, 동작에 관해 한두 문장으로 표현할 수 있다.", - "[6영02-02]주변 사람에 관해 쉽고 간단한 문장으로 소개할 수 있다.", - "[6영02-03]주변 사람과 사물에 관해 쉽고 간단한 문장으로 묘사할 수 있다.", - "[6영02-04]주변 위치나 장소에 관해 쉽고 간단한 문장으로 설명할 수 있다.", - "[6영02-05]간단한 그림이나 도표의 세부 정보에 대해 묻거나 답할 수 있다.", - "[6영02-06]자신의 경험이나 계획에 대해 간단히 묻거나 답할 수 있다.", - "[6영02-07]일상생활 속의 친숙한 주제에 관해 간단히 묻거나 답할 수 있다.", - "[6영03-01]쉽고 간단한 문장을 강세, 리듬, 억양에 맞게 소리 내어 읽을 수 있다.", - "[6영03-02]그림이나 도표에 대한 쉽고 짧은 글을 읽고 세부 정보를 파악할 수 있다.", - "[6영03-03]일상생활 속의 친숙한 주제에 관한 쉽고 짧은 글을 읽고 세부 정보를 파악할 수 있다.", - "[6영03-04]쉽고 짧은 글을 읽고 줄거리나 목적 등 중심 내용을 파악할 수 있다.", - "[6영04-01]소리와 철자의 관계를 바탕으로 쉽고 간단한 낱말이나 어구를 듣고 쓸 수 있다.", - "[6영04-02]알파벳 대소문자와 문장부호를 문장에서 바르게 사용할 수 있다.", - "[6영04-03]구두로 익힌 문장을 쓸 수 있다.", - "[6영04-04]실물이나 그림을 보고 한두 문장으로 표현할 수 있다.", - "[6영04-05]예시문을 참고하여 간단한 초대, 감사, 축하 등의 글을 쓸 수 있다.", - "[6영01-01]두세 개의 연속된 지시나 설명을 듣고 이해할 수 있다.", - "[6영01-02]일상생활 속의 친숙한 주제에 관한 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-03]그림이나 도표에 대한 쉽고 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-04]대상을 비교하는 쉽고 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-05]쉽고 간단한 말이나 대화를 듣고 줄거리를 파악할 수 있다.", - "[6영01-06]쉽고 간단한 말이나 대화를 듣고 목적을 파악할 수 있다.", - "[6영01-07]쉽고 간단한 말이나 대화를 듣고 일의 순서를 파악할 수 있다.", - "[6영02-01]그림, 실물, 동작에 관해 한두 문장으로 표현할 수 있다.", - "[6영02-02]주변 사람에 관해 쉽고 간단한 문장으로 소개할 수 있다.", - "[6영02-03]주변 사람과 사물에 관해 쉽고 간단한 문장으로 묘사할 수 있다.", - "[6영02-04]주변 위치나 장소에 관해 쉽고 간단한 문장으로 설명할 수 있다.", - "[6영02-05]간단한 그림이나 도표의 세부 정보에 대해 묻거나 답할 수 있다.", - "[6영02-06]자신의 경험이나 계획에 대해 간단히 묻거나 답할 수 있다.", - "[6영02-07]일상생활 속의 친숙한 주제에 관해 간단히 묻거나 답할 수 있다.", - "[6영03-01]쉽고 간단한 문장을 강세, 리듬, 억양에 맞게 소리 내어 읽을 수 있다.", - "[6영03-02]그림이나 도표에 대한 쉽고 짧은 글을 읽고 세부 정보를 파악할 수 있다.", - "[6영03-03]일상생활 속의 친숙한 주제에 관한 쉽고 짧은 글을 읽고 세부 정보를 파악할 수 있다.", - "[6영03-04]쉽고 짧은 글을 읽고 줄거리나 목적 등 중심 내용을 파악할 수 있다.", - 
"[6영04-01]소리와 철자의 관계를 바탕으로 쉽고 간단한 낱말이나 어구를 듣고 쓸 수 있다.", - "[6영04-02]알파벳 대소문자와 문장부호를 문장에서 바르게 사용할 수 있다.", - "[6영04-03]구두로 익힌 문장을 쓸 수 있다.", - "[6영04-04]실물이나 그림을 보고 한두 문장으로 표현할 수 있다.", - "[6영04-05]예시문을 참고하여 간단한 초대, 감사, 축하 등의 글을 쓸 수 있다.", - "[6영01-01]두세 개의 연속된 지시나 설명을 듣고 이해할 수 있다.", - "[6영01-02]일상생활 속의 친숙한 주제에 관한 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-03]그림이나 도표에 대한 쉽고 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-04]대상을 비교하는 쉽고 간단한 말이나 대화를 듣고 세부 정보를 파악할 수 있다.", - "[6영01-05]쉽고 간단한 말이나 대화를 듣고 줄거리를 파악할 수 있다.", - "[6영01-06]쉽고 간단한 말이나 대화를 듣고 목적을 파악할 수 있다.", - "[6영01-07]쉽고 간단한 말이나 대화를 듣고 일의 순서를 파악할 수 있다.", - "[6영02-01]그림, 실물, 동작에 관해 한두 문장으로 표현할 수 있다.", - "[6영02-02]주변 사람에 관해 쉽고 간단한 문장으로 소개할 수 있다.", - "[6영02-03]주변 사람과 사물에 관해 쉽고 간단한 문장으로 묘사할 수 있다.", - "[6영02-04]주변 위치나 장소에 관해 쉽고 간단한 문장으로 설명할 수 있다.", - "[6영02-05]간단한 그림이나 도표의 세부 정보에 대해 묻거나 답할 수 있다.", - "[6영02-06]자신의 경험이나 계획에 대해 간단히 묻거나 답할 수 있다.", - "[6영02-07]일상생활 속의 친숙한 주제에 관해 간단히 묻거나 답할 수 있다.", - "[6영03-01]쉽고 간단한 문장을 강세, 리듬, 억양에 맞게 소리 내어 읽을 수 있다.", - "[6영03-02]그림이나 도표에 대한 쉽고 짧은 글을 읽고 세부 정보를 파악할 수 있다.", - "[6영03-03]일상생활 속의 친숙한 주제에 관한 쉽고 짧은 글을 읽고 세부 정보를 파악할 수 있다.", - "[6영03-04]쉽고 짧은 글을 읽고 줄거리나 목적 등 중심 내용을 파악할 수 있다.", - "[6영04-01]소리와 철자의 관계를 바탕으로 쉽고 간단한 낱말이나 어구를 듣고 쓸 수 있다.", - "[6영04-02]알파벳 대소문자와 문장부호를 문장에서 바르게 사용할 수 있다.", - "[6영04-03]구두로 익힌 문장을 쓸 수 있다.", - "[6영04-04]실물이나 그림을 보고 한두 문장으로 표현할 수 있다.", - "[6영04-05]예시문을 참고하여 간단한 초대, 감사, 축하 등의 글을 쓸 수 있다." - ], - "음악": [ - "[6음01-01]악곡의 특징을 이해하며 노래 부르거나 악기로 연주한다.", - "[6음01-02]악곡에 어울리는 신체표현을 한다.", - "[6음01-03]제재곡의 노랫말을 바꾸거나 노랫말에 맞는 말붙임새로 만든다.", - "[6음01-04]제재곡의 일부가락을 바꾸어 표현한다.", - "[6음01-05]이야기의 장면이나 상황을 음악으로 표현한다.", - "[6음01-06]바른 자세와 호흡으로 노래 부르거나 바른 자세와 주법으로 악기를 연주한다.", - "[6음02-01]5∼6학년 수준의 음악 요소와 개념을 구별하여 표현한다.", - "[6음02-02]다양한 문화권의 음악을 듣고 음악의 특징에 대해 발표한다.", - "[6음03-01]음악을 활용하여 가정, 학교, 사회 등의 행사에 참여하고 느낌을 발표한다.", - "[6음03-02]음악이 심신 건강에 미치는 영향에 대해 발표한다.", - "[6음03-03]우리 지역에 전승되어 오는 음악 문화유산을 찾아 발표한다.", - "[6음01-01]악곡의 특징을 이해하며 노래 부르거나 악기로 연주한다.", - "[6음01-02]악곡에 어울리는 신체표현을 한다.", - "[6음01-03]제재곡의 노랫말을 바꾸거나 노랫말에 맞는 말붙임새로 만든다.", - "[6음01-04]제재곡의 일부가락을 바꾸어 표현한다.", - "[6음01-05]이야기의 장면이나 상황을 음악으로 표현한다.", - "[6음01-06]바른 자세와 호흡으로 노래 부르거나 바른 자세와 주법으로 악기를 연주한다.", - "[6음02-01]5∼6학년 수준의 음악 요소와 개념을 구별하여 표현한다.", - "[6음02-02]다양한 문화권의 음악을 듣고 음악의 특징에 대해 발표한다.", - "[6음03-01]음악을 활용하여 가정, 학교, 사회 등의 행사에 참여하고 느낌을 발표한다.", - "[6음03-02]음악이 심신 건강에 미치는 영향에 대해 발표한다.", - "[6음03-03]우리 지역에 전승되어 오는 음악 문화유산을 찾아 발표한다.", - "[6음01-01]악곡의 특징을 이해하며 노래 부르거나 악기로 연주한다.", - "[6음01-02]악곡에 어울리는 신체표현을 한다.", - "[6음01-03]제재곡의 노랫말을 바꾸거나 노랫말에 맞는 말붙임새로 만든다.", - "[6음01-04]제재곡의 일부가락을 바꾸어 표현한다.", - "[6음01-05]이야기의 장면이나 상황을 음악으로 표현한다.", - "[6음01-06]바른 자세와 호흡으로 노래 부르거나 바른 자세와 주법으로 악기를 연주한다.", - "[6음02-01]5∼6학년 수준의 음악 요소와 개념을 구별하여 표현한다.", - "[6음02-02]다양한 문화권의 음악을 듣고 음악의 특징에 대해 발표한다.", - "[6음03-01]음악을 활용하여 가정, 학교, 사회 등의 행사에 참여하고 느낌을 발표한다.", - "[6음03-02]음악이 심신 건강에 미치는 영향에 대해 발표한다.", - "[6음03-03]우리 지역에 전승되어 오는 음악 문화유산을 찾아 발표한다.", - "[6음01-01]악곡의 특징을 이해하며 노래 부르거나 악기로 연주한다.", - "[6음01-02]악곡에 어울리는 신체표현을 한다.", - "[6음01-03]제재곡의 노랫말을 바꾸거나 노랫말에 맞는 말붙임새로 만든다.", - "[6음01-04]제재곡의 일부가락을 바꾸어 표현한다.", - "[6음01-05]이야기의 장면이나 상황을 음악으로 표현한다.", - "[6음01-06]바른 자세와 호흡으로 노래 부르거나 바른 자세와 주법으로 악기를 연주한다.", - "[6음02-01]5∼6학년 수준의 음악 요소와 개념을 구별하여 표현한다.", - "[6음02-02]다양한 문화권의 음악을 듣고 음악의 특징에 대해 발표한다.", - "[6음03-01]음악을 활용하여 가정, 학교, 사회 등의 행사에 참여하고 느낌을 발표한다.", - "[6음03-02]음악이 심신 건강에 미치는 영향에 대해 발표한다.", - "[6음03-03]우리 지역에 전승되어 오는 음악 문화유산을 찾아 발표한다." 
- ], - "미술": [ - "[6미01-01]자신의 특징을 다양한 방법으로 탐색할 수 있다.", - "[6미01-02]대상이나 현상에서 시각적 특징을 발견할 수 있다.", - "[6미01-03]이미지가 나타내는 의미를 찾을 수 있다.", - "[6미01-04]이미지를 활용하여 자신의 느낌과 생각을 전달할 수 있다.", - "[6미01-05]미술 활동에 타 교과의 내용, 방법 등을 활용할 수 있다.", - "[6미02-01]표현 주제를 잘 나타낼 수 있는 다양한 소재를 탐색할 수 있다.", - "[6미02-02]다양한 발상 방법으로 아이디어를 발전시킬 수 있다.", - "[6미02-03]다양한 자료를 활용하여 아이디어와 관련된 표현 내용을 구체화할 수 있다.", - "[6미02-04]조형 원리(비례, 율동, 강조, 반복, 통일, 균형, 대비, 대칭, 점증・점이, 조화, 변화, 동세 등)의 특징을 탐색하고, 표현 의도에 적합하게 활용할 수 있다.", - "[6미02-05]다양한 표현 방법의 특징과 과정을 탐색하여 활용할 수 있다.", - "[6미02-06]작품 제작의 전체 과정에서 느낀 점, 알게 된 점 등을 서로 이야기할 수 있다.", - "[6미03-01]우리나라 전통 미술의 특징을 현대 미술과 비교할 수 있다.", - "[6미03-02]미술 작품이 시대적 배경과 관련된다는 것을 이해할 수 있다.", - "[6미03-03]미술 작품의 내용(소재, 주제 등)과 형식(재료와 용구, 표현 방법, 조형 요소와 원리 등)을 미술 용어를 활용하여 설명할 수 있다.", - "[6미03-04]다양한 감상 방법(비교 또는 단독 감상, 내용 또는 형식 감상 등)을 알고 활용할 수 있다.", - "[6미01-01]자신의 특징을 다양한 방법으로 탐색할 수 있다.", - "[6미01-02]대상이나 현상에서 시각적 특징을 발견할 수 있다.", - "[6미01-03]이미지가 나타내는 의미를 찾을 수 있다.", - "[6미01-04]이미지를 활용하여 자신의 느낌과 생각을 전달할 수 있다.", - "[6미01-05]미술 활동에 타 교과의 내용, 방법 등을 활용할 수 있다.", - "[6미02-01]표현 주제를 잘 나타낼 수 있는 다양한 소재를 탐색할 수 있다.", - "[6미02-02]다양한 발상 방법으로 아이디어를 발전시킬 수 있다.", - "[6미02-03]다양한 자료를 활용하여 아이디어와 관련된 표현 내용을 구체화할 수 있다.", - "[6미02-04]조형 원리(비례, 율동, 강조, 반복, 통일, 균형, 대비, 대칭, 점증・점이, 조화, 변화, 동세 등)의 특징을 탐색하고, 표현 의도에 적합하게 활용할 수 있다.", - "[6미02-05]다양한 표현 방법의 특징과 과정을 탐색하여 활용할 수 있다.", - "[6미02-06]작품 제작의 전체 과정에서 느낀 점, 알게 된 점 등을 서로 이야기할 수 있다.", - "[6미03-01]우리나라 전통 미술의 특징을 현대 미술과 비교할 수 있다.", - "[6미03-02]미술 작품이 시대적 배경과 관련된다는 것을 이해할 수 있다.", - "[6미03-03]미술 작품의 내용(소재, 주제 등)과 형식(재료와 용구, 표현 방법, 조형 요소와 원리 등)을 미술 용어를 활용하여 설명할 수 있다.", - "[6미03-04]다양한 감상 방법(비교 또는 단독 감상, 내용 또는 형식 감상 등)을 알고 활용할 수 있다.", - "[6미01-01]자신의 특징을 다양한 방법으로 탐색할 수 있다.", - "[6미01-02]대상이나 현상에서 시각적 특징을 발견할 수 있다.", - "[6미01-03]이미지가 나타내는 의미를 찾을 수 있다.", - "[6미01-04]이미지를 활용하여 자신의 느낌과 생각을 전달할 수 있다.", - "[6미01-05]미술 활동에 타 교과의 내용, 방법 등을 활용할 수 있다.", - "[6미02-01]표현 주제를 잘 나타낼 수 있는 다양한 소재를 탐색할 수 있다.", - "[6미02-02]다양한 발상 방법으로 아이디어를 발전시킬 수 있다.", - "[6미02-03]다양한 자료를 활용하여 아이디어와 관련된 표현 내용을 구체화할 수 있다.", - "[6미02-04]조형 원리(비례, 율동, 강조, 반복, 통일, 균형, 대비, 대칭, 점증・점이, 조화, 변화, 동세 등)의 특징을 탐색하고, 표현 의도에 적합하게 활용할 수 있다.", - "[6미02-05]다양한 표현 방법의 특징과 과정을 탐색하여 활용할 수 있다.", - "[6미02-06]작품 제작의 전체 과정에서 느낀 점, 알게 된 점 등을 서로 이야기할 수 있다.", - "[6미03-01]우리나라 전통 미술의 특징을 현대 미술과 비교할 수 있다.", - "[6미03-02]미술 작품이 시대적 배경과 관련된다는 것을 이해할 수 있다.", - "[6미03-03]미술 작품의 내용(소재, 주제 등)과 형식(재료와 용구, 표현 방법, 조형 요소와 원리 등)을 미술 용어를 활용하여 설명할 수 있다.", - "[6미03-04]다양한 감상 방법(비교 또는 단독 감상, 내용 또는 형식 감상 등)을 알고 활용할 수 있다.", - "[6미01-01]자신의 특징을 다양한 방법으로 탐색할 수 있다.", - "[6미01-02]대상이나 현상에서 시각적 특징을 발견할 수 있다.", - "[6미01-03]이미지가 나타내는 의미를 찾을 수 있다.", - "[6미01-04]이미지를 활용하여 자신의 느낌과 생각을 전달할 수 있다.", - "[6미01-05]미술 활동에 타 교과의 내용, 방법 등을 활용할 수 있다.", - "[6미02-01]표현 주제를 잘 나타낼 수 있는 다양한 소재를 탐색할 수 있다.", - "[6미02-02]다양한 발상 방법으로 아이디어를 발전시킬 수 있다.", - "[6미02-03]다양한 자료를 활용하여 아이디어와 관련된 표현 내용을 구체화할 수 있다.", - "[6미02-04]조형 원리(비례, 율동, 강조, 반복, 통일, 균형, 대비, 대칭, 점증・점이, 조화, 변화, 동세 등)의 특징을 탐색하고, 표현 의도에 적합하게 활용할 수 있다.", - "[6미02-05]다양한 표현 방법의 특징과 과정을 탐색하여 활용할 수 있다.", - "[6미02-06]작품 제작의 전체 과정에서 느낀 점, 알게 된 점 등을 서로 이야기할 수 있다.", - "[6미03-01]우리나라 전통 미술의 특징을 현대 미술과 비교할 수 있다.", - "[6미03-02]미술 작품이 시대적 배경과 관련된다는 것을 이해할 수 있다.", - "[6미03-03]미술 작품의 내용(소재, 주제 등)과 형식(재료와 용구, 표현 방법, 조형 요소와 원리 등)을 미술 용어를 활용하여 설명할 수 있다.", - "[6미03-04]다양한 감상 방법(비교 또는 단독 감상, 내용 또는 형식 감상 등)을 알고 활용할 수 있다." 
- ], - "체육": [ - "[6체01-01]성장에 따른 신체적 변화를 수용하고 건강한 성장과 발달을 저해하는 생활 양식(흡연, 음주, 약물 오남용 등)의 위험성을 인식한다.", - "[6체01-02]건강을 유지하기 위한 체력 운동을 선택하고 자신의 수준에 맞게 운동 계획을 세워 실천한다.", - "[6체01-03]신체활동 참여를 통해 부족했던 체력의 향상을 체험함으로써 타인과 다른 자신의 신체적 기량과 특성을 긍정적으로 수용 한다.", - "[6체01-04]건강한 생활을 위한 신체적 여가 활동 계획을 수립하여 실천한다.", - "[6체01-05]운동 능력을 향상시키기 위한 체력 운동을 선택하고 자신의 수준에 맞는 운동 계획을 세워 실천한다.", - "[6체01-06]건강 증진을 위해 계획에 따라 운동 및 여가 활동에 열정을 갖고 꾸준히 참여한다.", - "[6체02-01]자신의 기록을 향상시키려는 거리 도전의 개념과 특성을 탐색한다.", - "[6체02-02]거리 도전과 관련된 여러 유형의 활동에 참여해 자신의 기록을 향상할 수 있는 기본자세와 동작을 이해하고 도전 상황에 적용한다.", - "[6체02-03]거리 도전의 결과를 시기별로 측정하여 도전 과정의 장단점을 분석하고 기록을 향상할 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[6체02-04]상황과 환경에 관계없이 해낼 수 있는 자신감을 갖고 적극적으로 거리 기록 향상에 도전한다.", - "[6체02-05]새로운 기록을 수립하거나 상대방의 신체적 기량에 앞서기 위해 수행하는 표적/투기 도전의 개념과 특성을 탐색한다.", - "[6체02-06]표적/투기 도전과 관련된 여러 유형의 활동에 참여해 자신의 성공 수행을 높일 수 있는 기본자세와 동작을 이해하고 도전 상황에 적용한다.", - "[6체02-07]표적/투기 도전의 결과를 지속적으로 측정 및 점검하여 그 과정의 장단점을 분석하고 보다 좋은 결과를 얻을 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[6체02-08]표적/투기 도전의 참여 과정과 결과를 반성하고 어떠한 상황에서도 상대방을 존중하고 게임에 최선을 다하는 겸손한 자세로 도전한다.", - "[6체03-01]필드형 게임을 체험함으로써 동일한 공간에서 공격과 수비를 번갈아 하며 상대의 빈 공간으로 공을 보내고 정해진 구역을 돌아 점수를 얻는 필드형 경쟁의 개념과 특성을 탐색한다.", - "[6체03-02]필드형 게임의 기본 기능을 탐색하고 게임 상황에 적용한다.", - "[6체03-03]필드형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[6체03-04]필드형 경쟁 활동에 참여하면서 책임의 중요성을 인식하고 이를 바탕으로 맡은 바 역할에 최선을 다하며 게임을 수행한다.", - "[6체03-05]네트형 게임을 종합적으로 체험함으로써 네트 너머에 있는 상대의 빈 공간에 공을 보내 받아 넘기지 못하게 하여 득점하는 네트형 경쟁의 개념과 특성을 탐색한다.", - "[6체03-06]네트형 게임의 기본 기능을 탐색하고 게임 상황에 맞게 적용한다.", - "[6체03-07]네트형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[6체03-08]네트형 경쟁 활동에 참여하면서 다른 사람들의 입장을 이해하고 공감하며 게임을 수행한다.", - "[6체04-01]세계 여러 나라의 전통적인 민속 표현의 종류와 특징을 탐색한다.", - "[6체04-02]세계 여러 나라 민속 표현의 고유한 특징을 효과적으로 표현하는 데 적합한 기본 동작을 적용한다.", - "[6체04-03]민속 표현 활동에 포함된 다양한 표현 방법(기본 움직임, 대형, 리듬 등)을 바탕으로 작품을 구성하여 발표하고 이를 감상한다.", - "[6체04-04]세계 여러 민족의 문화적 특성을 이해하고 존중하는 개방적인 마음으로 참여한다.", - "[6체04-05]주제 표현을 구성하는 표현 요소(신체 인식, 공간 인식, 노력, 관계 등)와 창작 과정(발상, 계획, 구성, 수행 등)의 특징을 탐색한다.", - "[6체04-06]정해진 주제나 소재의 특징적인 면을 살려 신체활동으로 표현하는 데 적합한 기본 동작을 다양한 상황에 적용한다.", - "[6체04-07]주제 표현 활동을 하는 데 필요한 다양한 표현 방법을 바탕으로 개인 또는 모둠별로 작품을 창의적으로 구성하여 발표하고 이를 감상한다.", - "[6체04-08]주제와 관련된 다양한 표현 방식을 이해하고 자신의 느낌과 생각에 따라 창의적인 방법으로 표현한다.", - "[6체05-01]운동 시 발생할 수 있는 응급 상황(출혈, 염좌, 골절 등)의 종류와 특징을 조사하고 상황에 따른 대처법을 탐색한다.", - "[6체05-02]빙상․설상에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[6체05-03]일상생활이나 운동 중 발생할 수 있는 위험 상황에서 약속된 절차를 떠올리며 침착하게 행동한다.", - "[6체05-04]운동 시설 이용 시 발생할 수 있는 안전사고의 종류와 원인을 탐색한다.", - "[6체05-05]야외 활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[6체05-06]신체 부상이 우려되는 위험한 상황이나 재난 발생 시 피해 상황을 신속하게 판단하여 안전하게 대처한다.", - "[6체01-01]성장에 따른 신체적 변화를 수용하고 건강한 성장과 발달을 저해하는 생활 양식(흡연, 음주, 약물 오남용 등)의 위험성을 인식한다.", - "[6체01-02]건강을 유지하기 위한 체력 운동을 선택하고 자신의 수준에 맞게 운동 계획을 세워 실천한다.", - "[6체01-03]신체활동 참여를 통해 부족했던 체력의 향상을 체험함으로써 타인과 다른 자신의 신체적 기량과 특성을 긍정적으로 수용 한다.", - "[6체01-04]건강한 생활을 위한 신체적 여가 활동 계획을 수립하여 실천한다.", - "[6체01-05]운동 능력을 향상시키기 위한 체력 운동을 선택하고 자신의 수준에 맞는 운동 계획을 세워 실천한다.", - "[6체01-06]건강 증진을 위해 계획에 따라 운동 및 여가 활동에 열정을 갖고 꾸준히 참여한다.", - "[6체02-01]자신의 기록을 향상시키려는 거리 도전의 개념과 특성을 탐색한다.", - "[6체02-02]거리 도전과 관련된 여러 유형의 활동에 참여해 자신의 기록을 향상할 수 있는 기본자세와 동작을 이해하고 도전 상황에 적용한다.", - "[6체02-03]거리 도전의 결과를 시기별로 측정하여 도전 과정의 장단점을 분석하고 기록을 향상할 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[6체02-04]상황과 환경에 관계없이 해낼 수 있는 자신감을 갖고 적극적으로 거리 기록 향상에 도전한다.", - "[6체02-05]새로운 기록을 수립하거나 상대방의 신체적 기량에 앞서기 위해 수행하는 표적/투기 도전의 개념과 특성을 탐색한다.", - "[6체02-06]표적/투기 도전과 관련된 여러 유형의 활동에 참여해 자신의 성공 수행을 높일 수 있는 기본자세와 동작을 이해하고 도전 상황에 적용한다.", - "[6체02-07]표적/투기 도전의 결과를 
지속적으로 측정 및 점검하여 그 과정의 장단점을 분석하고 보다 좋은 결과를 얻을 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[6체02-08]표적/투기 도전의 참여 과정과 결과를 반성하고 어떠한 상황에서도 상대방을 존중하고 게임에 최선을 다하는 겸손한 자세로 도전한다.", - "[6체03-01]필드형 게임을 체험함으로써 동일한 공간에서 공격과 수비를 번갈아 하며 상대의 빈 공간으로 공을 보내고 정해진 구역을 돌아 점수를 얻는 필드형 경쟁의 개념과 특성을 탐색한다.", - "[6체03-02]필드형 게임의 기본 기능을 탐색하고 게임 상황에 적용한다.", - "[6체03-03]필드형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[6체03-04]필드형 경쟁 활동에 참여하면서 책임의 중요성을 인식하고 이를 바탕으로 맡은 바 역할에 최선을 다하며 게임을 수행한다.", - "[6체03-05]네트형 게임을 종합적으로 체험함으로써 네트 너머에 있는 상대의 빈 공간에 공을 보내 받아 넘기지 못하게 하여 득점하는 네트형 경쟁의 개념과 특성을 탐색한다.", - "[6체03-06]네트형 게임의 기본 기능을 탐색하고 게임 상황에 맞게 적용한다.", - "[6체03-07]네트형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[6체03-08]네트형 경쟁 활동에 참여하면서 다른 사람들의 입장을 이해하고 공감하며 게임을 수행한다.", - "[6체04-01]세계 여러 나라의 전통적인 민속 표현의 종류와 특징을 탐색한다.", - "[6체04-02]세계 여러 나라 민속 표현의 고유한 특징을 효과적으로 표현하는 데 적합한 기본 동작을 적용한다.", - "[6체04-03]민속 표현 활동에 포함된 다양한 표현 방법(기본 움직임, 대형, 리듬 등)을 바탕으로 작품을 구성하여 발표하고 이를 감상한다.", - "[6체04-04]세계 여러 민족의 문화적 특성을 이해하고 존중하는 개방적인 마음으로 참여한다.", - "[6체04-05]주제 표현을 구성하는 표현 요소(신체 인식, 공간 인식, 노력, 관계 등)와 창작 과정(발상, 계획, 구성, 수행 등)의 특징을 탐색한다.", - "[6체04-06]정해진 주제나 소재의 특징적인 면을 살려 신체활동으로 표현하는 데 적합한 기본 동작을 다양한 상황에 적용한다.", - "[6체04-07]주제 표현 활동을 하는 데 필요한 다양한 표현 방법을 바탕으로 개인 또는 모둠별로 작품을 창의적으로 구성하여 발표하고 이를 감상한다.", - "[6체04-08]주제와 관련된 다양한 표현 방식을 이해하고 자신의 느낌과 생각에 따라 창의적인 방법으로 표현한다.", - "[6체05-01]운동 시 발생할 수 있는 응급 상황(출혈, 염좌, 골절 등)의 종류와 특징을 조사하고 상황에 따른 대처법을 탐색한다.", - "[6체05-02]빙상․설상에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[6체05-03]일상생활이나 운동 중 발생할 수 있는 위험 상황에서 약속된 절차를 떠올리며 침착하게 행동한다.", - "[6체05-04]운동 시설 이용 시 발생할 수 있는 안전사고의 종류와 원인을 탐색한다.", - "[6체05-05]야외 활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[6체05-06]신체 부상이 우려되는 위험한 상황이나 재난 발생 시 피해 상황을 신속하게 판단하여 안전하게 대처한다.", - "[6체01-01]성장에 따른 신체적 변화를 수용하고 건강한 성장과 발달을 저해하는 생활 양식(흡연, 음주, 약물 오남용 등)의 위험성을 인식한다.", - "[6체01-02]건강을 유지하기 위한 체력 운동을 선택하고 자신의 수준에 맞게 운동 계획을 세워 실천한다.", - "[6체01-03]신체활동 참여를 통해 부족했던 체력의 향상을 체험함으로써 타인과 다른 자신의 신체적 기량과 특성을 긍정적으로 수용 한다.", - "[6체01-04]건강한 생활을 위한 신체적 여가 활동 계획을 수립하여 실천한다.", - "[6체01-05]운동 능력을 향상시키기 위한 체력 운동을 선택하고 자신의 수준에 맞는 운동 계획을 세워 실천한다.", - "[6체01-06]건강 증진을 위해 계획에 따라 운동 및 여가 활동에 열정을 갖고 꾸준히 참여한다.", - "[6체02-01]자신의 기록을 향상시키려는 거리 도전의 개념과 특성을 탐색한다.", - "[6체02-02]거리 도전과 관련된 여러 유형의 활동에 참여해 자신의 기록을 향상할 수 있는 기본자세와 동작을 이해하고 도전 상황에 적용한다.", - "[6체02-03]거리 도전의 결과를 시기별로 측정하여 도전 과정의 장단점을 분석하고 기록을 향상할 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[6체02-04]상황과 환경에 관계없이 해낼 수 있는 자신감을 갖고 적극적으로 거리 기록 향상에 도전한다.", - "[6체02-05]새로운 기록을 수립하거나 상대방의 신체적 기량에 앞서기 위해 수행하는 표적/투기 도전의 개념과 특성을 탐색한다.", - "[6체02-06]표적/투기 도전과 관련된 여러 유형의 활동에 참여해 자신의 성공 수행을 높일 수 있는 기본자세와 동작을 이해하고 도전 상황에 적용한다.", - "[6체02-07]표적/투기 도전의 결과를 지속적으로 측정 및 점검하여 그 과정의 장단점을 분석하고 보다 좋은 결과를 얻을 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[6체02-08]표적/투기 도전의 참여 과정과 결과를 반성하고 어떠한 상황에서도 상대방을 존중하고 게임에 최선을 다하는 겸손한 자세로 도전한다.", - "[6체03-01]필드형 게임을 체험함으로써 동일한 공간에서 공격과 수비를 번갈아 하며 상대의 빈 공간으로 공을 보내고 정해진 구역을 돌아 점수를 얻는 필드형 경쟁의 개념과 특성을 탐색한다.", - "[6체03-02]필드형 게임의 기본 기능을 탐색하고 게임 상황에 적용한다.", - "[6체03-03]필드형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[6체03-04]필드형 경쟁 활동에 참여하면서 책임의 중요성을 인식하고 이를 바탕으로 맡은 바 역할에 최선을 다하며 게임을 수행한다.", - "[6체03-05]네트형 게임을 종합적으로 체험함으로써 네트 너머에 있는 상대의 빈 공간에 공을 보내 받아 넘기지 못하게 하여 득점하는 네트형 경쟁의 개념과 특성을 탐색한다.", - "[6체03-06]네트형 게임의 기본 기능을 탐색하고 게임 상황에 맞게 적용한다.", - "[6체03-07]네트형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[6체03-08]네트형 경쟁 활동에 참여하면서 다른 사람들의 입장을 이해하고 공감하며 게임을 수행한다.", - "[6체04-01]세계 여러 나라의 전통적인 민속 표현의 종류와 특징을 탐색한다.", - "[6체04-02]세계 여러 나라 민속 표현의 고유한 특징을 효과적으로 표현하는 데 적합한 기본 동작을 적용한다.", - "[6체04-03]민속 표현 활동에 포함된 다양한 표현 방법(기본 
움직임, 대형, 리듬 등)을 바탕으로 작품을 구성하여 발표하고 이를 감상한다.", - "[6체04-04]세계 여러 민족의 문화적 특성을 이해하고 존중하는 개방적인 마음으로 참여한다.", - "[6체04-05]주제 표현을 구성하는 표현 요소(신체 인식, 공간 인식, 노력, 관계 등)와 창작 과정(발상, 계획, 구성, 수행 등)의 특징을 탐색한다.", - "[6체04-06]정해진 주제나 소재의 특징적인 면을 살려 신체활동으로 표현하는 데 적합한 기본 동작을 다양한 상황에 적용한다.", - "[6체04-07]주제 표현 활동을 하는 데 필요한 다양한 표현 방법을 바탕으로 개인 또는 모둠별로 작품을 창의적으로 구성하여 발표하고 이를 감상한다.", - "[6체04-08]주제와 관련된 다양한 표현 방식을 이해하고 자신의 느낌과 생각에 따라 창의적인 방법으로 표현한다.", - "[6체05-01]운동 시 발생할 수 있는 응급 상황(출혈, 염좌, 골절 등)의 종류와 특징을 조사하고 상황에 따른 대처법을 탐색한다.", - "[6체05-02]빙상․설상에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[6체05-03]일상생활이나 운동 중 발생할 수 있는 위험 상황에서 약속된 절차를 떠올리며 침착하게 행동한다.", - "[6체05-04]운동 시설 이용 시 발생할 수 있는 안전사고의 종류와 원인을 탐색한다.", - "[6체05-05]야외 활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[6체05-06]신체 부상이 우려되는 위험한 상황이나 재난 발생 시 피해 상황을 신속하게 판단하여 안전하게 대처한다.", - "[6체01-01]성장에 따른 신체적 변화를 수용하고 건강한 성장과 발달을 저해하는 생활 양식(흡연, 음주, 약물 오남용 등)의 위험성을 인식한다.", - "[6체01-02]건강을 유지하기 위한 체력 운동을 선택하고 자신의 수준에 맞게 운동 계획을 세워 실천한다.", - "[6체01-03]신체활동 참여를 통해 부족했던 체력의 향상을 체험함으로써 타인과 다른 자신의 신체적 기량과 특성을 긍정적으로 수용 한다.", - "[6체01-04]건강한 생활을 위한 신체적 여가 활동 계획을 수립하여 실천한다.", - "[6체01-05]운동 능력을 향상시키기 위한 체력 운동을 선택하고 자신의 수준에 맞는 운동 계획을 세워 실천한다.", - "[6체01-06]건강 증진을 위해 계획에 따라 운동 및 여가 활동에 열정을 갖고 꾸준히 참여한다.", - "[6체02-01]자신의 기록을 향상시키려는 거리 도전의 개념과 특성을 탐색한다.", - "[6체02-02]거리 도전과 관련된 여러 유형의 활동에 참여해 자신의 기록을 향상할 수 있는 기본자세와 동작을 이해하고 도전 상황에 적용한다.", - "[6체02-03]거리 도전의 결과를 시기별로 측정하여 도전 과정의 장단점을 분석하고 기록을 향상할 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[6체02-04]상황과 환경에 관계없이 해낼 수 있는 자신감을 갖고 적극적으로 거리 기록 향상에 도전한다.", - "[6체02-05]새로운 기록을 수립하거나 상대방의 신체적 기량에 앞서기 위해 수행하는 표적/투기 도전의 개념과 특성을 탐색한다.", - "[6체02-06]표적/투기 도전과 관련된 여러 유형의 활동에 참여해 자신의 성공 수행을 높일 수 있는 기본자세와 동작을 이해하고 도전 상황에 적용한다.", - "[6체02-07]표적/투기 도전의 결과를 지속적으로 측정 및 점검하여 그 과정의 장단점을 분석하고 보다 좋은 결과를 얻을 수 있는 방법을 지속적으로 수행하고 평가한다.", - "[6체02-08]표적/투기 도전의 참여 과정과 결과를 반성하고 어떠한 상황에서도 상대방을 존중하고 게임에 최선을 다하는 겸손한 자세로 도전한다.", - "[6체03-01]필드형 게임을 체험함으로써 동일한 공간에서 공격과 수비를 번갈아 하며 상대의 빈 공간으로 공을 보내고 정해진 구역을 돌아 점수를 얻는 필드형 경쟁의 개념과 특성을 탐색한다.", - "[6체03-02]필드형 게임의 기본 기능을 탐색하고 게임 상황에 적용한다.", - "[6체03-03]필드형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[6체03-04]필드형 경쟁 활동에 참여하면서 책임의 중요성을 인식하고 이를 바탕으로 맡은 바 역할에 최선을 다하며 게임을 수행한다.", - "[6체03-05]네트형 게임을 종합적으로 체험함으로써 네트 너머에 있는 상대의 빈 공간에 공을 보내 받아 넘기지 못하게 하여 득점하는 네트형 경쟁의 개념과 특성을 탐색한다.", - "[6체03-06]네트형 게임의 기본 기능을 탐색하고 게임 상황에 맞게 적용한다.", - "[6체03-07]네트형 게임 방법에 대한 이해를 바탕으로 게임을 유리하게 전개할 수 있는 전략을 탐색하고 적용한다.", - "[6체03-08]네트형 경쟁 활동에 참여하면서 다른 사람들의 입장을 이해하고 공감하며 게임을 수행한다.", - "[6체04-01]세계 여러 나라의 전통적인 민속 표현의 종류와 특징을 탐색한다.", - "[6체04-02]세계 여러 나라 민속 표현의 고유한 특징을 효과적으로 표현하는 데 적합한 기본 동작을 적용한다.", - "[6체04-03]민속 표현 활동에 포함된 다양한 표현 방법(기본 움직임, 대형, 리듬 등)을 바탕으로 작품을 구성하여 발표하고 이를 감상한다.", - "[6체04-04]세계 여러 민족의 문화적 특성을 이해하고 존중하는 개방적인 마음으로 참여한다.", - "[6체04-05]주제 표현을 구성하는 표현 요소(신체 인식, 공간 인식, 노력, 관계 등)와 창작 과정(발상, 계획, 구성, 수행 등)의 특징을 탐색한다.", - "[6체04-06]정해진 주제나 소재의 특징적인 면을 살려 신체활동으로 표현하는 데 적합한 기본 동작을 다양한 상황에 적용한다.", - "[6체04-07]주제 표현 활동을 하는 데 필요한 다양한 표현 방법을 바탕으로 개인 또는 모둠별로 작품을 창의적으로 구성하여 발표하고 이를 감상한다.", - "[6체04-08]주제와 관련된 다양한 표현 방식을 이해하고 자신의 느낌과 생각에 따라 창의적인 방법으로 표현한다.", - "[6체05-01]운동 시 발생할 수 있는 응급 상황(출혈, 염좌, 골절 등)의 종류와 특징을 조사하고 상황에 따른 대처법을 탐색한다.", - "[6체05-02]빙상․설상에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[6체05-03]일상생활이나 운동 중 발생할 수 있는 위험 상황에서 약속된 절차를 떠올리며 침착하게 행동한다.", - "[6체05-04]운동 시설 이용 시 발생할 수 있는 안전사고의 종류와 원인을 탐색한다.", - "[6체05-05]야외 활동에서 발생하는 안전사고의 사례를 조사하고 예방 및 대처 방법을 익혀 위험 상황에 대처한다.", - "[6체05-06]신체 부상이 우려되는 위험한 상황이나 재난 발생 시 피해 상황을 신속하게 판단하여 안전하게 대처한다." 
- ], - "실과": [ - "[6실01-01]아동기의 신체적, 인지적, 정서적, 사회적 발달의 특징 및 발달의 개인차를 알아 자신을 이해하고, 건강하게 발달하기 위해 필요한 조건을 설명한다.", - "[6실01-02]아동기에 나타나는 남녀의 성적 발달 변화를 긍정적으로 이해하고 성적 발달과 관련한 자기 관리 방법을 탐색하여 실천한다.", - "[6실01-03]주변 가족의 모습을 통해 나와 가족의 관계 및 역할을 이해하고, 다양한 가족의 가정생활 공통점을 파악하여 가정생활의 중요성을 설명한다.", - "[6실01-04]건강한 가정생활을 위해 가족 구성원의 다양한 요구에 대하여 서로 간의 배려와 돌봄이 필요함을 이해한다.", - "[6실02-01]건강을 위한 균형 잡힌 식사의 중요성과 조건을 알고 자신의 식사를 평가한다.", - "[6실02-02]성장기에 필요한 간식의 중요성을 이해하고 간식을 선택하거나 만들어 먹을 수 있으며 이때 식생활 예절을 적용한다.", - "[6실02-03]옷의 기능을 이해하여 때와 장소, 상황에 맞는 옷차림을 적용한다.", - "[6실02-04]다양한 식재료의 맛을 비교․분석하여 올바른 식습관 형성에 적용한다.", - "[6실02-05]바느질의 기초를 익혀 간단한 수선에 활용한다.", - "[6실02-06]간단한 생활 소품을 창의적으로 제작하여 활용한다.", - "[6실02-07]자신의 신체 발달을 고려하여 건강하고 안전한 옷차림을 실천한다.", - "[6실02-08]생활 안전사고의 종류와 예방 방법을 알아 실생활에 적용한다.", - "[6실02-09]안전과 위생을 고려하여 식사를 선택하는 방법을 탐색하고 실생활에 적용한다.", - "[6실02-10]밥을 이용한 한 그릇 음식을 위생적이고 안전하게 준비․조리하여 평가한다.", - "[6실03-01]옷의 종류와 용도에 맞게 정리․보관하는 방법을 알고 환경과 관련지어 옷 관리의 중요성을 이해한다.", - "[6실03-02]시간 자원의 특성을 알고, 올바른 시간 관리 방법을 탐색한 후 실생활에 적용한다.", - "[6실03-03]용돈 관리의 필요성을 알고 자신의 필요와 욕구를 고려한 합리적인 소비생활 방법을 탐색하여 실생활에 적용한다.", - "[6실03-04]쾌적한 생활공간 관리의 필요성을 환경과 관련지어 이해하고 올바른 관리 방법을 계획하여 실천한다.", - "[6실03-05]가정일을 담당하고 있는 가족원들의 역할을 탐색하고, 가정생활에 미치는 영향을 이해한다.", - "[6실04-01]가꾸기와 기르기의 의미를 이해하고 동식물 자원의 중요성을 설명한다.", - "[6실04-02]생활 속 식물을 활용 목적에 따라 분류하고, 가꾸기 활동을 실행한다.", - "[6실04-03]생활 속 동물을 활용 목적에 따라 분류하고, 돌보고 기르는 과정을 실행한다.", - "[6실04-04]수송과 수송 수단의 의미를 알고, 수송 수단의 기본 요소를 설명한다.", - "[6실04-05]다양한 재료를 활용하여 수송 수단을 구상하고, 제작한다.", - "[6실04-06]자전거의 구성 요소와 안전하게 관리하는 방법을 알고 실천한다.", - "[6실04-07]소프트웨어가 적용된 사례를 찾아보고 우리 생활에 미치는 영향을 이해한다.", - "[6실04-08]절차적 사고에 의한 문제 해결의 순서를 생각하고 적용한다.", - "[6실04-09]프로그래밍 도구를 사용하여 기초적인 프로그래밍 과정을 체험한다.", - "[6실04-10]자료를 입력하고 필요한 처리를 수행한 후 결과를 출력하는 단순한 프로그램을 설계한다.", - "[6실04-11]문제를 해결하는 프로그램을 만드는 과정에서 순차, 선택, 반복 등의 구조를 이해한다.", - "[6실05-01]일과 직업의 의미와 중요성을 이해한다.", - "[6실05-02]나를 이해하고 적성, 흥미, 성격에 맞는 직업을 탐색한다.", - "[6실05-03]생활 속에 적용된 발명과 문제해결의 사례를 통해 발명의 의미와 중요성을 이해한다.", - "[6실05-04]다양한 재료를 활용하여 창의적인 제품을 구상하고 제작한다.", - "[6실05-05]사이버 중독 예방, 개인 정보 보호 및 지식 재산 보호의 의미를 알고 생활 속에서 실천한다.", - "[6실05-06]생활 속에서 로봇 활용 사례를 통해 작동 원리와 활용 분야를 이해한다.", - "[6실05-07]여러 가지 센서를 장착한 로봇을 제작한다.", - "[6실05-08]지속 가능한 미래 사회를 위한 친환경 농업의 역할과 중요성을 이해한다.", - "[6실05-09]생활 속의 농업 체험을 통해 지속 가능한 생활을 이해하고 실천 방안을 제안한다.", - "[6실01-01]아동기의 신체적, 인지적, 정서적, 사회적 발달의 특징 및 발달의 개인차를 알아 자신을 이해하고, 건강하게 발달하기 위해 필요한 조건을 설명한다.", - "[6실01-02]아동기에 나타나는 남녀의 성적 발달 변화를 긍정적으로 이해하고 성적 발달과 관련한 자기 관리 방법을 탐색하여 실천한다.", - "[6실01-03]주변 가족의 모습을 통해 나와 가족의 관계 및 역할을 이해하고, 다양한 가족의 가정생활 공통점을 파악하여 가정생활의 중요성을 설명한다.", - "[6실01-04]건강한 가정생활을 위해 가족 구성원의 다양한 요구에 대하여 서로 간의 배려와 돌봄이 필요함을 이해한다.", - "[6실02-01]건강을 위한 균형 잡힌 식사의 중요성과 조건을 알고 자신의 식사를 평가한다.", - "[6실02-02]성장기에 필요한 간식의 중요성을 이해하고 간식을 선택하거나 만들어 먹을 수 있으며 이때 식생활 예절을 적용한다.", - "[6실02-03]옷의 기능을 이해하여 때와 장소, 상황에 맞는 옷차림을 적용한다.", - "[6실02-04]다양한 식재료의 맛을 비교․분석하여 올바른 식습관 형성에 적용한다.", - "[6실02-05]바느질의 기초를 익혀 간단한 수선에 활용한다.", - "[6실02-06]간단한 생활 소품을 창의적으로 제작하여 활용한다.", - "[6실02-07]자신의 신체 발달을 고려하여 건강하고 안전한 옷차림을 실천한다.", - "[6실02-08]생활 안전사고의 종류와 예방 방법을 알아 실생활에 적용한다.", - "[6실02-09]안전과 위생을 고려하여 식사를 선택하는 방법을 탐색하고 실생활에 적용한다.", - "[6실02-10]밥을 이용한 한 그릇 음식을 위생적이고 안전하게 준비․조리하여 평가한다.", - "[6실03-01]옷의 종류와 용도에 맞게 정리․보관하는 방법을 알고 환경과 관련지어 옷 관리의 중요성을 이해한다.", - "[6실03-02]시간 자원의 특성을 알고, 올바른 시간 관리 방법을 탐색한 후 실생활에 적용한다.", - "[6실03-03]용돈 관리의 필요성을 알고 자신의 필요와 욕구를 고려한 합리적인 소비생활 방법을 탐색하여 실생활에 적용한다.", - "[6실03-04]쾌적한 생활공간 관리의 필요성을 환경과 관련지어 이해하고 올바른 관리 방법을 계획하여 실천한다.", - "[6실03-05]가정일을 담당하고 있는 가족원들의 역할을 탐색하고, 가정생활에 미치는 영향을 이해한다.", - "[6실04-01]가꾸기와 기르기의 의미를 이해하고 동식물 자원의 중요성을 설명한다.", - "[6실04-02]생활 속 식물을 활용 목적에 따라 분류하고, 
가꾸기 활동을 실행한다.", - "[6실04-03]생활 속 동물을 활용 목적에 따라 분류하고, 돌보고 기르는 과정을 실행한다.", - "[6실04-04]수송과 수송 수단의 의미를 알고, 수송 수단의 기본 요소를 설명한다.", - "[6실04-05]다양한 재료를 활용하여 수송 수단을 구상하고, 제작한다.", - "[6실04-06]자전거의 구성 요소와 안전하게 관리하는 방법을 알고 실천한다.", - "[6실04-07]소프트웨어가 적용된 사례를 찾아보고 우리 생활에 미치는 영향을 이해한다.", - "[6실04-08]절차적 사고에 의한 문제 해결의 순서를 생각하고 적용한다.", - "[6실04-09]프로그래밍 도구를 사용하여 기초적인 프로그래밍 과정을 체험한다.", - "[6실04-10]자료를 입력하고 필요한 처리를 수행한 후 결과를 출력하는 단순한 프로그램을 설계한다.", - "[6실04-11]문제를 해결하는 프로그램을 만드는 과정에서 순차, 선택, 반복 등의 구조를 이해한다.", - "[6실05-01]일과 직업의 의미와 중요성을 이해한다.", - "[6실05-02]나를 이해하고 적성, 흥미, 성격에 맞는 직업을 탐색한다.", - "[6실05-03]생활 속에 적용된 발명과 문제해결의 사례를 통해 발명의 의미와 중요성을 이해한다.", - "[6실05-04]다양한 재료를 활용하여 창의적인 제품을 구상하고 제작한다.", - "[6실05-05]사이버 중독 예방, 개인 정보 보호 및 지식 재산 보호의 의미를 알고 생활 속에서 실천한다.", - "[6실05-06]생활 속에서 로봇 활용 사례를 통해 작동 원리와 활용 분야를 이해한다.", - "[6실05-07]여러 가지 센서를 장착한 로봇을 제작한다.", - "[6실05-08]지속 가능한 미래 사회를 위한 친환경 농업의 역할과 중요성을 이해한다.", - "[6실05-09]생활 속의 농업 체험을 통해 지속 가능한 생활을 이해하고 실천 방안을 제안한다.", - "[6실01-01]아동기의 신체적, 인지적, 정서적, 사회적 발달의 특징 및 발달의 개인차를 알아 자신을 이해하고, 건강하게 발달하기 위해 필요한 조건을 설명한다.", - "[6실01-02]아동기에 나타나는 남녀의 성적 발달 변화를 긍정적으로 이해하고 성적 발달과 관련한 자기 관리 방법을 탐색하여 실천한다.", - "[6실01-03]주변 가족의 모습을 통해 나와 가족의 관계 및 역할을 이해하고, 다양한 가족의 가정생활 공통점을 파악하여 가정생활의 중요성을 설명한다.", - "[6실01-04]건강한 가정생활을 위해 가족 구성원의 다양한 요구에 대하여 서로 간의 배려와 돌봄이 필요함을 이해한다.", - "[6실02-01]건강을 위한 균형 잡힌 식사의 중요성과 조건을 알고 자신의 식사를 평가한다.", - "[6실02-02]성장기에 필요한 간식의 중요성을 이해하고 간식을 선택하거나 만들어 먹을 수 있으며 이때 식생활 예절을 적용한다.", - "[6실02-03]옷의 기능을 이해하여 때와 장소, 상황에 맞는 옷차림을 적용한다.", - "[6실02-04]다양한 식재료의 맛을 비교․분석하여 올바른 식습관 형성에 적용한다.", - "[6실02-05]바느질의 기초를 익혀 간단한 수선에 활용한다.", - "[6실02-06]간단한 생활 소품을 창의적으로 제작하여 활용한다.", - "[6실02-07]자신의 신체 발달을 고려하여 건강하고 안전한 옷차림을 실천한다.", - "[6실02-08]생활 안전사고의 종류와 예방 방법을 알아 실생활에 적용한다.", - "[6실02-09]안전과 위생을 고려하여 식사를 선택하는 방법을 탐색하고 실생활에 적용한다.", - "[6실02-10]밥을 이용한 한 그릇 음식을 위생적이고 안전하게 준비․조리하여 평가한다.", - "[6실03-01]옷의 종류와 용도에 맞게 정리․보관하는 방법을 알고 환경과 관련지어 옷 관리의 중요성을 이해한다.", - "[6실03-02]시간 자원의 특성을 알고, 올바른 시간 관리 방법을 탐색한 후 실생활에 적용한다.", - "[6실03-03]용돈 관리의 필요성을 알고 자신의 필요와 욕구를 고려한 합리적인 소비생활 방법을 탐색하여 실생활에 적용한다.", - "[6실03-04]쾌적한 생활공간 관리의 필요성을 환경과 관련지어 이해하고 올바른 관리 방법을 계획하여 실천한다.", - "[6실03-05]가정일을 담당하고 있는 가족원들의 역할을 탐색하고, 가정생활에 미치는 영향을 이해한다.", - "[6실04-01]가꾸기와 기르기의 의미를 이해하고 동식물 자원의 중요성을 설명한다.", - "[6실04-02]생활 속 식물을 활용 목적에 따라 분류하고, 가꾸기 활동을 실행한다.", - "[6실04-03]생활 속 동물을 활용 목적에 따라 분류하고, 돌보고 기르는 과정을 실행한다.", - "[6실04-04]수송과 수송 수단의 의미를 알고, 수송 수단의 기본 요소를 설명한다.", - "[6실04-05]다양한 재료를 활용하여 수송 수단을 구상하고, 제작한다.", - "[6실04-06]자전거의 구성 요소와 안전하게 관리하는 방법을 알고 실천한다.", - "[6실04-07]소프트웨어가 적용된 사례를 찾아보고 우리 생활에 미치는 영향을 이해한다.", - "[6실04-08]절차적 사고에 의한 문제 해결의 순서를 생각하고 적용한다.", - "[6실04-09]프로그래밍 도구를 사용하여 기초적인 프로그래밍 과정을 체험한다.", - "[6실04-10]자료를 입력하고 필요한 처리를 수행한 후 결과를 출력하는 단순한 프로그램을 설계한다.", - "[6실04-11]문제를 해결하는 프로그램을 만드는 과정에서 순차, 선택, 반복 등의 구조를 이해한다.", - "[6실05-01]일과 직업의 의미와 중요성을 이해한다.", - "[6실05-02]나를 이해하고 적성, 흥미, 성격에 맞는 직업을 탐색한다.", - "[6실05-03]생활 속에 적용된 발명과 문제해결의 사례를 통해 발명의 의미와 중요성을 이해한다.", - "[6실05-04]다양한 재료를 활용하여 창의적인 제품을 구상하고 제작한다.", - "[6실05-05]사이버 중독 예방, 개인 정보 보호 및 지식 재산 보호의 의미를 알고 생활 속에서 실천한다.", - "[6실05-06]생활 속에서 로봇 활용 사례를 통해 작동 원리와 활용 분야를 이해한다.", - "[6실05-07]여러 가지 센서를 장착한 로봇을 제작한다.", - "[6실05-08]지속 가능한 미래 사회를 위한 친환경 농업의 역할과 중요성을 이해한다.", - "[6실05-09]생활 속의 농업 체험을 통해 지속 가능한 생활을 이해하고 실천 방안을 제안한다.", - "[6실01-01]아동기의 신체적, 인지적, 정서적, 사회적 발달의 특징 및 발달의 개인차를 알아 자신을 이해하고, 건강하게 발달하기 위해 필요한 조건을 설명한다.", - "[6실01-02]아동기에 나타나는 남녀의 성적 발달 변화를 긍정적으로 이해하고 성적 발달과 관련한 자기 관리 방법을 탐색하여 실천한다.", - "[6실01-03]주변 가족의 모습을 통해 나와 가족의 관계 및 역할을 이해하고, 다양한 가족의 가정생활 공통점을 파악하여 가정생활의 중요성을 설명한다.", - "[6실01-04]건강한 가정생활을 위해 가족 구성원의 다양한 요구에 대하여 서로 간의 배려와 돌봄이 필요함을 이해한다.", - 
"[6실02-01]건강을 위한 균형 잡힌 식사의 중요성과 조건을 알고 자신의 식사를 평가한다.", - "[6실02-02]성장기에 필요한 간식의 중요성을 이해하고 간식을 선택하거나 만들어 먹을 수 있으며 이때 식생활 예절을 적용한다.", - "[6실02-03]옷의 기능을 이해하여 때와 장소, 상황에 맞는 옷차림을 적용한다.", - "[6실02-04]다양한 식재료의 맛을 비교․분석하여 올바른 식습관 형성에 적용한다.", - "[6실02-05]바느질의 기초를 익혀 간단한 수선에 활용한다.", - "[6실02-06]간단한 생활 소품을 창의적으로 제작하여 활용한다.", - "[6실02-07]자신의 신체 발달을 고려하여 건강하고 안전한 옷차림을 실천한다.", - "[6실02-08]생활 안전사고의 종류와 예방 방법을 알아 실생활에 적용한다.", - "[6실02-09]안전과 위생을 고려하여 식사를 선택하는 방법을 탐색하고 실생활에 적용한다.", - "[6실02-10]밥을 이용한 한 그릇 음식을 위생적이고 안전하게 준비․조리하여 평가한다.", - "[6실03-01]옷의 종류와 용도에 맞게 정리․보관하는 방법을 알고 환경과 관련지어 옷 관리의 중요성을 이해한다.", - "[6실03-02]시간 자원의 특성을 알고, 올바른 시간 관리 방법을 탐색한 후 실생활에 적용한다.", - "[6실03-03]용돈 관리의 필요성을 알고 자신의 필요와 욕구를 고려한 합리적인 소비생활 방법을 탐색하여 실생활에 적용한다.", - "[6실03-04]쾌적한 생활공간 관리의 필요성을 환경과 관련지어 이해하고 올바른 관리 방법을 계획하여 실천한다.", - "[6실03-05]가정일을 담당하고 있는 가족원들의 역할을 탐색하고, 가정생활에 미치는 영향을 이해한다.", - "[6실04-01]가꾸기와 기르기의 의미를 이해하고 동식물 자원의 중요성을 설명한다.", - "[6실04-02]생활 속 식물을 활용 목적에 따라 분류하고, 가꾸기 활동을 실행한다.", - "[6실04-03]생활 속 동물을 활용 목적에 따라 분류하고, 돌보고 기르는 과정을 실행한다.", - "[6실04-04]수송과 수송 수단의 의미를 알고, 수송 수단의 기본 요소를 설명한다.", - "[6실04-05]다양한 재료를 활용하여 수송 수단을 구상하고, 제작한다.", - "[6실04-06]자전거의 구성 요소와 안전하게 관리하는 방법을 알고 실천한다.", - "[6실04-07]소프트웨어가 적용된 사례를 찾아보고 우리 생활에 미치는 영향을 이해한다.", - "[6실04-08]절차적 사고에 의한 문제 해결의 순서를 생각하고 적용한다.", - "[6실04-09]프로그래밍 도구를 사용하여 기초적인 프로그래밍 과정을 체험한다.", - "[6실04-10]자료를 입력하고 필요한 처리를 수행한 후 결과를 출력하는 단순한 프로그램을 설계한다.", - "[6실04-11]문제를 해결하는 프로그램을 만드는 과정에서 순차, 선택, 반복 등의 구조를 이해한다.", - "[6실05-01]일과 직업의 의미와 중요성을 이해한다.", - "[6실05-02]나를 이해하고 적성, 흥미, 성격에 맞는 직업을 탐색한다.", - "[6실05-03]생활 속에 적용된 발명과 문제해결의 사례를 통해 발명의 의미와 중요성을 이해한다.", - "[6실05-04]다양한 재료를 활용하여 창의적인 제품을 구상하고 제작한다.", - "[6실05-05]사이버 중독 예방, 개인 정보 보호 및 지식 재산 보호의 의미를 알고 생활 속에서 실천한다.", - "[6실05-06]생활 속에서 로봇 활용 사례를 통해 작동 원리와 활용 분야를 이해한다.", - "[6실05-07]여러 가지 센서를 장착한 로봇을 제작한다.", - "[6실05-08]지속 가능한 미래 사회를 위한 친환경 농업의 역할과 중요성을 이해한다.", - "[6실05-09]생활 속의 농업 체험을 통해 지속 가능한 생활을 이해하고 실천 방안을 제안한다." 
- ], - "도덕": [ - "[6도01-01]감정과 욕구를 조절하지 못해 나타날 수 있는 결과를 도덕적으로 상상해 보고, 올바르게 자신의 감정을 조절하고 표현할 수 있는 방법을 습관화한다.", - "[6도01-02]자주적인 삶을 위해 자신을 이해하고 존중하며 자주적인 삶의 의미와 중요성을 깨닫고 실천방법을 익힌다.", - "[6도01-03]정직의 의미와 정직하게 살아가는 것의 중요성을 탐구하고, 정직과 관련된 갈등 상황에서 정직하게 판단하고 실천하는 방법을 익힌다.", - "[6도02-01]사이버 공간에서 발생하는 여러 문제에 대한 도덕적 민감성을 기르며, 사이버 공간에서 지켜야 할 예절과 법을 알고 습관화한다.", - "[6도02-02]다양한 갈등을 평화적으로 해결하는 것의 중요성과 방법을 알고, 평화적으로 갈등을 해결하려는 의지를 기른다.", - "[6도02-03]봉사의 의미와 중요성을 알고, 주변 사람의 처지를 공감하여 도와주려는 실천 의지를 기른다.", - "[6도03-01]인권의 의미와 인권을 존중하는 삶의 중요성을 이해하고, 인권 존중의 방법을 익힌다.", - "[6도03-02]공정함의 의미와 공정한 사회의 필요성을 이해하고, 일상생활에서 공정하게 생활하려는 실천의지를 기른다.", - "[6도03-03]도덕적 상상하기를 통해 바람직한 통일의 올바른 과정을 탐구하고 통일을 이루려는 의지와 태도를 가진다.", - "[6도03-04]세계화 시대에 인류가 겪고 있는 문제와 그 원인을 토론을 통해 알아보고, 이를 해결하고자 하는 의지를 가지고 실천한다.", - "[6도04-01]긍정적 태도의 의미와 중요성을 알고, 어려움을 극복하기 위한 긍정적 삶의 태도를 습관화한다.", - "[6도04-02]올바르게 산다는 것의 의미와 중요성을 알고, 자기반성과 마음 다스리기를 통해 올바르게 살아가기 위한 능력과 실천 의지를 기른다.", - "[6도01-01]감정과 욕구를 조절하지 못해 나타날 수 있는 결과를 도덕적으로 상상해 보고, 올바르게 자신의 감정을 조절하고 표현할 수 있는 방법을 습관화한다.", - "[6도01-02]자주적인 삶을 위해 자신을 이해하고 존중하며 자주적인 삶의 의미와 중요성을 깨닫고 실천방법을 익힌다.", - "[6도01-03]정직의 의미와 정직하게 살아가는 것의 중요성을 탐구하고, 정직과 관련된 갈등 상황에서 정직하게 판단하고 실천하는 방법을 익힌다.", - "[6도02-01]사이버 공간에서 발생하는 여러 문제에 대한 도덕적 민감성을 기르며, 사이버 공간에서 지켜야 할 예절과 법을 알고 습관화한다.", - "[6도02-02]다양한 갈등을 평화적으로 해결하는 것의 중요성과 방법을 알고, 평화적으로 갈등을 해결하려는 의지를 기른다.", - "[6도02-03]봉사의 의미와 중요성을 알고, 주변 사람의 처지를 공감하여 도와주려는 실천 의지를 기른다.", - "[6도03-01]인권의 의미와 인권을 존중하는 삶의 중요성을 이해하고, 인권 존중의 방법을 익힌다.", - "[6도03-02]공정함의 의미와 공정한 사회의 필요성을 이해하고, 일상생활에서 공정하게 생활하려는 실천의지를 기른다.", - "[6도03-03]도덕적 상상하기를 통해 바람직한 통일의 올바른 과정을 탐구하고 통일을 이루려는 의지와 태도를 가진다.", - "[6도03-04]세계화 시대에 인류가 겪고 있는 문제와 그 원인을 토론을 통해 알아보고, 이를 해결하고자 하는 의지를 가지고 실천한다.", - "[6도04-01]긍정적 태도의 의미와 중요성을 알고, 어려움을 극복하기 위한 긍정적 삶의 태도를 습관화한다.", - "[6도04-02]올바르게 산다는 것의 의미와 중요성을 알고, 자기반성과 마음 다스리기를 통해 올바르게 살아가기 위한 능력과 실천 의지를 기른다.", - "[6도01-01]감정과 욕구를 조절하지 못해 나타날 수 있는 결과를 도덕적으로 상상해 보고, 올바르게 자신의 감정을 조절하고 표현할 수 있는 방법을 습관화한다.", - "[6도01-02]자주적인 삶을 위해 자신을 이해하고 존중하며 자주적인 삶의 의미와 중요성을 깨닫고 실천방법을 익힌다.", - "[6도01-03]정직의 의미와 정직하게 살아가는 것의 중요성을 탐구하고, 정직과 관련된 갈등 상황에서 정직하게 판단하고 실천하는 방법을 익힌다.", - "[6도02-01]사이버 공간에서 발생하는 여러 문제에 대한 도덕적 민감성을 기르며, 사이버 공간에서 지켜야 할 예절과 법을 알고 습관화한다.", - "[6도02-02]다양한 갈등을 평화적으로 해결하는 것의 중요성과 방법을 알고, 평화적으로 갈등을 해결하려는 의지를 기른다.", - "[6도02-03]봉사의 의미와 중요성을 알고, 주변 사람의 처지를 공감하여 도와주려는 실천 의지를 기른다.", - "[6도03-01]인권의 의미와 인권을 존중하는 삶의 중요성을 이해하고, 인권 존중의 방법을 익힌다.", - "[6도03-02]공정함의 의미와 공정한 사회의 필요성을 이해하고, 일상생활에서 공정하게 생활하려는 실천의지를 기른다.", - "[6도03-03]도덕적 상상하기를 통해 바람직한 통일의 올바른 과정을 탐구하고 통일을 이루려는 의지와 태도를 가진다.", - "[6도03-04]세계화 시대에 인류가 겪고 있는 문제와 그 원인을 토론을 통해 알아보고, 이를 해결하고자 하는 의지를 가지고 실천한다.", - "[6도04-01]긍정적 태도의 의미와 중요성을 알고, 어려움을 극복하기 위한 긍정적 삶의 태도를 습관화한다.", - "[6도04-02]올바르게 산다는 것의 의미와 중요성을 알고, 자기반성과 마음 다스리기를 통해 올바르게 살아가기 위한 능력과 실천 의지를 기른다.", - "[6도01-01]감정과 욕구를 조절하지 못해 나타날 수 있는 결과를 도덕적으로 상상해 보고, 올바르게 자신의 감정을 조절하고 표현할 수 있는 방법을 습관화한다.", - "[6도01-02]자주적인 삶을 위해 자신을 이해하고 존중하며 자주적인 삶의 의미와 중요성을 깨닫고 실천방법을 익힌다.", - "[6도01-03]정직의 의미와 정직하게 살아가는 것의 중요성을 탐구하고, 정직과 관련된 갈등 상황에서 정직하게 판단하고 실천하는 방법을 익힌다.", - "[6도02-01]사이버 공간에서 발생하는 여러 문제에 대한 도덕적 민감성을 기르며, 사이버 공간에서 지켜야 할 예절과 법을 알고 습관화한다.", - "[6도02-02]다양한 갈등을 평화적으로 해결하는 것의 중요성과 방법을 알고, 평화적으로 갈등을 해결하려는 의지를 기른다.", - "[6도02-03]봉사의 의미와 중요성을 알고, 주변 사람의 처지를 공감하여 도와주려는 실천 의지를 기른다.", - "[6도03-01]인권의 의미와 인권을 존중하는 삶의 중요성을 이해하고, 인권 존중의 방법을 익힌다.", - "[6도03-02]공정함의 의미와 공정한 사회의 필요성을 이해하고, 일상생활에서 공정하게 생활하려는 실천의지를 기른다.", - "[6도03-03]도덕적 상상하기를 통해 바람직한 통일의 올바른 과정을 탐구하고 통일을 이루려는 의지와 태도를 가진다.", - "[6도03-04]세계화 시대에 인류가 겪고 있는 문제와 그 원인을 토론을 통해 알아보고, 이를 해결하고자 하는 의지를 가지고 실천한다.", - "[6도04-01]긍정적 태도의 의미와 중요성을 알고, 어려움을 극복하기 위한 
긍정적 삶의 태도를 습관화한다.", - "[6도04-02]올바르게 산다는 것의 의미와 중요성을 알고, 자기반성과 마음 다스리기를 통해 올바르게 살아가기 위한 능력과 실천 의지를 기른다." - ] - } -} - - diff --git a/spaces/qingxu98/gpt-academic/docs/waifu_plugin/live2d.js b/spaces/qingxu98/gpt-academic/docs/waifu_plugin/live2d.js deleted file mode 100644 index 2cf559be672c438dfbd35db61eea12465ed0dffb..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/docs/waifu_plugin/live2d.js +++ /dev/null @@ -1,4238 +0,0 @@ -! -function(t) { - function i(r) { - if (e[r]) return e[r].exports; - var o = e[r] = { - i: r, - l: !1, - exports: {} - }; - return t[r].call(o.exports, o, o.exports, i), o.l = !0, o.exports - } - var e = {}; - i.m = t, i.c = e, i.d = function(t, e, r) { - i.o(t, e) || Object.defineProperty(t, e, { - configurable: !1, - enumerable: !0, - get: r - }) - }, i.n = function(t) { - var e = t && t.__esModule ? - function() { - return t. - default - } : function() { - return t - }; - return i.d(e, "a", e), e - }, i.o = function(t, i) { - return Object.prototype.hasOwnProperty.call(t, i) - }, i.p = "", i(i.s = 4) -}([function(t, i, e) { - "use strict"; - - function r() { - this.live2DModel = null, this.modelMatrix = null, this.eyeBlink = null, this.physics = null, this.pose = null, this.debugMode = !1, this.initialized = !1, this.updating = !1, this.alpha = 1, this.accAlpha = 0, this.lipSync = !1, this.lipSyncValue = 0, this.accelX = 0, this.accelY = 0, this.accelZ = 0, this.dragX = 0, this.dragY = 0, this.startTimeMSec = null, this.mainMotionManager = new h, this.expressionManager = new h, this.motions = {}, this.expressions = {}, this.isTexLoaded = !1 - } - function o() { - AMotion.prototype.constructor.call(this), this.paramList = new Array - } - function n() { - this.id = "", this.type = -1, this.value = null - } - function s() { - this.nextBlinkTime = null, this.stateStartTime = null, this.blinkIntervalMsec = null, this.eyeState = g.STATE_FIRST, this.blinkIntervalMsec = 4e3, this.closingMotionMsec = 100, this.closedMotionMsec = 50, this.openingMotionMsec = 150, this.closeIfZero = !0, this.eyeID_L = "PARAM_EYE_L_OPEN", this.eyeID_R = "PARAM_EYE_R_OPEN" - } - function _() { - this.tr = new Float32Array(16), this.identity() - } - function a(t, i) { - _.prototype.constructor.call(this), this.width = t, this.height = i - } - function h() { - MotionQueueManager.prototype.constructor.call(this), this.currentPriority = null, this.reservePriority = null, this.super = MotionQueueManager.prototype - } - function l() { - this.physicsList = new Array, this.startTimeMSec = UtSystem.getUserTimeMSec() - } - function $() { - this.lastTime = 0, this.lastModel = null, this.partsGroups = new Array - } - function u(t) { - this.paramIndex = -1, this.partsIndex = -1, this.link = null, this.id = t - } - function p() { - this.EPSILON = .01, this.faceTargetX = 0, this.faceTargetY = 0, this.faceX = 0, this.faceY = 0, this.faceVX = 0, this.faceVY = 0, this.lastTimeSec = 0 - } - function f() { - _.prototype.constructor.call(this), this.screenLeft = null, this.screenRight = null, this.screenTop = null, this.screenBottom = null, this.maxLeft = null, this.maxRight = null, this.maxTop = null, this.maxBottom = null, this.max = Number.MAX_VALUE, this.min = 0 - } - function c() {} - var d = 0; - r.prototype.getModelMatrix = function() { - return this.modelMatrix - }, r.prototype.setAlpha = function(t) { - t > .999 && (t = 1), t < .001 && (t = 0), this.alpha = t - }, r.prototype.getAlpha = function() { - return this.alpha - }, r.prototype.isInitialized = function() { - return 
this.initialized - }, r.prototype.setInitialized = function(t) { - this.initialized = t - }, r.prototype.isUpdating = function() { - return this.updating - }, r.prototype.setUpdating = function(t) { - this.updating = t - }, r.prototype.getLive2DModel = function() { - return this.live2DModel - }, r.prototype.setLipSync = function(t) { - this.lipSync = t - }, r.prototype.setLipSyncValue = function(t) { - this.lipSyncValue = t - }, r.prototype.setAccel = function(t, i, e) { - this.accelX = t, this.accelY = i, this.accelZ = e - }, r.prototype.setDrag = function(t, i) { - this.dragX = t, this.dragY = i - }, r.prototype.getMainMotionManager = function() { - return this.mainMotionManager - }, r.prototype.getExpressionManager = function() { - return this.expressionManager - }, r.prototype.loadModelData = function(t, i) { - var e = c.getPlatformManager(); - this.debugMode && e.log("Load model : " + t); - var r = this; - e.loadLive2DModel(t, function(t) { - if (r.live2DModel = t, r.live2DModel.saveParam(), 0 != Live2D.getError()) return void console.error("Error : Failed to loadModelData()."); - r.modelMatrix = new a(r.live2DModel.getCanvasWidth(), r.live2DModel.getCanvasHeight()), r.modelMatrix.setWidth(2), r.modelMatrix.setCenterPosition(0, 0), i(r.live2DModel) - }) - }, r.prototype.loadTexture = function(t, i, e) { - d++; - var r = c.getPlatformManager(); - this.debugMode && r.log("Load Texture : " + i); - var o = this; - r.loadTexture(this.live2DModel, t, i, function() { - d--, 0 == d && (o.isTexLoaded = !0), "function" == typeof e && e() - }) - }, r.prototype.loadMotion = function(t, i, e) { - var r = c.getPlatformManager(); - this.debugMode && r.log("Load Motion : " + i); - var o = null, - n = this; - r.loadBytes(i, function(i) { - o = Live2DMotion.loadMotion(i), null != t && (n.motions[t] = o), e(o) - }) - }, r.prototype.loadExpression = function(t, i, e) { - var r = c.getPlatformManager(); - this.debugMode && r.log("Load Expression : " + i); - var n = this; - r.loadBytes(i, function(i) { - null != t && (n.expressions[t] = o.loadJson(i)), "function" == typeof e && e() - }) - }, r.prototype.loadPose = function(t, i) { - var e = c.getPlatformManager(); - this.debugMode && e.log("Load Pose : " + t); - var r = this; - try { - e.loadBytes(t, function(t) { - r.pose = $.load(t), "function" == typeof i && i() - }) - } catch (t) { - console.warn(t) - } - }, r.prototype.loadPhysics = function(t) { - var i = c.getPlatformManager(); - this.debugMode && i.log("Load Physics : " + t); - var e = this; - try { - i.loadBytes(t, function(t) { - e.physics = l.load(t) - }) - } catch (t) { - console.warn(t) - } - }, r.prototype.hitTestSimple = function(t, i, e) { - if (null === this.live2DModel) return !1; - var r = this.live2DModel.getDrawDataIndex(t); - if (r < 0) return !1; - for (var o = this.live2DModel.getTransformedPoints(r), n = this.live2DModel.getCanvasWidth(), s = 0, _ = this.live2DModel.getCanvasHeight(), a = 0, h = 0; h < o.length; h += 2) { - var l = o[h], - $ = o[h + 1]; - l < n && (n = l), l > s && (s = l), $ < _ && (_ = $), $ > a && (a = $) - } - var u = this.modelMatrix.invertTransformX(i), - p = this.modelMatrix.invertTransformY(e); - return n <= u && u <= s && _ <= p && p <= a - }, r.prototype.hitTestSimpleCustom = function(t, i, e, r) { - return null !== this.live2DModel && (e >= t[0] && e <= i[0] && r <= t[1] && r >= i[1]) - }, o.prototype = new AMotion, o.EXPRESSION_DEFAULT = "DEFAULT", o.TYPE_SET = 0, o.TYPE_ADD = 1, o.TYPE_MULT = 2, o.loadJson = function(t) { - var i = new o, - e = 
c.getPlatformManager(), - r = e.jsonParseFromBytes(t); - if (i.setFadeIn(parseInt(r.fade_in) > 0 ? parseInt(r.fade_in) : 1e3), i.setFadeOut(parseInt(r.fade_out) > 0 ? parseInt(r.fade_out) : 1e3), null == r.params) return i; - var s = r.params, - _ = s.length; - i.paramList = []; - for (var a = 0; a < _; a++) { - var h = s[a], - l = h.id.toString(), - $ = parseFloat(h.val), - u = o.TYPE_ADD, - p = null != h.calc ? h.calc.toString() : "add"; - if ((u = "add" === p ? o.TYPE_ADD : "mult" === p ? o.TYPE_MULT : "set" === p ? o.TYPE_SET : o.TYPE_ADD) == o.TYPE_ADD) { - var f = null == h.def ? 0 : parseFloat(h.def); - $ -= f - } else if (u == o.TYPE_MULT) { - var f = null == h.def ? 1 : parseFloat(h.def); - 0 == f && (f = 1), $ /= f - } - var d = new n; - d.id = l, d.type = u, d.value = $, i.paramList.push(d) - } - return i - }, o.prototype.updateParamExe = function(t, i, e, r) { - for (var n = this.paramList.length - 1; n >= 0; --n) { - var s = this.paramList[n]; - s.type == o.TYPE_ADD ? t.addToParamFloat(s.id, s.value, e) : s.type == o.TYPE_MULT ? t.multParamFloat(s.id, s.value, e) : s.type == o.TYPE_SET && t.setParamFloat(s.id, s.value, e) - } - }, s.prototype.calcNextBlink = function() { - return UtSystem.getUserTimeMSec() + Math.random() * (2 * this.blinkIntervalMsec - 1) - }, s.prototype.setInterval = function(t) { - this.blinkIntervalMsec = t - }, s.prototype.setEyeMotion = function(t, i, e) { - this.closingMotionMsec = t, this.closedMotionMsec = i, this.openingMotionMsec = e - }, s.prototype.updateParam = function(t) { - var i, e = UtSystem.getUserTimeMSec(), - r = 0; - switch (this.eyeState) { - case g.STATE_CLOSING: - r = (e - this.stateStartTime) / this.closingMotionMsec, r >= 1 && (r = 1, this.eyeState = g.STATE_CLOSED, this.stateStartTime = e), i = 1 - r; - break; - case g.STATE_CLOSED: - r = (e - this.stateStartTime) / this.closedMotionMsec, r >= 1 && (this.eyeState = g.STATE_OPENING, this.stateStartTime = e), i = 0; - break; - case g.STATE_OPENING: - r = (e - this.stateStartTime) / this.openingMotionMsec, r >= 1 && (r = 1, this.eyeState = g.STATE_INTERVAL, this.nextBlinkTime = this.calcNextBlink()), i = r; - break; - case g.STATE_INTERVAL: - this.nextBlinkTime < e && (this.eyeState = g.STATE_CLOSING, this.stateStartTime = e), i = 1; - break; - case g.STATE_FIRST: - default: - this.eyeState = g.STATE_INTERVAL, this.nextBlinkTime = this.calcNextBlink(), i = 1 - } - this.closeIfZero || (i = -i), t.setParamFloat(this.eyeID_L, i), t.setParamFloat(this.eyeID_R, i) - }; - var g = function() {}; - g.STATE_FIRST = "STATE_FIRST", g.STATE_INTERVAL = "STATE_INTERVAL", g.STATE_CLOSING = "STATE_CLOSING", g.STATE_CLOSED = "STATE_CLOSED", g.STATE_OPENING = "STATE_OPENING", _.mul = function(t, i, e) { - var r, o, n, s = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]; - for (r = 0; r < 4; r++) for (o = 0; o < 4; o++) for (n = 0; n < 4; n++) s[r + 4 * o] += t[r + 4 * n] * i[n + 4 * o]; - for (r = 0; r < 16; r++) e[r] = s[r] - }, _.prototype.identity = function() { - for (var t = 0; t < 16; t++) this.tr[t] = t % 5 == 0 ? 
1 : 0 - }, _.prototype.getArray = function() { - return this.tr - }, _.prototype.getCopyMatrix = function() { - return new Float32Array(this.tr) - }, _.prototype.setMatrix = function(t) { - if (null != this.tr && this.tr.length == this.tr.length) for (var i = 0; i < 16; i++) this.tr[i] = t[i] - }, _.prototype.getScaleX = function() { - return this.tr[0] - }, _.prototype.getScaleY = function() { - return this.tr[5] - }, _.prototype.transformX = function(t) { - return this.tr[0] * t + this.tr[12] - }, _.prototype.transformY = function(t) { - return this.tr[5] * t + this.tr[13] - }, _.prototype.invertTransformX = function(t) { - return (t - this.tr[12]) / this.tr[0] - }, _.prototype.invertTransformY = function(t) { - return (t - this.tr[13]) / this.tr[5] - }, _.prototype.multTranslate = function(t, i) { - var e = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1]; - _.mul(e, this.tr, this.tr) - }, _.prototype.translate = function(t, i) { - this.tr[12] = t, this.tr[13] = i - }, _.prototype.translateX = function(t) { - this.tr[12] = t - }, _.prototype.translateY = function(t) { - this.tr[13] = t - }, _.prototype.multScale = function(t, i) { - var e = [t, 0, 0, 0, 0, i, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]; - _.mul(e, this.tr, this.tr) - }, _.prototype.scale = function(t, i) { - this.tr[0] = t, this.tr[5] = i - }, a.prototype = new _, a.prototype.setPosition = function(t, i) { - this.translate(t, i) - }, a.prototype.setCenterPosition = function(t, i) { - var e = this.width * this.getScaleX(), - r = this.height * this.getScaleY(); - this.translate(t - e / 2, i - r / 2) - }, a.prototype.top = function(t) { - this.setY(t) - }, a.prototype.bottom = function(t) { - var i = this.height * this.getScaleY(); - this.translateY(t - i) - }, a.prototype.left = function(t) { - this.setX(t) - }, a.prototype.right = function(t) { - var i = this.width * this.getScaleX(); - this.translateX(t - i) - }, a.prototype.centerX = function(t) { - var i = this.width * this.getScaleX(); - this.translateX(t - i / 2) - }, a.prototype.centerY = function(t) { - var i = this.height * this.getScaleY(); - this.translateY(t - i / 2) - }, a.prototype.setX = function(t) { - this.translateX(t) - }, a.prototype.setY = function(t) { - this.translateY(t) - }, a.prototype.setHeight = function(t) { - var i = t / this.height, - e = -i; - this.scale(i, e) - }, a.prototype.setWidth = function(t) { - var i = t / this.width, - e = -i; - this.scale(i, e) - }, h.prototype = new MotionQueueManager, h.prototype.getCurrentPriority = function() { - return this.currentPriority - }, h.prototype.getReservePriority = function() { - return this.reservePriority - }, h.prototype.reserveMotion = function(t) { - return !(this.reservePriority >= t) && (!(this.currentPriority >= t) && (this.reservePriority = t, !0)) - }, h.prototype.setReservePriority = function(t) { - this.reservePriority = t - }, h.prototype.updateParam = function(t) { - var i = MotionQueueManager.prototype.updateParam.call(this, t); - return this.isFinished() && (this.currentPriority = 0), i - }, h.prototype.startMotionPrio = function(t, i) { - return i == this.reservePriority && (this.reservePriority = 0), this.currentPriority = i, this.startMotion(t, !1) - }, l.load = function(t) { - for (var i = new l, e = c.getPlatformManager(), r = e.jsonParseFromBytes(t), o = r.physics_hair, n = o.length, s = 0; s < n; s++) { - var _ = o[s], - a = new PhysicsHair, - h = _.setup, - $ = parseFloat(h.length), - u = parseFloat(h.regist), - p = parseFloat(h.mass); - a.setup($, u, p); - for (var f = _.src, d = 
f.length, g = 0; g < d; g++) { - var y = f[g], - m = y.id, - T = PhysicsHair.Src.SRC_TO_X, - P = y.ptype; - "x" === P ? T = PhysicsHair.Src.SRC_TO_X : "y" === P ? T = PhysicsHair.Src.SRC_TO_Y : "angle" === P ? T = PhysicsHair.Src.SRC_TO_G_ANGLE : UtDebug.error("live2d", "Invalid parameter:PhysicsHair.Src"); - var S = parseFloat(y.scale), - v = parseFloat(y.weight); - a.addSrcParam(T, m, S, v) - } - for (var L = _.targets, M = L.length, g = 0; g < M; g++) { - var E = L[g], - m = E.id, - T = PhysicsHair.Target.TARGET_FROM_ANGLE, - P = E.ptype; - "angle" === P ? T = PhysicsHair.Target.TARGET_FROM_ANGLE : "angle_v" === P ? T = PhysicsHair.Target.TARGET_FROM_ANGLE_V : UtDebug.error("live2d", "Invalid parameter:PhysicsHair.Target"); - var S = parseFloat(E.scale), - v = parseFloat(E.weight); - a.addTargetParam(T, m, S, v) - } - i.physicsList.push(a) - } - return i - }, l.prototype.updateParam = function(t) { - for (var i = UtSystem.getUserTimeMSec() - this.startTimeMSec, e = 0; e < this.physicsList.length; e++) this.physicsList[e].update(t, i) - }, $.load = function(t) { - for (var i = new $, e = c.getPlatformManager(), r = e.jsonParseFromBytes(t), o = r.parts_visible, n = o.length, s = 0; s < n; s++) { - for (var _ = o[s], a = _.group, h = a.length, l = new Array, p = 0; p < h; p++) { - var f = a[p], - d = new u(f.id); - if (l[p] = d, null != f.link) { - var g = f.link, - y = g.length; - d.link = new Array; - for (var m = 0; m < y; m++) { - var T = new u(g[m]); - d.link.push(T) - } - } - } - i.partsGroups.push(l) - } - return i - }, $.prototype.updateParam = function(t) { - if (null != t) { - t != this.lastModel && this.initParam(t), this.lastModel = t; - var i = UtSystem.getUserTimeMSec(), - e = 0 == this.lastTime ? 0 : (i - this.lastTime) / 1e3; - this.lastTime = i, e < 0 && (e = 0); - for (var r = 0; r < this.partsGroups.length; r++) this.normalizePartsOpacityGroup(t, this.partsGroups[r], e), this.copyOpacityOtherParts(t, this.partsGroups[r]) - } - }, $.prototype.initParam = function(t) { - if (null != t) for (var i = 0; i < this.partsGroups.length; i++) for (var e = this.partsGroups[i], r = 0; r < e.length; r++) { - e[r].initIndex(t); - var o = e[r].partsIndex, - n = e[r].paramIndex; - if (!(o < 0)) { - var s = 0 != t.getParamFloat(n); - if (t.setPartsOpacity(o, s ? 1 : 0), t.setParamFloat(n, s ? 1 : 0), null != e[r].link) for (var _ = 0; _ < e[r].link.length; _++) e[r].link[_].initIndex(t) - } - } - }, $.prototype.normalizePartsOpacityGroup = function(t, i, e) { - for (var r = -1, o = 1, n = 0; n < i.length; n++) { - var s = i[n].partsIndex, - _ = i[n].paramIndex; - if (!(s < 0) && 0 != t.getParamFloat(_)) { - if (r >= 0) break; - r = n, o = t.getPartsOpacity(s), o += e / .5, o > 1 && (o = 1) - } - } - r < 0 && (r = 0, o = 1); - for (var n = 0; n < i.length; n++) { - var s = i[n].partsIndex; - if (!(s < 0)) if (r == n) t.setPartsOpacity(s, o); - else { - var a, h = t.getPartsOpacity(s); - a = o < .5 ? 
-.5 * o / .5 + 1 : .5 * (1 - o) / .5; - var l = (1 - a) * (1 - o); - l > .15 && (a = 1 - .15 / (1 - o)), h > a && (h = a), t.setPartsOpacity(s, h) - } - } - }, $.prototype.copyOpacityOtherParts = function(t, i) { - for (var e = 0; e < i.length; e++) { - var r = i[e]; - if (null != r.link && !(r.partsIndex < 0)) for (var o = t.getPartsOpacity(r.partsIndex), n = 0; n < r.link.length; n++) { - var s = r.link[n]; - s.partsIndex < 0 || t.setPartsOpacity(s.partsIndex, o) - } - } - }, u.prototype.initIndex = function(t) { - this.paramIndex = t.getParamIndex("VISIBLE:" + this.id), this.partsIndex = t.getPartsDataIndex(PartsDataID.getID(this.id)), t.setParamFloat(this.paramIndex, 1) - }, p.FRAME_RATE = 30, p.prototype.setPoint = function(t, i) { - this.faceTargetX = t, this.faceTargetY = i - }, p.prototype.getX = function() { - return this.faceX - }, p.prototype.getY = function() { - return this.faceY - }, p.prototype.update = function() { - var t = 40 / 7.5 / p.FRAME_RATE; - if (0 == this.lastTimeSec) return void(this.lastTimeSec = UtSystem.getUserTimeMSec()); - var i = UtSystem.getUserTimeMSec(), - e = (i - this.lastTimeSec) * p.FRAME_RATE / 1e3; - this.lastTimeSec = i; - var r = .15 * p.FRAME_RATE, - o = e * t / r, - n = this.faceTargetX - this.faceX, - s = this.faceTargetY - this.faceY; - if (!(Math.abs(n) <= this.EPSILON && Math.abs(s) <= this.EPSILON)) { - var _ = Math.sqrt(n * n + s * s), - a = t * n / _, - h = t * s / _, - l = a - this.faceVX, - $ = h - this.faceVY, - u = Math.sqrt(l * l + $ * $); - (u < -o || u > o) && (l *= o / u, $ *= o / u, u = o), this.faceVX += l, this.faceVY += $; - var f = .5 * (Math.sqrt(o * o + 16 * o * _ - 8 * o * _) - o), - c = Math.sqrt(this.faceVX * this.faceVX + this.faceVY * this.faceVY); - c > f && (this.faceVX *= f / c, this.faceVY *= f / c), this.faceX += this.faceVX, this.faceY += this.faceVY - } - }, f.prototype = new _, f.prototype.getMaxScale = function() { - return this.max - }, f.prototype.getMinScale = function() { - return this.min - }, f.prototype.setMaxScale = function(t) { - this.max = t - }, f.prototype.setMinScale = function(t) { - this.min = t - }, f.prototype.isMaxScale = function() { - return this.getScaleX() == this.max - }, f.prototype.isMinScale = function() { - return this.getScaleX() == this.min - }, f.prototype.adjustTranslate = function(t, i) { - this.tr[0] * this.maxLeft + (this.tr[12] + t) > this.screenLeft && (t = this.screenLeft - this.tr[0] * this.maxLeft - this.tr[12]), this.tr[0] * this.maxRight + (this.tr[12] + t) < this.screenRight && (t = this.screenRight - this.tr[0] * this.maxRight - this.tr[12]), this.tr[5] * this.maxTop + (this.tr[13] + i) < this.screenTop && (i = this.screenTop - this.tr[5] * this.maxTop - this.tr[13]), this.tr[5] * this.maxBottom + (this.tr[13] + i) > this.screenBottom && (i = this.screenBottom - this.tr[5] * this.maxBottom - this.tr[13]); - var e = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1]; - _.mul(e, this.tr, this.tr) - }, f.prototype.adjustScale = function(t, i, e) { - var r = e * this.tr[0]; - r < this.min ? 
this.tr[0] > 0 && (e = this.min / this.tr[0]) : r > this.max && this.tr[0] > 0 && (e = this.max / this.tr[0]); - var o = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1], - n = [e, 0, 0, 0, 0, e, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], - s = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, -t, -i, 0, 1]; - _.mul(s, this.tr, this.tr), _.mul(n, this.tr, this.tr), _.mul(o, this.tr, this.tr) - }, f.prototype.setScreenRect = function(t, i, e, r) { - this.screenLeft = t, this.screenRight = i, this.screenTop = r, this.screenBottom = e - }, f.prototype.setMaxScreenRect = function(t, i, e, r) { - this.maxLeft = t, this.maxRight = i, this.maxTop = r, this.maxBottom = e - }, f.prototype.getScreenLeft = function() { - return this.screenLeft - }, f.prototype.getScreenRight = function() { - return this.screenRight - }, f.prototype.getScreenBottom = function() { - return this.screenBottom - }, f.prototype.getScreenTop = function() { - return this.screenTop - }, f.prototype.getMaxLeft = function() { - return this.maxLeft - }, f.prototype.getMaxRight = function() { - return this.maxRight - }, f.prototype.getMaxBottom = function() { - return this.maxBottom - }, f.prototype.getMaxTop = function() { - return this.maxTop - }, c.platformManager = null, c.getPlatformManager = function() { - return c.platformManager - }, c.setPlatformManager = function(t) { - c.platformManager = t - }, t.exports = { - L2DTargetPoint: p, - Live2DFramework: c, - L2DViewMatrix: f, - L2DPose: $, - L2DPartsParam: u, - L2DPhysics: l, - L2DMotionManager: h, - L2DModelMatrix: a, - L2DMatrix44: _, - EYE_STATE: g, - L2DEyeBlink: s, - L2DExpressionParam: n, - L2DExpressionMotion: o, - L2DBaseModel: r - } -}, function(t, i, e) { - "use strict"; - var r = { - DEBUG_LOG: !1, - DEBUG_MOUSE_LOG: !1, - DEBUG_DRAW_HIT_AREA: !1, - DEBUG_DRAW_ALPHA_MODEL: !1, - VIEW_MAX_SCALE: 2, - VIEW_MIN_SCALE: .8, - VIEW_LOGICAL_LEFT: -1, - VIEW_LOGICAL_RIGHT: 1, - VIEW_LOGICAL_MAX_LEFT: -2, - VIEW_LOGICAL_MAX_RIGHT: 2, - VIEW_LOGICAL_MAX_BOTTOM: -2, - VIEW_LOGICAL_MAX_TOP: 2, - PRIORITY_NONE: 0, - PRIORITY_IDLE: 1, - PRIORITY_SLEEPY: 2, - PRIORITY_NORMAL: 3, - PRIORITY_FORCE: 4, - MOTION_GROUP_IDLE: "idle", - MOTION_GROUP_SLEEPY: "sleepy", - MOTION_GROUP_TAP_BODY: "tap_body", - MOTION_GROUP_FLICK_HEAD: "flick_head", - MOTION_GROUP_PINCH_IN: "pinch_in", - MOTION_GROUP_PINCH_OUT: "pinch_out", - MOTION_GROUP_SHAKE: "shake", - HIT_AREA_HEAD: "head", - HIT_AREA_BODY: "body" - }; - t.exports = r -}, function(t, i, e) { - "use strict"; - - function r(t) { - n = t - } - function o() { - return n - } - Object.defineProperty(i, "__esModule", { - value: !0 - }), i.setContext = r, i.getContext = o; - var n = void 0 -}, function(t, i, e) { - "use strict"; - - function r() {} - r.matrixStack = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], r.depth = 0, r.currentMatrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], r.tmp = new Array(16), r.reset = function() { - this.depth = 0 - }, r.loadIdentity = function() { - for (var t = 0; t < 16; t++) this.currentMatrix[t] = t % 5 == 0 ? 
1 : 0 - }, r.push = function() { - var t = (this.depth, 16 * (this.depth + 1)); - this.matrixStack.length < t + 16 && (this.matrixStack.length = t + 16); - for (var i = 0; i < 16; i++) this.matrixStack[t + i] = this.currentMatrix[i]; - this.depth++ - }, r.pop = function() { - --this.depth < 0 && (myError("Invalid matrix stack."), this.depth = 0); - for (var t = 16 * this.depth, i = 0; i < 16; i++) this.currentMatrix[i] = this.matrixStack[t + i] - }, r.getMatrix = function() { - return this.currentMatrix - }, r.multMatrix = function(t) { - var i, e, r; - for (i = 0; i < 16; i++) this.tmp[i] = 0; - for (i = 0; i < 4; i++) for (e = 0; e < 4; e++) for (r = 0; r < 4; r++) this.tmp[i + 4 * e] += this.currentMatrix[i + 4 * r] * t[r + 4 * e]; - for (i = 0; i < 16; i++) this.currentMatrix[i] = this.tmp[i] - }, t.exports = r -}, function(t, i, e) { - t.exports = e(5) -}, function(t, i, e) { - "use strict"; - - function r(t) { - return t && t.__esModule ? t : { - default: - t - } - } - function o(t) { - C = document.getElementById(t), C.addEventListener && (window.addEventListener("click", g), window.addEventListener("mousedown", g), window.addEventListener("mousemove", g), window.addEventListener("mouseup", g), document.addEventListener("mouseout", g), window.addEventListener("touchstart", y), window.addEventListener("touchend", y), window.addEventListener("touchmove", y)) - } - function n(t) { - var i = C.width, - e = C.height; - N = new M.L2DTargetPoint; - var r = e / i, - o = w. - default.VIEW_LOGICAL_LEFT, - n = w. - default.VIEW_LOGICAL_RIGHT, - _ = -r, - h = r; - if (window.Live2D.captureFrame = !1, B = new M.L2DViewMatrix, B.setScreenRect(o, n, _, h), B.setMaxScreenRect(w. - default.VIEW_LOGICAL_MAX_LEFT, w. - default.VIEW_LOGICAL_MAX_RIGHT, w. - default.VIEW_LOGICAL_MAX_BOTTOM, w. - default.VIEW_LOGICAL_MAX_TOP), B.setMaxScale(w. - default.VIEW_MAX_SCALE), B.setMinScale(w. - default.VIEW_MIN_SCALE), U = new M.L2DMatrix44, U.multScale(1, i / e), G = new M.L2DMatrix44, G.multTranslate(-i / 2, -e / 2), G.multScale(2 / i, -2 / i), F = v(), (0, D.setContext)(F), !F) return console.error("Failed to create WebGL context."), void(window.WebGLRenderingContext && console.error("Your browser don't support WebGL, check https://get.webgl.org/ for futher information.")); - window.Live2D.setGL(F), F.clearColor(0, 0, 0, 0), a(t), s() - } - function s() { - b || (b = !0, function t() { - _(); - var i = window.requestAnimationFrame || window.mozRequestAnimationFrame || window.webkitRequestAnimationFrame || window.msRequestAnimationFrame; - if (window.Live2D.captureFrame) { - window.Live2D.captureFrame = !1; - var e = document.createElement("a"); - document.body.appendChild(e), e.setAttribute("type", "hidden"), e.href = C.toDataURL(), e.download = window.Live2D.captureName || "live2d.png", e.click() - } - i(t, C) - }()) - } - function _() { - O. - default.reset(), O. - default.loadIdentity(), N.update(), R.setDrag(N.getX(), N.getY()), F.clear(F.COLOR_BUFFER_BIT), O. - default.multMatrix(U.getArray()), O. - default.multMatrix(B.getArray()), O. - default.push(); - for (var t = 0; t < R.numModels(); t++) { - var i = R.getModel(t); - if (null == i) return; - i.initialized && !i.updating && (i.update(), i.draw(F)) - } - O. 
- default.pop() - } - function a(t) { - R.reloadFlg = !0, R.count++, R.changeModel(F, t) - } - function h(t, i) { - return t.x * i.x + t.y * i.y - } - function l(t, i) { - var e = Math.sqrt(t * t + i * i); - return { - x: t / e, - y: i / e - } - } - function $(t, i, e) { - function r(t, i) { - return 180 * Math.acos(h({ - x: 0, - y: 1 - }, l(t, i))) / Math.PI - } - if (i.x < e.left + e.width && i.y < e.top + e.height && i.x > e.left && i.y > e.top) return i; - var o = t.x - i.x, - n = t.y - i.y, - s = r(o, n); - i.x < t.x && (s = 360 - s); - var _ = 360 - r(e.left - t.x, -1 * (e.top - t.y)), - a = 360 - r(e.left - t.x, -1 * (e.top + e.height - t.y)), - $ = r(e.left + e.width - t.x, -1 * (e.top - t.y)), - u = r(e.left + e.width - t.x, -1 * (e.top + e.height - t.y)), - p = n / o, - f = {}; - if (s < $) { - var c = e.top - t.y, - d = c / p; - f = { - y: t.y + c, - x: t.x + d - } - } else if (s < u) { - var g = e.left + e.width - t.x, - y = g * p; - f = { - y: t.y + y, - x: t.x + g - } - } else if (s < a) { - var m = e.top + e.height - t.y, - T = m / p; - f = { - y: t.y + m, - x: t.x + T - } - } else if (s < _) { - var P = t.x - e.left, - S = P * p; - f = { - y: t.y - S, - x: t.x - P - } - } else { - var v = e.top - t.y, - L = v / p; - f = { - y: t.y + v, - x: t.x + L - } - } - return f - } - function u(t) { - Y = !0; - var i = C.getBoundingClientRect(), - e = P(t.clientX - i.left), - r = S(t.clientY - i.top), - o = $({ - x: i.left + i.width / 2, - y: i.top + i.height * X - }, { - x: t.clientX, - y: t.clientY - }, i), - n = m(o.x - i.left), - s = T(o.y - i.top); - w. - default.DEBUG_MOUSE_LOG && console.log("onMouseMove device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), k = e, V = r, N.setPoint(n, s) - } - function p(t) { - Y = !0; - var i = C.getBoundingClientRect(), - e = P(t.clientX - i.left), - r = S(t.clientY - i.top), - o = $({ - x: i.left + i.width / 2, - y: i.top + i.height * X - }, { - x: t.clientX, - y: t.clientY - }, i), - n = m(o.x - i.left), - s = T(o.y - i.top); - w. - default.DEBUG_MOUSE_LOG && console.log("onMouseDown device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), k = e, V = r, R.tapEvent(n, s) - } - function f(t) { - var i = C.getBoundingClientRect(), - e = P(t.clientX - i.left), - r = S(t.clientY - i.top), - o = $({ - x: i.left + i.width / 2, - y: i.top + i.height * X - }, { - x: t.clientX, - y: t.clientY - }, i), - n = m(o.x - i.left), - s = T(o.y - i.top); - w. - default.DEBUG_MOUSE_LOG && console.log("onMouseMove device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), Y && (k = e, V = r, N.setPoint(n, s)) - } - function c() { - Y && (Y = !1), N.setPoint(0, 0) - } - function d() { - w. - default.DEBUG_LOG && console.log("Set Session Storage."), sessionStorage.setItem("Sleepy", "1") - } - function g(t) { - if ("mousewheel" == t.type); - else if ("mousedown" == t.type) p(t); - else if ("mousemove" == t.type) { - var i = sessionStorage.getItem("Sleepy"); - "1" === i && sessionStorage.setItem("Sleepy", "0"), u(t) - } else if ("mouseup" == t.type) { - if ("button" in t && 0 != t.button) return - } else if ("mouseout" == t.type) { - w. - default.DEBUG_LOG && console.log("Mouse out Window."), c(); - var e = sessionStorage.getItem("SleepyTimer"); - window.clearTimeout(e), e = window.setTimeout(d, 5e4), sessionStorage.setItem("SleepyTimer", e) - } - } - function y(t) { - var i = t.touches[0]; - "touchstart" == t.type ? 1 == t.touches.length && u(i) : "touchmove" == t.type ? 
f(i) : "touchend" == t.type && c() - } - function m(t) { - var i = G.transformX(t); - return B.invertTransformX(i) - } - function T(t) { - var i = G.transformY(t); - return B.invertTransformY(i) - } - function P(t) { - return G.transformX(t) - } - function S(t) { - return G.transformY(t) - } - function v() { - for (var t = ["webgl", "experimental-webgl", "webkit-3d", "moz-webgl"], i = 0; i < t.length; i++) try { - var e = C.getContext(t[i], { - premultipliedAlpha: !0 - }); - if (e) return e - } catch (t) {} - return null - } - function L(t, i, e) { - X = void 0 === e ? .5 : e, o(t), n(i) - } - e(6); - var M = e(0), - E = e(8), - A = r(E), - I = e(1), - w = r(I), - x = e(3), - O = r(x), - D = e(2), - R = (window.navigator.platform.toLowerCase(), new A. - default), - b = !1, - F = null, - C = null, - N = null, - B = null, - U = null, - G = null, - Y = !1, - k = 0, - V = 0, - X = .5; - window.loadlive2d = L -}, function(t, i, e) { - "use strict"; - (function(t) { - ! - function() { - function i() { - At || (this._$MT = null, this._$5S = null, this._$NP = 0, i._$42++, this._$5S = new Y(this)) - } - function e(t) { - if (!At) { - this.clipContextList = new Array, this.glcontext = t.gl, this.dp_webgl = t, this.curFrameNo = 0, this.firstError_clipInNotUpdate = !0, this.colorBuffer = 0, this.isInitGLFBFunc = !1, this.tmpBoundsOnModel = new S, at.glContext.length > at.frameBuffers.length && (this.curFrameNo = this.getMaskRenderTexture()), this.tmpModelToViewMatrix = new R, this.tmpMatrix2 = new R, this.tmpMatrixForMask = new R, this.tmpMatrixForDraw = new R, this.CHANNEL_COLORS = new Array; - var i = new A; - i = new A, i.r = 0, i.g = 0, i.b = 0, i.a = 1, this.CHANNEL_COLORS.push(i), i = new A, i.r = 1, i.g = 0, i.b = 0, i.a = 0, this.CHANNEL_COLORS.push(i), i = new A, i.r = 0, i.g = 1, i.b = 0, i.a = 0, this.CHANNEL_COLORS.push(i), i = new A, i.r = 0, i.g = 0, i.b = 1, i.a = 0, this.CHANNEL_COLORS.push(i); - for (var e = 0; e < this.CHANNEL_COLORS.length; e++) this.dp_webgl.setChannelFlagAsColor(e, this.CHANNEL_COLORS[e]) - } - } - function r(t, i, e) { - this.clipIDList = new Array, this.clipIDList = e, this.clippingMaskDrawIndexList = new Array; - for (var r = 0; r < e.length; r++) this.clippingMaskDrawIndexList.push(i.getDrawDataIndex(e[r])); - this.clippedDrawContextList = new Array, this.isUsing = !0, this.layoutChannelNo = 0, this.layoutBounds = new S, this.allClippedDrawRect = new S, this.matrixForMask = new Float32Array(16), this.matrixForDraw = new Float32Array(16), this.owner = t - } - function o(t, i) { - this._$gP = t, this.drawDataIndex = i - } - function n() { - At || (this.color = null) - } - function s() { - At || (this._$dP = null, this._$eo = null, this._$V0 = null, this._$dP = 1e3, this._$eo = 1e3, this._$V0 = 1, this._$a0()) - } - function _() {} - function a() { - this._$r = null, this._$0S = null - } - function h() { - At || (this.x = null, this.y = null, this.width = null, this.height = null) - } - function l(t) { - At || et.prototype.constructor.call(this, t) - } - function $() {} - function u(t) { - At || et.prototype.constructor.call(this, t) - } - function p() { - At || (this._$vo = null, this._$F2 = null, this._$ao = 400, this._$1S = 400, p._$42++) - } - function f() { - At || (this.p1 = new c, this.p2 = new c, this._$Fo = 0, this._$Db = 0, this._$L2 = 0, this._$M2 = 0, this._$ks = 0, this._$9b = 0, this._$iP = 0, this._$iT = 0, this._$lL = new Array, this._$qP = new Array, this.setup(.3, .5, .1)) - } - function c() { - this._$p = 1, this.x = 0, this.y = 0, this.vx = 0, 
this.vy = 0, this.ax = 0, this.ay = 0, this.fx = 0, this.fy = 0, this._$s0 = 0, this._$70 = 0, this._$7L = 0, this._$HL = 0 - } - function d(t, i, e) { - this._$wL = null, this.scale = null, this._$V0 = null, this._$wL = t, this.scale = i, this._$V0 = e - } - function g(t, i, e, r) { - d.prototype.constructor.call(this, i, e, r), this._$tL = null, this._$tL = t - } - function y(t, i, e) { - this._$wL = null, this.scale = null, this._$V0 = null, this._$wL = t, this.scale = i, this._$V0 = e - } - function T(t, i, e, r) { - y.prototype.constructor.call(this, i, e, r), this._$YP = null, this._$YP = t - } - function P() { - At || (this._$fL = 0, this._$gL = 0, this._$B0 = 1, this._$z0 = 1, this._$qT = 0, this.reflectX = !1, this.reflectY = !1) - } - function S() { - At || (this.x = null, this.y = null, this.width = null, this.height = null) - } - function v() {} - function L() { - At || (this.x = null, this.y = null) - } - function M() { - At || (this._$gP = null, this._$dr = null, this._$GS = null, this._$qb = null, this._$Lb = null, this._$mS = null, this.clipID = null, this.clipIDList = new Array) - } - function E() { - At || (this._$Eb = E._$ps, this._$lT = 1, this._$C0 = 1, this._$tT = 1, this._$WL = 1, this.culling = !1, this.matrix4x4 = new Float32Array(16), this.premultipliedAlpha = !1, this.anisotropy = 0, this.clippingProcess = E.CLIPPING_PROCESS_NONE, this.clipBufPre_clipContextMask = null, this.clipBufPre_clipContextDraw = null, this.CHANNEL_COLORS = new Array) - } - function A() { - At || (this.a = 1, this.r = 1, this.g = 1, this.b = 1, this.scale = 1, this._$ho = 1, this.blendMode = at.L2D_COLOR_BLEND_MODE_MULT) - } - function I() { - At || (this._$kP = null, this._$dr = null, this._$Ai = !0, this._$mS = null) - } - function w() {} - function x() { - At || (this._$VP = 0, this._$wL = null, this._$GP = null, this._$8o = x._$ds, this._$2r = -1, this._$O2 = 0, this._$ri = 0) - } - function O() {} - function D() { - At || (this._$Ob = null) - } - function R() { - this.m = new Float32Array(16), this.identity() - } - function b(t) { - At || et.prototype.constructor.call(this, t) - } - function F() { - At || (this._$7 = 1, this._$f = 0, this._$H = 0, this._$g = 1, this._$k = 0, this._$w = 0, this._$hi = STATE_IDENTITY, this._$Z = _$pS) - } - function C() { - At || (s.prototype.constructor.call(this), this.motions = new Array, this._$7r = null, this._$7r = C._$Co++, this._$D0 = 30, this._$yT = 0, this._$E = !0, this.loopFadeIn = !0, this._$AS = -1, _$a0()) - } - function N() { - this._$P = new Float32Array(100), this.size = 0 - } - function B() { - this._$4P = null, this._$I0 = null, this._$RP = null - } - function U() {} - function G() {} - function Y(t) { - At || (this._$QT = !0, this._$co = -1, this._$qo = 0, this._$pb = new Array(Y._$is), this._$_2 = new Float32Array(Y._$is), this._$vr = new Float32Array(Y._$is), this._$Rr = new Float32Array(Y._$is), this._$Or = new Float32Array(Y._$is), this._$fs = new Float32Array(Y._$is), this._$Js = new Array(Y._$is), this._$3S = new Array, this._$aS = new Array, this._$Bo = null, this._$F2 = new Array, this._$db = new Array, this._$8b = new Array, this._$Hr = new Array, this._$Ws = null, this._$Vs = null, this._$Er = null, this._$Es = new Int16Array(U._$Qb), this._$ZP = new Float32Array(2 * U._$1r), this._$Ri = t, this._$b0 = Y._$HP++, this.clipManager = null, this.dp_webgl = null) - } - function k() {} - function V() { - At || (this._$12 = null, this._$bb = null, this._$_L = null, this._$jo = null, this._$iL = null, this._$0L = null, this._$Br = 
null, this._$Dr = null, this._$Cb = null, this._$mr = null, this._$_L = wt.STATE_FIRST, this._$Br = 4e3, this._$Dr = 100, this._$Cb = 50, this._$mr = 150, this._$jo = !0, this._$iL = "PARAM_EYE_L_OPEN", this._$0L = "PARAM_EYE_R_OPEN") - } - function X() { - At || (E.prototype.constructor.call(this), this._$sb = new Int32Array(X._$As), this._$U2 = new Array, this.transform = null, this.gl = null, null == X._$NT && (X._$NT = X._$9r(256), X._$vS = X._$9r(256), X._$no = X._$vb(256))) - } - function z() { - At || (I.prototype.constructor.call(this), this._$GS = null, this._$Y0 = null) - } - function H(t) { - _t.prototype.constructor.call(this, t), this._$8r = I._$ur, this._$Yr = null, this._$Wr = null - } - function W() { - At || (M.prototype.constructor.call(this), this._$gP = null, this._$dr = null, this._$GS = null, this._$qb = null, this._$Lb = null, this._$mS = null) - } - function j() { - At || (this._$NL = null, this._$3S = null, this._$aS = null, j._$42++) - } - function q() { - At || (i.prototype.constructor.call(this), this._$zo = new X) - } - function J() { - At || (s.prototype.constructor.call(this), this.motions = new Array, this._$o2 = null, this._$7r = J._$Co++, this._$D0 = 30, this._$yT = 0, this._$E = !1, this.loopFadeIn = !0, this._$rr = -1, this._$eP = 0) - } - function Q(t, i) { - return String.fromCharCode(t.getUint8(i)) - } - function N() { - this._$P = new Float32Array(100), this.size = 0 - } - function B() { - this._$4P = null, this._$I0 = null, this._$RP = null - } - function Z() { - At || (I.prototype.constructor.call(this), this._$o = 0, this._$A = 0, this._$GS = null, this._$Eo = null) - } - function K(t) { - _t.prototype.constructor.call(this, t), this._$8r = I._$ur, this._$Cr = null, this._$hr = null - } - function tt() { - At || (this.visible = !0, this._$g0 = !1, this._$NL = null, this._$3S = null, this._$aS = null, tt._$42++) - } - function it(t) { - this._$VS = null, this._$e0 = null, this._$e0 = t - } - function et(t) { - At || (this.id = t) - } - function rt() {} - function ot() { - At || (this._$4S = null) - } - function nt(t, i) { - this.canvas = t, this.context = i, this.viewport = new Array(0, 0, t.width, t.height), this._$6r = 1, this._$xP = 0, this._$3r = 1, this._$uP = 0, this._$Qo = -1, this.cacheImages = {} - } - function st() { - At || (this._$TT = null, this._$LT = null, this._$FS = null, this._$wL = null) - } - function _t(t) { - At || (this._$e0 = null, this._$IP = null, this._$JS = !1, this._$AT = !0, this._$e0 = t, this.totalScale = 1, this._$7s = 1, this.totalOpacity = 1) - } - function at() {} - function ht() {} - function lt(t) { - At || (this._$ib = t) - } - function $t() { - At || (W.prototype.constructor.call(this), this._$LP = -1, this._$d0 = 0, this._$Yo = 0, this._$JP = null, this._$5P = null, this._$BP = null, this._$Eo = null, this._$Qi = null, this._$6s = $t._$ms, this.culling = !0, this.gl_cacheImage = null, this.instanceNo = $t._$42++) - } - function ut(t) { - Mt.prototype.constructor.call(this, t), this._$8r = W._$ur, this._$Cr = null, this._$hr = null - } - function pt() { - At || (this.x = null, this.y = null) - } - function ft(t) { - At || (i.prototype.constructor.call(this), this.drawParamWebGL = new mt(t), this.drawParamWebGL.setGL(at.getGL(t))) - } - function ct() { - At || (this.motions = null, this._$eb = !1, this.motions = new Array) - } - function dt() { - this._$w0 = null, this._$AT = !0, this._$9L = !1, this._$z2 = -1, this._$bs = -1, this._$Do = -1, this._$sr = null, this._$sr = dt._$Gs++ - } - function gt() { - 
this.m = new Array(1, 0, 0, 0, 1, 0, 0, 0, 1) - } - function yt(t) { - At || et.prototype.constructor.call(this, t) - } - function mt(t) { - At || (E.prototype.constructor.call(this), this.textures = new Array, this.transform = null, this.gl = null, this.glno = t, this.firstDraw = !0, this.anisotropyExt = null, this.maxAnisotropy = 0, this._$As = 32, this._$Gr = !1, this._$NT = null, this._$vS = null, this._$no = null, this.vertShader = null, this.fragShader = null, this.vertShaderOff = null, this.fragShaderOff = null) - } - function Tt(t, i, e) { - return null == i && (i = t.createBuffer()), t.bindBuffer(t.ARRAY_BUFFER, i), t.bufferData(t.ARRAY_BUFFER, e, t.DYNAMIC_DRAW), i - } - function Pt(t, i, e) { - return null == i && (i = t.createBuffer()), t.bindBuffer(t.ELEMENT_ARRAY_BUFFER, i), t.bufferData(t.ELEMENT_ARRAY_BUFFER, e, t.DYNAMIC_DRAW), i - } - function St(t) { - At || (this._$P = new Int8Array(8), this._$R0 = new DataView(this._$P.buffer), this._$3i = new Int8Array(1e3), this._$hL = 0, this._$v0 = 0, this._$S2 = 0, this._$Ko = new Array, this._$T = t, this._$F = 0) - } - function vt() {} - function Lt() {} - function Mt(t) { - At || (this._$e0 = null, this._$IP = null, this._$Us = null, this._$7s = null, this._$IS = [!1], this._$VS = null, this._$AT = !0, this.baseOpacity = 1, this.clipBufPre_clipContext = null, this._$e0 = t) - } - function Et() {} - var At = !0; - i._$0s = 1, i._$4s = 2, i._$42 = 0, i._$62 = function(t, e) { - try { - if (e instanceof ArrayBuffer && (e = new DataView(e)), !(e instanceof DataView)) throw new lt("_$SS#loadModel(b) / b _$x be DataView or ArrayBuffer"); - var r, o = new St(e), - n = o._$ST(), - s = o._$ST(), - a = o._$ST(); - if (109 != n || 111 != s || 99 != a) throw new lt("_$gi _$C _$li , _$Q0 _$P0."); - if (r = o._$ST(), o._$gr(r), r > G._$T7) { - t._$NP |= i._$4s; - throw new lt("_$gi _$C _$li , _$n0 _$_ version _$li ( SDK : " + G._$T7 + " < _$f0 : " + r + " )@_$SS#loadModel()\n") - } - var h = o._$nP(); - if (r >= G._$s7) { - var l = o._$9T(), - $ = o._$9T(); - if (-30584 != l || -30584 != $) throw t._$NP |= i._$0s, new lt("_$gi _$C _$li , _$0 _$6 _$Ui.") - } - t._$KS(h); - var u = t.getModelContext(); - u.setDrawParam(t.getDrawParam()), u.init() - } catch (t) { - _._$Rb(t) - } - }, i.prototype._$KS = function(t) { - this._$MT = t - }, i.prototype.getModelImpl = function() { - return null == this._$MT && (this._$MT = new p, this._$MT._$zP()), this._$MT - }, i.prototype.getCanvasWidth = function() { - return null == this._$MT ? 0 : this._$MT.getCanvasWidth() - }, i.prototype.getCanvasHeight = function() { - return null == this._$MT ? 
0 : this._$MT.getCanvasHeight() - }, i.prototype.getParamFloat = function(t) { - return "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), this._$5S.getParamFloat(t) - }, i.prototype.setParamFloat = function(t, i, e) { - "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) * (1 - e) + i * e) - }, i.prototype.addToParamFloat = function(t, i, e) { - "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) + i * e) - }, i.prototype.multParamFloat = function(t, i, e) { - "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) * (1 + (i - 1) * e)) - }, i.prototype.getParamIndex = function(t) { - return this._$5S.getParamIndex(u.getID(t)) - }, i.prototype.loadParam = function() { - this._$5S.loadParam() - }, i.prototype.saveParam = function() { - this._$5S.saveParam() - }, i.prototype.init = function() { - this._$5S.init() - }, i.prototype.update = function() { - this._$5S.update() - }, i.prototype._$Rs = function() { - return _._$li("_$60 _$PT _$Rs()"), -1 - }, i.prototype._$Ds = function(t) { - _._$li("_$60 _$PT _$SS#_$Ds() \n") - }, i.prototype._$K2 = function() {}, i.prototype.draw = function() {}, i.prototype.getModelContext = function() { - return this._$5S - }, i.prototype._$s2 = function() { - return this._$NP - }, i.prototype._$P7 = function(t, i, e, r) { - var o = -1, - n = 0, - s = this; - if (0 != e) if (1 == t.length) { - var _ = t[0], - a = 0 != s.getParamFloat(_), - h = i[0], - l = s.getPartsOpacity(h), - $ = e / r; - a ? (l += $) > 1 && (l = 1) : (l -= $) < 0 && (l = 0), s.setPartsOpacity(h, l) - } else { - for (var u = 0; u < t.length; u++) { - var _ = t[u], - p = 0 != s.getParamFloat(_); - if (p) { - if (o >= 0) break; - o = u; - var h = i[u]; - n = s.getPartsOpacity(h), n += e / r, n > 1 && (n = 1) - } - } - o < 0 && (console.log("No _$wi _$q0/ _$U default[%s]", t[0]), o = 0, n = 1, s.loadParam(), s.setParamFloat(t[o], n), s.saveParam()); - for (var u = 0; u < t.length; u++) { - var h = i[u]; - if (o == u) s.setPartsOpacity(h, n); - else { - var f, c = s.getPartsOpacity(h); - f = n < .5 ? -.5 * n / .5 + 1 : .5 * (1 - n) / .5; - var d = (1 - f) * (1 - n); - d > .15 && (f = 1 - .15 / (1 - n)), c > f && (c = f), s.setPartsOpacity(h, c) - } - } - } else for (var u = 0; u < t.length; u++) { - var _ = t[u], - h = i[u], - p = 0 != s.getParamFloat(_); - s.setPartsOpacity(h, p ? 1 : 0) - } - }, i.prototype.setPartsOpacity = function(t, i) { - "number" != typeof t && (t = this._$5S.getPartsDataIndex(l.getID(t))), this._$5S.setPartsOpacity(t, i) - }, i.prototype.getPartsDataIndex = function(t) { - return t instanceof l || (t = l.getID(t)), this._$5S.getPartsDataIndex(t) - }, i.prototype.getPartsOpacity = function(t) { - return "number" != typeof t && (t = this._$5S.getPartsDataIndex(l.getID(t))), t < 0 ? 0 : this._$5S.getPartsOpacity(t) - }, i.prototype.getDrawParam = function() {}, i.prototype.getDrawDataIndex = function(t) { - return this._$5S.getDrawDataIndex(b.getID(t)) - }, i.prototype.getDrawData = function(t) { - return this._$5S.getDrawData(t) - }, i.prototype.getTransformedPoints = function(t) { - var i = this._$5S._$C2(t); - return i instanceof ut ? 
i.getTransformedPoints() : null - }, i.prototype.getIndexArray = function(t) { - if (t < 0 || t >= this._$5S._$aS.length) return null; - var i = this._$5S._$aS[t]; - return null != i && i.getType() == W._$wb && i instanceof $t ? i.getIndexArray() : null - }, e.CHANNEL_COUNT = 4, e.RENDER_TEXTURE_USE_MIPMAP = !1, e.NOT_USED_FRAME = -100, e.prototype._$L7 = function() { - if (this.tmpModelToViewMatrix && (this.tmpModelToViewMatrix = null), this.tmpMatrix2 && (this.tmpMatrix2 = null), this.tmpMatrixForMask && (this.tmpMatrixForMask = null), this.tmpMatrixForDraw && (this.tmpMatrixForDraw = null), this.tmpBoundsOnModel && (this.tmpBoundsOnModel = null), this.CHANNEL_COLORS) { - for (var t = this.CHANNEL_COLORS.length - 1; t >= 0; --t) this.CHANNEL_COLORS.splice(t, 1); - this.CHANNEL_COLORS = [] - } - this.releaseShader() - }, e.prototype.releaseShader = function() { - for (var t = at.frameBuffers.length, i = 0; i < t; i++) this.gl.deleteFramebuffer(at.frameBuffers[i].framebuffer); - at.frameBuffers = [], at.glContext = [] - }, e.prototype.init = function(t, i, e) { - for (var o = 0; o < i.length; o++) { - var n = i[o].getClipIDList(); - if (null != n) { - var s = this.findSameClip(n); - null == s && (s = new r(this, t, n), this.clipContextList.push(s)); - var _ = i[o].getDrawDataID(), - a = t.getDrawDataIndex(_); - s.addClippedDrawData(_, a); - e[o].clipBufPre_clipContext = s - } - } - }, e.prototype.getMaskRenderTexture = function() { - var t = null; - return t = this.dp_webgl.createFramebuffer(), at.frameBuffers[this.dp_webgl.glno] = t, this.dp_webgl.glno - }, e.prototype.setupClip = function(t, i) { - for (var e = 0, r = 0; r < this.clipContextList.length; r++) { - var o = this.clipContextList[r]; - this.calcClippedDrawTotalBounds(t, o), o.isUsing && e++ - } - if (e > 0) { - var n = i.gl.getParameter(i.gl.FRAMEBUFFER_BINDING), - s = new Array(4); - s[0] = 0, s[1] = 0, s[2] = i.gl.canvas.width, s[3] = i.gl.canvas.height, i.gl.viewport(0, 0, at.clippingMaskBufferSize, at.clippingMaskBufferSize), this.setupLayoutBounds(e), i.gl.bindFramebuffer(i.gl.FRAMEBUFFER, at.frameBuffers[this.curFrameNo].framebuffer), i.gl.clearColor(0, 0, 0, 0), i.gl.clear(i.gl.COLOR_BUFFER_BIT); - for (var r = 0; r < this.clipContextList.length; r++) { - var o = this.clipContextList[r], - _ = o.allClippedDrawRect, - a = (o.layoutChannelNo, o.layoutBounds); - this.tmpBoundsOnModel._$jL(_), this.tmpBoundsOnModel.expand(.05 * _.width, .05 * _.height); - var h = a.width / this.tmpBoundsOnModel.width, - l = a.height / this.tmpBoundsOnModel.height; - this.tmpMatrix2.identity(), this.tmpMatrix2.translate(-1, -1, 0), this.tmpMatrix2.scale(2, 2, 1), this.tmpMatrix2.translate(a.x, a.y, 0), this.tmpMatrix2.scale(h, l, 1), this.tmpMatrix2.translate(-this.tmpBoundsOnModel.x, -this.tmpBoundsOnModel.y, 0), this.tmpMatrixForMask.setMatrix(this.tmpMatrix2.m), this.tmpMatrix2.identity(), this.tmpMatrix2.translate(a.x, a.y, 0), this.tmpMatrix2.scale(h, l, 1), this.tmpMatrix2.translate(-this.tmpBoundsOnModel.x, -this.tmpBoundsOnModel.y, 0), this.tmpMatrixForDraw.setMatrix(this.tmpMatrix2.m); - for (var $ = this.tmpMatrixForMask.getArray(), u = 0; u < 16; u++) o.matrixForMask[u] = $[u]; - for (var p = this.tmpMatrixForDraw.getArray(), u = 0; u < 16; u++) o.matrixForDraw[u] = p[u]; - for (var f = o.clippingMaskDrawIndexList.length, c = 0; c < f; c++) { - var d = o.clippingMaskDrawIndexList[c], - g = t.getDrawData(d), - y = t._$C2(d); - i.setClipBufPre_clipContextForMask(o), g.draw(i, t, y) - } - } - i.gl.bindFramebuffer(i.gl.FRAMEBUFFER, 
n), i.setClipBufPre_clipContextForMask(null), i.gl.viewport(s[0], s[1], s[2], s[3]) - } - }, e.prototype.getColorBuffer = function() { - return this.colorBuffer - }, e.prototype.findSameClip = function(t) { - for (var i = 0; i < this.clipContextList.length; i++) { - var e = this.clipContextList[i], - r = e.clipIDList.length; - if (r == t.length) { - for (var o = 0, n = 0; n < r; n++) for (var s = e.clipIDList[n], _ = 0; _ < r; _++) if (t[_] == s) { - o++; - break - } - if (o == r) return e - } - } - return null - }, e.prototype.calcClippedDrawTotalBounds = function(t, i) { - for (var e = t._$Ri.getModelImpl().getCanvasWidth(), r = t._$Ri.getModelImpl().getCanvasHeight(), o = e > r ? e : r, n = o, s = o, _ = 0, a = 0, h = i.clippedDrawContextList.length, l = 0; l < h; l++) { - var $ = i.clippedDrawContextList[l], - u = $.drawDataIndex, - p = t._$C2(u); - if (p._$yo()) { - for (var f = p.getTransformedPoints(), c = f.length, d = [], g = [], y = 0, m = U._$i2; m < c; m += U._$No) d[y] = f[m], g[y] = f[m + 1], y++; - var T = Math.min.apply(null, d), - P = Math.min.apply(null, g), - S = Math.max.apply(null, d), - v = Math.max.apply(null, g); - T < n && (n = T), P < s && (s = P), S > _ && (_ = S), v > a && (a = v) - } - } - if (n == o) i.allClippedDrawRect.x = 0, i.allClippedDrawRect.y = 0, i.allClippedDrawRect.width = 0, i.allClippedDrawRect.height = 0, i.isUsing = !1; - else { - var L = _ - n, - M = a - s; - i.allClippedDrawRect.x = n, i.allClippedDrawRect.y = s, i.allClippedDrawRect.width = L, i.allClippedDrawRect.height = M, i.isUsing = !0 - } - }, e.prototype.setupLayoutBounds = function(t) { - var i = t / e.CHANNEL_COUNT, - r = t % e.CHANNEL_COUNT; - i = ~~i, r = ~~r; - for (var o = 0, n = 0; n < e.CHANNEL_COUNT; n++) { - var s = i + (n < r ? 1 : 0); - if (0 == s); - else if (1 == s) { - var a = this.clipContextList[o++]; - a.layoutChannelNo = n, a.layoutBounds.x = 0, a.layoutBounds.y = 0, a.layoutBounds.width = 1, a.layoutBounds.height = 1 - } else if (2 == s) for (var h = 0; h < s; h++) { - var l = h % 2, - $ = 0; - l = ~~l; - var a = this.clipContextList[o++]; - a.layoutChannelNo = n, a.layoutBounds.x = .5 * l, a.layoutBounds.y = 0, a.layoutBounds.width = .5, a.layoutBounds.height = 1 - } else if (s <= 4) for (var h = 0; h < s; h++) { - var l = h % 2, - $ = h / 2; - l = ~~l, $ = ~~$; - var a = this.clipContextList[o++]; - a.layoutChannelNo = n, a.layoutBounds.x = .5 * l, a.layoutBounds.y = .5 * $, a.layoutBounds.width = .5, a.layoutBounds.height = .5 - } else if (s <= 9) for (var h = 0; h < s; h++) { - var l = h % 3, - $ = h / 3; - l = ~~l, $ = ~~$; - var a = this.clipContextList[o++]; - a.layoutChannelNo = n, a.layoutBounds.x = l / 3, a.layoutBounds.y = $ / 3, a.layoutBounds.width = 1 / 3, a.layoutBounds.height = 1 / 3 - } else _._$li("_$6 _$0P mask count : %d", s) - } - }, r.prototype.addClippedDrawData = function(t, i) { - var e = new o(t, i); - this.clippedDrawContextList.push(e) - }, s._$JT = function(t, i, e) { - var r = t / i, - o = e / i, - n = o, - s = 1 - (1 - o) * (1 - o), - _ = 1 - (1 - n) * (1 - n), - a = 1 / 3 * (1 - o) * s + (n * (2 / 3) + 1 / 3 * (1 - n)) * (1 - s), - h = (n + 2 / 3 * (1 - n)) * _ + (o * (1 / 3) + 2 / 3 * (1 - o)) * (1 - _), - l = 1 - 3 * h + 3 * a - 0, - $ = 3 * h - 6 * a + 0, - u = 3 * a - 0; - if (r <= 0) return 0; - if (r >= 1) return 1; - var p = r, - f = p * p; - return l * (p * f) + $ * f + u * p + 0 - }, s.prototype._$a0 = function() {}, s.prototype.setFadeIn = function(t) { - this._$dP = t - }, s.prototype.setFadeOut = function(t) { - this._$eo 
= t - }, s.prototype._$pT = function(t) { - this._$V0 = t - }, s.prototype.getFadeOut = function() { - return this._$eo - }, s.prototype._$4T = function() { - return this._$eo - }, s.prototype._$mT = function() { - return this._$V0 - }, s.prototype.getDurationMSec = function() { - return -1 - }, s.prototype.getLoopDurationMSec = function() { - return -1 - }, s.prototype.updateParam = function(t, i) { - if (i._$AT && !i._$9L) { - var e = w.getUserTimeMSec(); - if (i._$z2 < 0) { - i._$z2 = e, i._$bs = e; - var r = this.getDurationMSec(); - i._$Do < 0 && (i._$Do = r <= 0 ? -1 : i._$z2 + r) - } - var o = this._$V0; - o = o * (0 == this._$dP ? 1 : ht._$r2((e - i._$bs) / this._$dP)) * (0 == this._$eo || i._$Do < 0 ? 1 : ht._$r2((i._$Do - e) / this._$eo)), 0 <= o && o <= 1 || console.log("### assert!! ### "), this.updateParamExe(t, e, o, i), i._$Do > 0 && i._$Do < e && (i._$9L = !0) - } - }, s.prototype.updateParamExe = function(t, i, e, r) {}, _._$8s = 0, _._$fT = new Object, _.start = function(t) { - var i = _._$fT[t]; - null == i && (i = new a, i._$r = t, _._$fT[t] = i), i._$0S = w.getSystemTimeMSec() - }, _.dump = function(t) { - var i = _._$fT[t]; - if (null != i) { - var e = w.getSystemTimeMSec(), - r = e - i._$0S; - return console.log(t + " : " + r + "ms"), r - } - return -1 - }, _.end = function(t) { - var i = _._$fT[t]; - if (null != i) { - return w.getSystemTimeMSec() - i._$0S - } - return -1 - }, _._$li = function(t, i) { - console.log("_$li : " + t + "\n", i) - }, _._$Ji = function(t, i) { - console.log(t, i) - }, _._$dL = function(t, i) { - console.log(t, i), console.log("\n") - }, _._$KL = function(t, i) { - for (var e = 0; e < i; e++) e % 16 == 0 && e > 0 ? console.log("\n") : e % 8 == 0 && e > 0 && console.log(" "), console.log("%02X ", 255 & t[e]); - console.log("\n") - }, _._$nr = function(t, i, e) { - console.log("%s\n", t); - for (var r = i.length, o = 0; o < r; ++o) console.log("%5d", i[o]), console.log("%s\n", e), console.log(","); - console.log("\n") - }, _._$Rb = function(t) { - console.log("dump exception : " + t), console.log("stack :: " + t.stack) - }, h.prototype._$8P = function() { - return .5 * (this.x + this.x + this.width) - }, h.prototype._$6P = function() { - return .5 * (this.y + this.y + this.height) - }, h.prototype._$EL = function() { - return this.x + this.width - }, h.prototype._$5T = function() { - return this.y + this.height - }, h.prototype._$jL = function(t, i, e, r) { - this.x = t, this.y = i, this.width = e, this.height = r - }, h.prototype._$jL = function(t) { - this.x = t.x, this.y = t.y, this.width = t.width, this.height = t.height - }, l.prototype = new et, l._$tP = new Object, l._$27 = function() { - l._$tP.clear() - }, l.getID = function(t) { - var i = l._$tP[t]; - return null == i && (i = new l(t), l._$tP[t] = i), i - }, l.prototype._$3s = function() { - return new l - }, u.prototype = new et, u._$tP = new Object, u._$27 = function() { - u._$tP.clear() - }, u.getID = function(t) { - var i = u._$tP[t]; - return null == i && (i = new u(t), u._$tP[t] = i), i - }, u.prototype._$3s = function() { - return new u - }, p._$42 = 0, p.prototype._$zP = function() { - null == this._$vo && (this._$vo = new ot), null == this._$F2 && (this._$F2 = new Array) - }, p.prototype.getCanvasWidth = function() { - return this._$ao - }, p.prototype.getCanvasHeight = function() { - return this._$1S - }, p.prototype._$F0 = function(t) { - this._$vo = t._$nP(), this._$F2 = t._$nP(), this._$ao = t._$6L(), this._$1S = t._$6L() - }, p.prototype._$6S = function(t) { - 
this._$F2.push(t) - }, p.prototype._$Xr = function() { - return this._$F2 - }, p.prototype._$E2 = function() { - return this._$vo - }, f.prototype.setup = function(t, i, e) { - this._$ks = this._$Yb(), this.p2._$xT(), 3 == arguments.length && (this._$Fo = t, this._$L2 = i, this.p1._$p = e, this.p2._$p = e, this.p2.y = t, this.setup()) - }, f.prototype.getPhysicsPoint1 = function() { - return this.p1 - }, f.prototype.getPhysicsPoint2 = function() { - return this.p2 - }, f.prototype._$qr = function() { - return this._$Db - }, f.prototype._$pr = function(t) { - this._$Db = t - }, f.prototype._$5r = function() { - return this._$M2 - }, f.prototype._$Cs = function() { - return this._$9b - }, f.prototype._$Yb = function() { - return -180 * Math.atan2(this.p1.x - this.p2.x, -(this.p1.y - this.p2.y)) / Math.PI - }, f.prototype.addSrcParam = function(t, i, e, r) { - var o = new g(t, i, e, r); - this._$lL.push(o) - }, f.prototype.addTargetParam = function(t, i, e, r) { - var o = new T(t, i, e, r); - this._$qP.push(o) - }, f.prototype.update = function(t, i) { - if (0 == this._$iP) return this._$iP = this._$iT = i, void(this._$Fo = Math.sqrt((this.p1.x - this.p2.x) * (this.p1.x - this.p2.x) + (this.p1.y - this.p2.y) * (this.p1.y - this.p2.y))); - var e = (i - this._$iT) / 1e3; - if (0 != e) { - for (var r = this._$lL.length - 1; r >= 0; --r) { - this._$lL[r]._$oP(t, this) - } - this._$oo(t, e), this._$M2 = this._$Yb(), this._$9b = (this._$M2 - this._$ks) / e, this._$ks = this._$M2 - } - for (var r = this._$qP.length - 1; r >= 0; --r) { - this._$qP[r]._$YS(t, this) - } - this._$iT = i - }, f.prototype._$oo = function(t, i) { - i < .033 && (i = .033); - var e = 1 / i; - this.p1.vx = (this.p1.x - this.p1._$s0) * e, this.p1.vy = (this.p1.y - this.p1._$70) * e, this.p1.ax = (this.p1.vx - this.p1._$7L) * e, this.p1.ay = (this.p1.vy - this.p1._$HL) * e, this.p1.fx = this.p1.ax * this.p1._$p, this.p1.fy = this.p1.ay * this.p1._$p, this.p1._$xT(); - var r, o, n = -Math.atan2(this.p1.y - this.p2.y, this.p1.x - this.p2.x), - s = Math.cos(n), - _ = Math.sin(n), - a = 9.8 * this.p2._$p, - h = this._$Db * Lt._$bS, - l = a * Math.cos(n - h); - r = l * _, o = l * s; - var $ = -this.p1.fx * _ * _, - u = -this.p1.fy * _ * s, - p = -this.p2.vx * this._$L2, - f = -this.p2.vy * this._$L2; - this.p2.fx = r + $ + p, this.p2.fy = o + u + f, this.p2.ax = this.p2.fx / this.p2._$p, this.p2.ay = this.p2.fy / this.p2._$p, this.p2.vx += this.p2.ax * i, this.p2.vy += this.p2.ay * i, this.p2.x += this.p2.vx * i, this.p2.y += this.p2.vy * i; - var c = Math.sqrt((this.p1.x - this.p2.x) * (this.p1.x - this.p2.x) + (this.p1.y - this.p2.y) * (this.p1.y - this.p2.y)); - this.p2.x = this.p1.x + this._$Fo * (this.p2.x - this.p1.x) / c, this.p2.y = this.p1.y + this._$Fo * (this.p2.y - this.p1.y) / c, this.p2.vx = (this.p2.x - this.p2._$s0) * e, this.p2.vy = (this.p2.y - this.p2._$70) * e, this.p2._$xT() - }, c.prototype._$xT = function() { - this._$s0 = this.x, this._$70 = this.y, this._$7L = this.vx, this._$HL = this.vy - }, d.prototype._$oP = function(t, i) {}, g.prototype = new d, g.prototype._$oP = function(t, i) { - var e = this.scale * t.getParamFloat(this._$wL), - r = i.getPhysicsPoint1(); - switch (this._$tL) { - default: - case f.Src.SRC_TO_X: - r.x = r.x + (e - r.x) * this._$V0; - break; - case f.Src.SRC_TO_Y: - r.y = r.y + (e - r.y) * this._$V0; - break; - case f.Src.SRC_TO_G_ANGLE: - var o = i._$qr(); - o += (e - o) * this._$V0, i._$pr(o) - } - }, y.prototype._$YS = function(t, i) {}, T.prototype = new y, T.prototype._$YS = 
function(t, i) { - switch (this._$YP) { - default: - case f.Target.TARGET_FROM_ANGLE: - t.setParamFloat(this._$wL, this.scale * i._$5r(), this._$V0); - break; - case f.Target.TARGET_FROM_ANGLE_V: - t.setParamFloat(this._$wL, this.scale * i._$Cs(), this._$V0) - } - }, f.Src = function() {}, f.Src.SRC_TO_X = "SRC_TO_X", f.Src.SRC_TO_Y = "SRC_TO_Y", f.Src.SRC_TO_G_ANGLE = "SRC_TO_G_ANGLE", f.Target = function() {}, f.Target.TARGET_FROM_ANGLE = "TARGET_FROM_ANGLE", f.Target.TARGET_FROM_ANGLE_V = "TARGET_FROM_ANGLE_V", P.prototype.init = function(t) { - this._$fL = t._$fL, this._$gL = t._$gL, this._$B0 = t._$B0, this._$z0 = t._$z0, this._$qT = t._$qT, this.reflectX = t.reflectX, this.reflectY = t.reflectY - }, P.prototype._$F0 = function(t) { - this._$fL = t._$_T(), this._$gL = t._$_T(), this._$B0 = t._$_T(), this._$z0 = t._$_T(), this._$qT = t._$_T(), t.getFormatVersion() >= G.LIVE2D_FORMAT_VERSION_V2_10_SDK2 && (this.reflectX = t._$po(), this.reflectY = t._$po()) - }, P.prototype._$e = function() {}; - var It = function() {}; - It._$ni = function(t, i, e, r, o, n, s, _, a) { - var h = s * n - _ * o; - if (0 == h) return null; - var l, $ = ((t - e) * n - (i - r) * o) / h; - return l = 0 != o ? (t - e - $ * s) / o : (i - r - $ * _) / n, isNaN(l) && (l = (t - e - $ * s) / o, isNaN(l) && (l = (i - r - $ * _) / n), isNaN(l) && (console.log("a is NaN @UtVector#_$ni() "), console.log("v1x : " + o), console.log("v1x != 0 ? " + (0 != o)))), null == a ? new Array(l, $) : (a[0] = l, a[1] = $, a) - }, S.prototype._$8P = function() { - return this.x + .5 * this.width - }, S.prototype._$6P = function() { - return this.y + .5 * this.height - }, S.prototype._$EL = function() { - return this.x + this.width - }, S.prototype._$5T = function() { - return this.y + this.height - }, S.prototype._$jL = function(t, i, e, r) { - this.x = t, this.y = i, this.width = e, this.height = r - }, S.prototype._$jL = function(t) { - this.x = t.x, this.y = t.y, this.width = t.width, this.height = t.height - }, S.prototype.contains = function(t, i) { - return this.x <= this.x && this.y <= this.y && this.x <= this.x + this.width && this.y <= this.y + this.height - }, S.prototype.expand = function(t, i) { - this.x -= t, this.y -= i, this.width += 2 * t, this.height += 2 * i - }, v._$Z2 = function(t, i, e, r) { - var o = i._$Q2(t, e), - n = t._$vs(), - s = t._$Tr(); - if (i._$zr(n, s, o), o <= 0) return r[n[0]]; - if (1 == o) { - var _ = r[n[0]], - a = r[n[1]], - h = s[0]; - return _ + (a - _) * h | 0 - } - if (2 == o) { - var _ = r[n[0]], - a = r[n[1]], - l = r[n[2]], - $ = r[n[3]], - h = s[0], - u = s[1], - p = _ + (a - _) * h | 0, - f = l + ($ - l) * h | 0; - return p + (f - p) * u | 0 - } - if (3 == o) { - var c = r[n[0]], - d = r[n[1]], - g = r[n[2]], - y = r[n[3]], - m = r[n[4]], - T = r[n[5]], - P = r[n[6]], - S = r[n[7]], - h = s[0], - u = s[1], - v = s[2], - _ = c + (d - c) * h | 0, - a = g + (y - g) * h | 0, - l = m + (T - m) * h | 0, - $ = P + (S - P) * h | 0, - p = _ + (a - _) * u | 0, - f = l + ($ - l) * u | 0; - return p + (f - p) * v | 0 - } - if (4 == o) { - var L = r[n[0]], - M = r[n[1]], - E = r[n[2]], - A = r[n[3]], - I = r[n[4]], - w = r[n[5]], - x = r[n[6]], - O = r[n[7]], - D = r[n[8]], - R = r[n[9]], - b = r[n[10]], - F = r[n[11]], - C = r[n[12]], - N = r[n[13]], - B = r[n[14]], - U = r[n[15]], - h = s[0], - u = s[1], - v = s[2], - G = s[3], - c = L + (M - L) * h | 0, - d = E + (A - E) * h | 0, - g = I + (w - I) * h | 0, - y = x + (O - x) * h | 0, - m = D + (R - D) * h | 0, - T = b + (F - b) * h | 0, - P = C + 
(N - C) * h | 0, - S = B + (U - B) * h | 0, - _ = c + (d - c) * u | 0, - a = g + (y - g) * u | 0, - l = m + (T - m) * u | 0, - $ = P + (S - P) * u | 0, - p = _ + (a - _) * v | 0, - f = l + ($ - l) * v | 0; - return p + (f - p) * G | 0 - } - for (var Y = 1 << o, k = new Float32Array(Y), V = 0; V < Y; V++) { - for (var X = V, z = 1, H = 0; H < o; H++) z *= X % 2 == 0 ? 1 - s[H] : s[H], X /= 2; - k[V] = z - } - for (var W = new Float32Array(Y), j = 0; j < Y; j++) W[j] = r[n[j]]; - for (var q = 0, j = 0; j < Y; j++) q += k[j] * W[j]; - return q + .5 | 0 - }, v._$br = function(t, i, e, r) { - var o = i._$Q2(t, e), - n = t._$vs(), - s = t._$Tr(); - if (i._$zr(n, s, o), o <= 0) return r[n[0]]; - if (1 == o) { - var _ = r[n[0]], - a = r[n[1]], - h = s[0]; - return _ + (a - _) * h - } - if (2 == o) { - var _ = r[n[0]], - a = r[n[1]], - l = r[n[2]], - $ = r[n[3]], - h = s[0], - u = s[1]; - return (1 - u) * (_ + (a - _) * h) + u * (l + ($ - l) * h) - } - if (3 == o) { - var p = r[n[0]], - f = r[n[1]], - c = r[n[2]], - d = r[n[3]], - g = r[n[4]], - y = r[n[5]], - m = r[n[6]], - T = r[n[7]], - h = s[0], - u = s[1], - P = s[2]; - return (1 - P) * ((1 - u) * (p + (f - p) * h) + u * (c + (d - c) * h)) + P * ((1 - u) * (g + (y - g) * h) + u * (m + (T - m) * h)) - } - if (4 == o) { - var S = r[n[0]], - v = r[n[1]], - L = r[n[2]], - M = r[n[3]], - E = r[n[4]], - A = r[n[5]], - I = r[n[6]], - w = r[n[7]], - x = r[n[8]], - O = r[n[9]], - D = r[n[10]], - R = r[n[11]], - b = r[n[12]], - F = r[n[13]], - C = r[n[14]], - N = r[n[15]], - h = s[0], - u = s[1], - P = s[2], - B = s[3]; - return (1 - B) * ((1 - P) * ((1 - u) * (S + (v - S) * h) + u * (L + (M - L) * h)) + P * ((1 - u) * (E + (A - E) * h) + u * (I + (w - I) * h))) + B * ((1 - P) * ((1 - u) * (x + (O - x) * h) + u * (D + (R - D) * h)) + P * ((1 - u) * (b + (F - b) * h) + u * (C + (N - C) * h))) - } - for (var U = 1 << o, G = new Float32Array(U), Y = 0; Y < U; Y++) { - for (var k = Y, V = 1, X = 0; X < o; X++) V *= k % 2 == 0 ? 
1 - s[X] : s[X], k /= 2; - G[Y] = V - } - for (var z = new Float32Array(U), H = 0; H < U; H++) z[H] = r[n[H]]; - for (var W = 0, H = 0; H < U; H++) W += G[H] * z[H]; - return W - }, v._$Vr = function(t, i, e, r, o, n, s, _) { - var a = i._$Q2(t, e), - h = t._$vs(), - l = t._$Tr(); - i._$zr(h, l, a); - var $ = 2 * r, - u = s; - if (a <= 0) { - var p = h[0], - f = o[p]; - if (2 == _ && 0 == s) w._$jT(f, 0, n, 0, $); - else for (var c = 0; c < $;) n[u] = f[c++], n[u + 1] = f[c++], u += _ - } else if (1 == a) for (var f = o[h[0]], d = o[h[1]], g = l[0], y = 1 - g, c = 0; c < $;) n[u] = f[c] * y + d[c] * g, ++c, n[u + 1] = f[c] * y + d[c] * g, ++c, u += _; - else if (2 == a) for (var f = o[h[0]], d = o[h[1]], m = o[h[2]], T = o[h[3]], g = l[0], P = l[1], y = 1 - g, S = 1 - P, v = S * y, L = S * g, M = P * y, E = P * g, c = 0; c < $;) n[u] = v * f[c] + L * d[c] + M * m[c] + E * T[c], ++c, n[u + 1] = v * f[c] + L * d[c] + M * m[c] + E * T[c], ++c, u += _; - else if (3 == a) for (var A = o[h[0]], I = o[h[1]], x = o[h[2]], O = o[h[3]], D = o[h[4]], R = o[h[5]], b = o[h[6]], F = o[h[7]], g = l[0], P = l[1], C = l[2], y = 1 - g, S = 1 - P, N = 1 - C, B = N * S * y, U = N * S * g, G = N * P * y, Y = N * P * g, k = C * S * y, V = C * S * g, X = C * P * y, z = C * P * g, c = 0; c < $;) n[u] = B * A[c] + U * I[c] + G * x[c] + Y * O[c] + k * D[c] + V * R[c] + X * b[c] + z * F[c], ++c, n[u + 1] = B * A[c] + U * I[c] + G * x[c] + Y * O[c] + k * D[c] + V * R[c] + X * b[c] + z * F[c], ++c, u += _; - else if (4 == a) for (var H = o[h[0]], W = o[h[1]], j = o[h[2]], q = o[h[3]], J = o[h[4]], Q = o[h[5]], Z = o[h[6]], K = o[h[7]], tt = o[h[8]], it = o[h[9]], et = o[h[10]], rt = o[h[11]], ot = o[h[12]], nt = o[h[13]], st = o[h[14]], _t = o[h[15]], g = l[0], P = l[1], C = l[2], at = l[3], y = 1 - g, S = 1 - P, N = 1 - C, ht = 1 - at, lt = ht * N * S * y, $t = ht * N * S * g, ut = ht * N * P * y, pt = ht * N * P * g, ft = ht * C * S * y, ct = ht * C * S * g, dt = ht * C * P * y, gt = ht * C * P * g, yt = at * N * S * y, mt = at * N * S * g, Tt = at * N * P * y, Pt = at * N * P * g, St = at * C * S * y, vt = at * C * S * g, Lt = at * C * P * y, Mt = at * C * P * g, c = 0; c < $;) n[u] = lt * H[c] + $t * W[c] + ut * j[c] + pt * q[c] + ft * J[c] + ct * Q[c] + dt * Z[c] + gt * K[c] + yt * tt[c] + mt * it[c] + Tt * et[c] + Pt * rt[c] + St * ot[c] + vt * nt[c] + Lt * st[c] + Mt * _t[c], ++c, n[u + 1] = lt * H[c] + $t * W[c] + ut * j[c] + pt * q[c] + ft * J[c] + ct * Q[c] + dt * Z[c] + gt * K[c] + yt * tt[c] + mt * it[c] + Tt * et[c] + Pt * rt[c] + St * ot[c] + vt * nt[c] + Lt * st[c] + Mt * _t[c], ++c, u += _; - else { - for (var Et = 1 << a, At = new Float32Array(Et), It = 0; It < Et; It++) { - for (var wt = It, xt = 1, Ot = 0; Ot < a; Ot++) xt *= wt % 2 == 0 ? 1 - l[Ot] : l[Ot], wt /= 2; - At[It] = xt - } - for (var Dt = new Float32Array(Et), Rt = 0; Rt < Et; Rt++) Dt[Rt] = o[h[Rt]]; - for (var c = 0; c < $;) { - for (var bt = 0, Ft = 0, Ct = c + 1, Rt = 0; Rt < Et; Rt++) bt += At[Rt] * Dt[Rt][c], Ft += At[Rt] * Dt[Rt][Ct]; - c += 2, n[u] = bt, n[u + 1] = Ft, u += _ - } - } - }, L.prototype._$HT = function(t, i) { - this.x = t, this.y = i - }, L.prototype._$HT = function(t) { - this.x = t.x, this.y = t.y - }, M._$ur = -2, M._$ES = 500, M._$wb = 2, M._$8S = 3, M._$52 = M._$ES, M._$R2 = M._$ES, M._$or = function() { - return M._$52 - }, M._$Pr = function() { - return M._$R2 - }, M.prototype.convertClipIDForV2_11 = function(t) { - var i = []; - return null == t ? null : 0 == t.length ? null : /,/.test(t) ? 
i = t.id.split(",") : (i.push(t.id), i) - }, M.prototype._$F0 = function(t) { - this._$gP = t._$nP(), this._$dr = t._$nP(), this._$GS = t._$nP(), this._$qb = t._$6L(), this._$Lb = t._$cS(), this._$mS = t._$Tb(), t.getFormatVersion() >= G._$T7 ? (this.clipID = t._$nP(), this.clipIDList = this.convertClipIDForV2_11(this.clipID)) : this.clipIDList = [], this._$MS(this._$Lb) - }, M.prototype.getClipIDList = function() { - return this.clipIDList - }, M.prototype.init = function(t) {}, M.prototype._$Nr = function(t, i) { - if (i._$IS[0] = !1, i._$Us = v._$Z2(t, this._$GS, i._$IS, this._$Lb), at._$Zs); - else if (i._$IS[0]) return; - i._$7s = v._$br(t, this._$GS, i._$IS, this._$mS) - }, M.prototype._$2b = function(t, i) {}, M.prototype.getDrawDataID = function() { - return this._$gP - }, M.prototype._$j2 = function(t) { - this._$gP = t - }, M.prototype.getOpacity = function(t, i) { - return i._$7s - }, M.prototype._$zS = function(t, i) { - return i._$Us - }, M.prototype._$MS = function(t) { - for (var i = t.length - 1; i >= 0; --i) { - var e = t[i]; - e < M._$52 ? M._$52 = e : e > M._$R2 && (M._$R2 = e) - } - }, M.prototype.getTargetBaseDataID = function() { - return this._$dr - }, M.prototype._$gs = function(t) { - this._$dr = t - }, M.prototype._$32 = function() { - return null != this._$dr && this._$dr != yt._$2o() - }, M.prototype.preDraw = function(t, i, e) {}, M.prototype.draw = function(t, i, e) {}, M.prototype.getType = function() {}, M.prototype._$B2 = function(t, i, e) {}, E._$ps = 32, E.CLIPPING_PROCESS_NONE = 0, E.CLIPPING_PROCESS_OVERWRITE_ALPHA = 1, E.CLIPPING_PROCESS_MULTIPLY_ALPHA = 2, E.CLIPPING_PROCESS_DRAW = 3, E.CLIPPING_PROCESS_CLEAR_ALPHA = 4, E.prototype.setChannelFlagAsColor = function(t, i) { - this.CHANNEL_COLORS[t] = i - }, E.prototype.getChannelFlagAsColor = function(t) { - return this.CHANNEL_COLORS[t] - }, E.prototype._$ZT = function() {}, E.prototype._$Uo = function(t, i, e, r, o, n, s) {}, E.prototype._$Rs = function() { - return -1 - }, E.prototype._$Ds = function(t) {}, E.prototype.setBaseColor = function(t, i, e, r) { - t < 0 ? t = 0 : t > 1 && (t = 1), i < 0 ? i = 0 : i > 1 && (i = 1), e < 0 ? e = 0 : e > 1 && (e = 1), r < 0 ? 
r = 0 : r > 1 && (r = 1), this._$lT = t, this._$C0 = i, this._$tT = e, this._$WL = r - }, E.prototype._$WP = function(t) { - this.culling = t - }, E.prototype.setMatrix = function(t) { - for (var i = 0; i < 16; i++) this.matrix4x4[i] = t[i] - }, E.prototype._$IT = function() { - return this.matrix4x4 - }, E.prototype.setPremultipliedAlpha = function(t) { - this.premultipliedAlpha = t - }, E.prototype.isPremultipliedAlpha = function() { - return this.premultipliedAlpha - }, E.prototype.setAnisotropy = function(t) { - this.anisotropy = t - }, E.prototype.getAnisotropy = function() { - return this.anisotropy - }, E.prototype.getClippingProcess = function() { - return this.clippingProcess - }, E.prototype.setClippingProcess = function(t) { - this.clippingProcess = t - }, E.prototype.setClipBufPre_clipContextForMask = function(t) { - this.clipBufPre_clipContextMask = t - }, E.prototype.getClipBufPre_clipContextMask = function() { - return this.clipBufPre_clipContextMask - }, E.prototype.setClipBufPre_clipContextForDraw = function(t) { - this.clipBufPre_clipContextDraw = t - }, E.prototype.getClipBufPre_clipContextDraw = function() { - return this.clipBufPre_clipContextDraw - }, I._$ur = -2, I._$c2 = 1, I._$_b = 2, I.prototype._$F0 = function(t) { - this._$kP = t._$nP(), this._$dr = t._$nP() - }, I.prototype.readV2_opacity = function(t) { - t.getFormatVersion() >= G.LIVE2D_FORMAT_VERSION_V2_10_SDK2 && (this._$mS = t._$Tb()) - }, I.prototype.init = function(t) {}, I.prototype._$Nr = function(t, i) {}, I.prototype.interpolateOpacity = function(t, i, e, r) { - null == this._$mS ? e.setInterpolatedOpacity(1) : e.setInterpolatedOpacity(v._$br(t, i, r, this._$mS)) - }, I.prototype._$2b = function(t, i) {}, I.prototype._$nb = function(t, i, e, r, o, n, s) {}, I.prototype.getType = function() {}, I.prototype._$gs = function(t) { - this._$dr = t - }, I.prototype._$a2 = function(t) { - this._$kP = t - }, I.prototype.getTargetBaseDataID = function() { - return this._$dr - }, I.prototype.getBaseDataID = function() { - return this._$kP - }, I.prototype._$32 = function() { - return null != this._$dr && this._$dr != yt._$2o() - }, w._$W2 = 0, w._$CS = w._$W2, w._$Mo = function() { - return !0 - }, w._$XP = function(t) { - try { - for (var i = getTimeMSec(); getTimeMSec() - i < t;); - } catch (t) { - t._$Rb() - } - }, w.getUserTimeMSec = function() { - return w._$CS == w._$W2 ? 
w.getSystemTimeMSec() : w._$CS - }, w.setUserTimeMSec = function(t) { - w._$CS = t - }, w.updateUserTimeMSec = function() { - return w._$CS = w.getSystemTimeMSec() - }, w.getTimeMSec = function() { - return (new Date).getTime() - }, w.getSystemTimeMSec = function() { - return (new Date).getTime() - }, w._$Q = function(t) {}, w._$jT = function(t, i, e, r, o) { - for (var n = 0; n < o; n++) e[r + n] = t[i + n] - }, x._$ds = -2, x.prototype._$F0 = function(t) { - this._$wL = t._$nP(), this._$VP = t._$6L(), this._$GP = t._$nP() - }, x.prototype.getParamIndex = function(t) { - return this._$2r != t && (this._$8o = x._$ds), this._$8o - }, x.prototype._$Pb = function(t, i) { - this._$8o = t, this._$2r = i - }, x.prototype.getParamID = function() { - return this._$wL - }, x.prototype._$yP = function(t) { - this._$wL = t - }, x.prototype._$N2 = function() { - return this._$VP - }, x.prototype._$d2 = function() { - return this._$GP - }, x.prototype._$t2 = function(t, i) { - this._$VP = t, this._$GP = i - }, x.prototype._$Lr = function() { - return this._$O2 - }, x.prototype._$wr = function(t) { - this._$O2 = t - }, x.prototype._$SL = function() { - return this._$ri - }, x.prototype._$AL = function(t) { - this._$ri = t - }, O.startsWith = function(t, i, e) { - var r = i + e.length; - if (r >= t.length) return !1; - for (var o = i; o < r; o++) if (O.getChar(t, o) != e.charAt(o - i)) return !1; - return !0 - }, O.getChar = function(t, i) { - return String.fromCharCode(t.getUint8(i)) - }, O.createString = function(t, i, e) { - for (var r = new ArrayBuffer(2 * e), o = new Uint16Array(r), n = 0; n < e; n++) o[n] = t.getUint8(i + n); - return String.fromCharCode.apply(null, o) - }, O._$LS = function(t, i, e, r) { - t instanceof ArrayBuffer && (t = new DataView(t)); - var o = e, - n = !1, - s = !1, - _ = 0, - a = O.getChar(t, o); - "-" == a && (n = !0, o++); - for (var h = !1; o < i; o++) { - switch (a = O.getChar(t, o)) { - case "0": - _ *= 10; - break; - case "1": - _ = 10 * _ + 1; - break; - case "2": - _ = 10 * _ + 2; - break; - case "3": - _ = 10 * _ + 3; - break; - case "4": - _ = 10 * _ + 4; - break; - case "5": - _ = 10 * _ + 5; - break; - case "6": - _ = 10 * _ + 6; - break; - case "7": - _ = 10 * _ + 7; - break; - case "8": - _ = 10 * _ + 8; - break; - case "9": - _ = 10 * _ + 9; - break; - case ".": - s = !0, o++, h = !0; - break; - default: - h = !0 - } - if (h) break - } - if (s) for (var l = .1, $ = !1; o < i; o++) { - switch (a = O.getChar(t, o)) { - case "0": - break; - case "1": - _ += 1 * l; - break; - case "2": - _ += 2 * l; - break; - case "3": - _ += 3 * l; - break; - case "4": - _ += 4 * l; - break; - case "5": - _ += 5 * l; - break; - case "6": - _ += 6 * l; - break; - case "7": - _ += 7 * l; - break; - case "8": - _ += 8 * l; - break; - case "9": - _ += 9 * l; - break; - default: - $ = !0 - } - if (l *= .1, $) break - } - return n && (_ = -_), r[0] = o, _ - }, D.prototype._$zP = function() { - this._$Ob = new Array - }, D.prototype._$F0 = function(t) { - this._$Ob = t._$nP() - }, D.prototype._$Ur = function(t) { - if (t._$WS()) return !0; - for (var i = t._$v2(), e = this._$Ob.length - 1; e >= 0; --e) { - var r = this._$Ob[e].getParamIndex(i); - if (r == x._$ds && (r = t.getParamIndex(this._$Ob[e].getParamID())), t._$Xb(r)) return !0 - } - return !1 - }, D.prototype._$Q2 = function(t, i) { - for (var e, r, o = this._$Ob.length, n = t._$v2(), s = 0, _ = 0; _ < o; _++) { - var a = this._$Ob[_]; - if (e = a.getParamIndex(n), e == x._$ds && (e = t.getParamIndex(a.getParamID()), a._$Pb(e, 
n)), e < 0) throw new Exception("err 23242 : " + a.getParamID()); - var h = e < 0 ? 0 : t.getParamFloat(e); - r = a._$N2(); - var l, $, u = a._$d2(), - p = -1, - f = 0; - if (r < 1); - else if (1 == r) l = u[0], l - U._$J < h && h < l + U._$J ? (p = 0, f = 0) : (p = 0, i[0] = !0); - else if (l = u[0], h < l - U._$J) p = 0, i[0] = !0; - else if (h < l + U._$J) p = 0; - else { - for (var c = !1, d = 1; d < r; ++d) { - if ($ = u[d], h < $ + U._$J) { - $ - U._$J < h ? p = d : (p = d - 1, f = (h - l) / ($ - l), s++), c = !0; - break - } - l = $ - } - c || (p = r - 1, f = 0, i[0] = !0) - } - a._$wr(p), a._$AL(f) - } - return s - }, D.prototype._$zr = function(t, i, e) { - var r = 1 << e; - r + 1 > U._$Qb && console.log("err 23245\n"); - for (var o = this._$Ob.length, n = 1, s = 1, _ = 0, a = 0; a < r; ++a) t[a] = 0; - for (var h = 0; h < o; ++h) { - var l = this._$Ob[h]; - if (0 == l._$SL()) { - var $ = l._$Lr() * n; - if ($ < 0 && at._$3T) throw new Exception("err 23246"); - for (var a = 0; a < r; ++a) t[a] += $ - } else { - for (var $ = n * l._$Lr(), u = n * (l._$Lr() + 1), a = 0; a < r; ++a) t[a] += (a / s | 0) % 2 == 0 ? $ : u; - i[_++] = l._$SL(), s *= 2 - } - n *= l._$N2() - } - t[r] = 65535, i[_] = -1 - }, D.prototype._$h2 = function(t, i, e) { - for (var r = new Float32Array(i), o = 0; o < i; ++o) r[o] = e[o]; - var n = new x; - n._$yP(t), n._$t2(i, r), this._$Ob.push(n) - }, D.prototype._$J2 = function(t) { - for (var i = t, e = this._$Ob.length, r = 0; r < e; ++r) { - var o = this._$Ob[r], - n = o._$N2(), - s = i % o._$N2(), - _ = o._$d2()[s]; - console.log("%s[%d]=%7.2f / ", o.getParamID(), s, _), i /= n - } - console.log("\n") - }, D.prototype.getParamCount = function() { - return this._$Ob.length - }, D.prototype._$zs = function() { - return this._$Ob - }, R.prototype.identity = function() { - for (var t = 0; t < 16; t++) this.m[t] = t % 5 == 0 ? 1 : 0 - }, R.prototype.getArray = function() { - return this.m - }, R.prototype.getCopyMatrix = function() { - return new Float32Array(this.m) - }, R.prototype.setMatrix = function(t) { - if (null != t && 16 == t.length) for (var i = 0; i < 16; i++) this.m[i] = t[i] - }, R.prototype.mult = function(t, i, e) { - return null == i ? null : (this == i ? this.mult_safe(this.m, t.m, i.m, e) : this.mult_fast(this.m, t.m, i.m, e), i) - }, R.prototype.mult_safe = function(t, i, e, r) { - if (t == e) { - var o = new Array(16); - this.mult_fast(t, i, o, r); - for (var n = 15; n >= 0; --n) e[n] = o[n] - } else this.mult_fast(t, i, e, r) - }, R.prototype.mult_fast = function(t, i, e, r) { - r ? 
(e[0] = t[0] * i[0] + t[4] * i[1] + t[8] * i[2], e[4] = t[0] * i[4] + t[4] * i[5] + t[8] * i[6], e[8] = t[0] * i[8] + t[4] * i[9] + t[8] * i[10], e[12] = t[0] * i[12] + t[4] * i[13] + t[8] * i[14] + t[12], e[1] = t[1] * i[0] + t[5] * i[1] + t[9] * i[2], e[5] = t[1] * i[4] + t[5] * i[5] + t[9] * i[6], e[9] = t[1] * i[8] + t[5] * i[9] + t[9] * i[10], e[13] = t[1] * i[12] + t[5] * i[13] + t[9] * i[14] + t[13], e[2] = t[2] * i[0] + t[6] * i[1] + t[10] * i[2], e[6] = t[2] * i[4] + t[6] * i[5] + t[10] * i[6], e[10] = t[2] * i[8] + t[6] * i[9] + t[10] * i[10], e[14] = t[2] * i[12] + t[6] * i[13] + t[10] * i[14] + t[14], e[3] = e[7] = e[11] = 0, e[15] = 1) : (e[0] = t[0] * i[0] + t[4] * i[1] + t[8] * i[2] + t[12] * i[3], e[4] = t[0] * i[4] + t[4] * i[5] + t[8] * i[6] + t[12] * i[7], e[8] = t[0] * i[8] + t[4] * i[9] + t[8] * i[10] + t[12] * i[11], e[12] = t[0] * i[12] + t[4] * i[13] + t[8] * i[14] + t[12] * i[15], e[1] = t[1] * i[0] + t[5] * i[1] + t[9] * i[2] + t[13] * i[3], e[5] = t[1] * i[4] + t[5] * i[5] + t[9] * i[6] + t[13] * i[7], e[9] = t[1] * i[8] + t[5] * i[9] + t[9] * i[10] + t[13] * i[11], e[13] = t[1] * i[12] + t[5] * i[13] + t[9] * i[14] + t[13] * i[15], e[2] = t[2] * i[0] + t[6] * i[1] + t[10] * i[2] + t[14] * i[3], e[6] = t[2] * i[4] + t[6] * i[5] + t[10] * i[6] + t[14] * i[7], e[10] = t[2] * i[8] + t[6] * i[9] + t[10] * i[10] + t[14] * i[11], e[14] = t[2] * i[12] + t[6] * i[13] + t[10] * i[14] + t[14] * i[15], e[3] = t[3] * i[0] + t[7] * i[1] + t[11] * i[2] + t[15] * i[3], e[7] = t[3] * i[4] + t[7] * i[5] + t[11] * i[6] + t[15] * i[7], e[11] = t[3] * i[8] + t[7] * i[9] + t[11] * i[10] + t[15] * i[11], e[15] = t[3] * i[12] + t[7] * i[13] + t[11] * i[14] + t[15] * i[15]) - }, R.prototype.translate = function(t, i, e) { - this.m[12] = this.m[0] * t + this.m[4] * i + this.m[8] * e + this.m[12], this.m[13] = this.m[1] * t + this.m[5] * i + this.m[9] * e + this.m[13], this.m[14] = this.m[2] * t + this.m[6] * i + this.m[10] * e + this.m[14], this.m[15] = this.m[3] * t + this.m[7] * i + this.m[11] * e + this.m[15] - }, R.prototype.scale = function(t, i, e) { - this.m[0] *= t, this.m[4] *= i, this.m[8] *= e, this.m[1] *= t, this.m[5] *= i, this.m[9] *= e, this.m[2] *= t, this.m[6] *= i, this.m[10] *= e, this.m[3] *= t, this.m[7] *= i, this.m[11] *= e - }, R.prototype.rotateX = function(t) { - var i = Lt.fcos(t), - e = Lt._$9(t), - r = this.m[4]; - this.m[4] = r * i + this.m[8] * e, this.m[8] = r * -e + this.m[8] * i, r = this.m[5], this.m[5] = r * i + this.m[9] * e, this.m[9] = r * -e + this.m[9] * i, r = this.m[6], this.m[6] = r * i + this.m[10] * e, this.m[10] = r * -e + this.m[10] * i, r = this.m[7], this.m[7] = r * i + this.m[11] * e, this.m[11] = r * -e + this.m[11] * i - }, R.prototype.rotateY = function(t) { - var i = Lt.fcos(t), - e = Lt._$9(t), - r = this.m[0]; - this.m[0] = r * i + this.m[8] * -e, this.m[8] = r * e + this.m[8] * i, r = this.m[1], this.m[1] = r * i + this.m[9] * -e, this.m[9] = r * e + this.m[9] * i, r = m[2], this.m[2] = r * i + this.m[10] * -e, this.m[10] = r * e + this.m[10] * i, r = m[3], this.m[3] = r * i + this.m[11] * -e, this.m[11] = r * e + this.m[11] * i - }, R.prototype.rotateZ = function(t) { - var i = Lt.fcos(t), - e = Lt._$9(t), - r = this.m[0]; - this.m[0] = r * i + this.m[4] * e, this.m[4] = r * -e + this.m[4] * i, r = this.m[1], this.m[1] = r * i + this.m[5] * e, this.m[5] = r * -e + this.m[5] * i, r = this.m[2], this.m[2] = r * i + this.m[6] * e, this.m[6] = r * -e + this.m[6] * i, r = this.m[3], this.m[3] = r * i + this.m[7] * e, this.m[7] = r * 
-e + this.m[7] * i - }, b.prototype = new et, b._$tP = new Object, b._$27 = function() { - b._$tP.clear() - }, b.getID = function(t) { - var i = b._$tP[t]; - return null == i && (i = new b(t), b._$tP[t] = i), i - }, b.prototype._$3s = function() { - return new b - }, F._$kS = -1, F._$pS = 0, F._$hb = 1, F.STATE_IDENTITY = 0, F._$gb = 1, F._$fo = 2, F._$go = 4, F.prototype.transform = function(t, i, e) { - var r, o, n, s, _, a, h = 0, - l = 0; - switch (this._$hi) { - default: - return; - case F._$go | F._$fo | F._$gb: - for (r = this._$7, o = this._$H, n = this._$k, s = this._$f, _ = this._$g, a = this._$w; --e >= 0;) { - var $ = t[h++], - u = t[h++]; - i[l++] = r * $ + o * u + n, i[l++] = s * $ + _ * u + a - } - return; - case F._$go | F._$fo: - for (r = this._$7, o = this._$H, s = this._$f, _ = this._$g; --e >= 0;) { - var $ = t[h++], - u = t[h++]; - i[l++] = r * $ + o * u, i[l++] = s * $ + _ * u - } - return; - case F._$go | F._$gb: - for (o = this._$H, n = this._$k, s = this._$f, a = this._$w; --e >= 0;) { - var $ = t[h++]; - i[l++] = o * t[h++] + n, i[l++] = s * $ + a - } - return; - case F._$go: - for (o = this._$H, s = this._$f; --e >= 0;) { - var $ = t[h++]; - i[l++] = o * t[h++], i[l++] = s * $ - } - return; - case F._$fo | F._$gb: - for (r = this._$7, n = this._$k, _ = this._$g, a = this._$w; --e >= 0;) i[l++] = r * t[h++] + n, i[l++] = _ * t[h++] + a; - return; - case F._$fo: - for (r = this._$7, _ = this._$g; --e >= 0;) i[l++] = r * t[h++], i[l++] = _ * t[h++]; - return; - case F._$gb: - for (n = this._$k, a = this._$w; --e >= 0;) i[l++] = t[h++] + n, i[l++] = t[h++] + a; - return; - case F.STATE_IDENTITY: - return void(t == i && h == l || w._$jT(t, h, i, l, 2 * e)) - } - }, F.prototype.update = function() { - 0 == this._$H && 0 == this._$f ? 1 == this._$7 && 1 == this._$g ? 0 == this._$k && 0 == this._$w ? (this._$hi = F.STATE_IDENTITY, this._$Z = F._$pS) : (this._$hi = F._$gb, this._$Z = F._$hb) : 0 == this._$k && 0 == this._$w ? (this._$hi = F._$fo, this._$Z = F._$kS) : (this._$hi = F._$fo | F._$gb, this._$Z = F._$kS) : 0 == this._$7 && 0 == this._$g ? 0 == this._$k && 0 == this._$w ? (this._$hi = F._$go, this._$Z = F._$kS) : (this._$hi = F._$go | F._$gb, this._$Z = F._$kS) : 0 == this._$k && 0 == this._$w ? (this._$hi = F._$go | F._$fo, this._$Z = F._$kS) : (this._$hi = F._$go | F._$fo | F._$gb, this._$Z = F._$kS) - }, F.prototype._$RT = function(t) { - this._$IT(t); - var i = t[0], - e = t[2], - r = t[1], - o = t[3], - n = Math.sqrt(i * i + r * r), - s = i * o - e * r; - 0 == n ? 
at._$so && console.log("affine._$RT() / rt==0") : (t[0] = n, t[1] = s / n, t[2] = (r * o + i * e) / s, t[3] = Math.atan2(r, i)) - }, F.prototype._$ho = function(t, i, e, r) { - var o = new Float32Array(6), - n = new Float32Array(6); - t._$RT(o), i._$RT(n); - var s = new Float32Array(6); - s[0] = o[0] + (n[0] - o[0]) * e, s[1] = o[1] + (n[1] - o[1]) * e, s[2] = o[2] + (n[2] - o[2]) * e, s[3] = o[3] + (n[3] - o[3]) * e, s[4] = o[4] + (n[4] - o[4]) * e, s[5] = o[5] + (n[5] - o[5]) * e, r._$CT(s) - }, F.prototype._$CT = function(t) { - var i = Math.cos(t[3]), - e = Math.sin(t[3]); - this._$7 = t[0] * i, this._$f = t[0] * e, this._$H = t[1] * (t[2] * i - e), this._$g = t[1] * (t[2] * e + i), this._$k = t[4], this._$w = t[5], this.update() - }, F.prototype._$IT = function(t) { - t[0] = this._$7, t[1] = this._$f, t[2] = this._$H, t[3] = this._$g, t[4] = this._$k, t[5] = this._$w - }, C.prototype = new s, C._$cs = "VISIBLE:", C._$ar = "LAYOUT:", C._$Co = 0, C._$D2 = [], C._$1T = 1, C.loadMotion = function(t) { - var i = new C, - e = [0], - r = t.length; - i._$yT = 0; - for (var o = 0; o < r; ++o) { - var n = 255 & t[o]; - if ("\n" != n && "\r" != n) if ("#" != n) if ("$" != n) { - if ("a" <= n && n <= "z" || "A" <= n && n <= "Z" || "_" == n) { - for (var s = o, _ = -1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("=" == n) { - _ = o; - break - } - if (_ >= 0) { - var a = new B; - O.startsWith(t, s, C._$cs) ? (a._$RP = B._$hs, a._$4P = new String(t, s, _ - s)) : O.startsWith(t, s, C._$ar) ? (a._$4P = new String(t, s + 7, _ - s - 7), O.startsWith(t, s + 7, "ANCHOR_X") ? a._$RP = B._$xs : O.startsWith(t, s + 7, "ANCHOR_Y") ? a._$RP = B._$us : O.startsWith(t, s + 7, "SCALE_X") ? a._$RP = B._$qs : O.startsWith(t, s + 7, "SCALE_Y") ? a._$RP = B._$Ys : O.startsWith(t, s + 7, "X") ? a._$RP = B._$ws : O.startsWith(t, s + 7, "Y") && (a._$RP = B._$Ns)) : (a._$RP = B._$Fr, a._$4P = new String(t, s, _ - s)), i.motions.push(a); - var h = 0; - for (C._$D2.clear(), o = _ + 1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) { - var l = O._$LS(t, r, o, e); - if (e[0] > 0) { - C._$D2.push(l), h++; - var $ = e[0]; - if ($ < o) { - console.log("_$n0 _$hi . @Live2DMotion loadMotion()\n"); - break - } - o = $ - } - } - a._$I0 = C._$D2._$BL(), h > i._$yT && (i._$yT = h) - } - } - } else { - for (var s = o, _ = -1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("=" == n) { - _ = o; - break - } - var u = !1; - if (_ >= 0) for (_ == s + 4 && "f" == t[s + 1] && "p" == t[s + 2] && "s" == t[s + 3] && (u = !0), o = _ + 1; o < r && ("\r" != (n = 255 & t[o]) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) { - var l = O._$LS(t, r, o, e); - e[0] > 0 && u && 5 < l && l < 121 && (i._$D0 = l), o = e[0] - } - for (; o < r && ("\n" != t[o] && "\r" != t[o]); ++o); - } else for (; o < r && ("\n" != t[o] && "\r" != t[o]); ++o); - } - return i._$AS = 1e3 * i._$yT / i._$D0 | 0, i - }, C.prototype.getDurationMSec = function() { - return this._$AS - }, C.prototype.dump = function() { - for (var t = 0; t < this.motions.length; t++) { - var i = this.motions[t]; - console.log("_$wL[%s] [%d]. 
", i._$4P, i._$I0.length); - for (var e = 0; e < i._$I0.length && e < 10; e++) console.log("%5.2f ,", i._$I0[e]); - console.log("\n") - } - }, C.prototype.updateParamExe = function(t, i, e, r) { - for (var o = i - r._$z2, n = o * this._$D0 / 1e3, s = 0 | n, _ = n - s, a = 0; a < this.motions.length; a++) { - var h = this.motions[a], - l = h._$I0.length, - $ = h._$4P; - if (h._$RP == B._$hs) { - var u = h._$I0[s >= l ? l - 1 : s]; - t.setParamFloat($, u) - } else if (B._$ws <= h._$RP && h._$RP <= B._$Ys); - else { - var p = t.getParamFloat($), - f = h._$I0[s >= l ? l - 1 : s], - c = h._$I0[s + 1 >= l ? l - 1 : s + 1], - d = f + (c - f) * _, - g = p + (d - p) * e; - t.setParamFloat($, g) - } - } - s >= this._$yT && (this._$E ? (r._$z2 = i, this.loopFadeIn && (r._$bs = i)) : r._$9L = !0) - }, C.prototype._$r0 = function() { - return this._$E - }, C.prototype._$aL = function(t) { - this._$E = t - }, C.prototype.isLoopFadeIn = function() { - return this.loopFadeIn - }, C.prototype.setLoopFadeIn = function(t) { - this.loopFadeIn = t - }, N.prototype.clear = function() { - this.size = 0 - }, N.prototype.add = function(t) { - if (this._$P.length <= this.size) { - var i = new Float32Array(2 * this.size); - w._$jT(this._$P, 0, i, 0, this.size), this._$P = i - } - this._$P[this.size++] = t - }, N.prototype._$BL = function() { - var t = new Float32Array(this.size); - return w._$jT(this._$P, 0, t, 0, this.size), t - }, B._$Fr = 0, B._$hs = 1, B._$ws = 100, B._$Ns = 101, B._$xs = 102, B._$us = 103, B._$qs = 104, B._$Ys = 105, U._$Ms = 1, U._$Qs = 2, U._$i2 = 0, U._$No = 2, U._$do = U._$Ms, U._$Ls = !0, U._$1r = 5, U._$Qb = 65, U._$J = 1e-4, U._$FT = .001, U._$Ss = 3, G._$o7 = 6, G._$S7 = 7, G._$s7 = 8, G._$77 = 9, G.LIVE2D_FORMAT_VERSION_V2_10_SDK2 = 10, G.LIVE2D_FORMAT_VERSION_V2_11_SDK2_1 = 11, G._$T7 = G.LIVE2D_FORMAT_VERSION_V2_11_SDK2_1, G._$Is = -2004318072, G._$h0 = 0, G._$4L = 23, G._$7P = 33, G._$uT = function(t) { - console.log("_$bo :: _$6 _$mo _$E0 : %d\n", t) - }, G._$9o = function(t) { - if (t < 40) return G._$uT(t), null; - if (t < 50) return G._$uT(t), null; - if (t < 60) return G._$uT(t), null; - if (t < 100) switch (t) { - case 65: - return new Z; - case 66: - return new D; - case 67: - return new x; - case 68: - return new z; - case 69: - return new P; - case 70: - return new $t; - default: - return G._$uT(t), null - } else if (t < 150) switch (t) { - case 131: - return new st; - case 133: - return new tt; - case 136: - return new p; - case 137: - return new ot; - case 142: - return new j - } - return G._$uT(t), null - }, Y._$HP = 0, Y._$_0 = !0; - Y._$V2 = -1, Y._$W0 = -1, Y._$jr = !1, Y._$ZS = !0, Y._$tr = -1e6, Y._$lr = 1e6, Y._$is = 32, Y._$e = !1, Y.prototype.getDrawDataIndex = function(t) { - for (var i = this._$aS.length - 1; i >= 0; --i) if (null != this._$aS[i] && this._$aS[i].getDrawDataID() == t) return i; - return -1 - }, Y.prototype.getDrawData = function(t) { - if (t instanceof b) { - if (null == this._$Bo) { - this._$Bo = new Object; - for (var i = this._$aS.length, e = 0; e < i; e++) { - var r = this._$aS[e], - o = r.getDrawDataID(); - null != o && (this._$Bo[o] = r) - } - } - return this._$Bo[id] - } - return t < this._$aS.length ? 
this._$aS[t] : null - }, Y.prototype.release = function() { - this._$3S.clear(), this._$aS.clear(), this._$F2.clear(), null != this._$Bo && this._$Bo.clear(), this._$db.clear(), this._$8b.clear(), this._$Hr.clear() - }, Y.prototype.init = function() { - this._$co++, this._$F2.length > 0 && this.release(); - for (var t = this._$Ri.getModelImpl(), i = t._$Xr(), r = i.length, o = new Array, n = new Array, s = 0; s < r; ++s) { - var _ = i[s]; - this._$F2.push(_), this._$Hr.push(_.init(this)); - for (var a = _.getBaseData(), h = a.length, l = 0; l < h; ++l) o.push(a[l]); - for (var l = 0; l < h; ++l) { - var $ = a[l].init(this); - $._$l2(s), n.push($) - } - for (var u = _.getDrawData(), p = u.length, l = 0; l < p; ++l) { - var f = u[l], - c = f.init(this); - c._$IP = s, this._$aS.push(f), this._$8b.push(c) - } - } - for (var d = o.length, g = yt._$2o();;) { - for (var y = !1, s = 0; s < d; ++s) { - var m = o[s]; - if (null != m) { - var T = m.getTargetBaseDataID(); - (null == T || T == g || this.getBaseDataIndex(T) >= 0) && (this._$3S.push(m), this._$db.push(n[s]), o[s] = null, y = !0) - } - } - if (!y) break - } - var P = t._$E2(); - if (null != P) { - var S = P._$1s(); - if (null != S) for (var v = S.length, s = 0; s < v; ++s) { - var L = S[s]; - null != L && this._$02(L.getParamID(), L.getDefaultValue(), L.getMinValue(), L.getMaxValue()) - } - } - this.clipManager = new e(this.dp_webgl), this.clipManager.init(this, this._$aS, this._$8b), this._$QT = !0 - }, Y.prototype.update = function() { - Y._$e && _.start("_$zL"); - for (var t = this._$_2.length, i = 0; i < t; i++) this._$_2[i] != this._$vr[i] && (this._$Js[i] = Y._$ZS, this._$vr[i] = this._$_2[i]); - var e = this._$3S.length, - r = this._$aS.length, - o = W._$or(), - n = W._$Pr(), - s = n - o + 1; - (null == this._$Ws || this._$Ws.length < s) && (this._$Ws = new Int16Array(s), this._$Vs = new Int16Array(s)); - for (var i = 0; i < s; i++) this._$Ws[i] = Y._$V2, this._$Vs[i] = Y._$V2; - (null == this._$Er || this._$Er.length < r) && (this._$Er = new Int16Array(r)); - for (var i = 0; i < r; i++) this._$Er[i] = Y._$W0; - Y._$e && _.dump("_$zL"), Y._$e && _.start("_$UL"); - for (var a = null, h = 0; h < e; ++h) { - var l = this._$3S[h], - $ = this._$db[h]; - try { - l._$Nr(this, $), l._$2b(this, $) - } catch (t) { - null == a && (a = t) - } - } - null != a && Y._$_0 && _._$Rb(a), Y._$e && _.dump("_$UL"), Y._$e && _.start("_$DL"); - for (var u = null, p = 0; p < r; ++p) { - var f = this._$aS[p], - c = this._$8b[p]; - try { - if (f._$Nr(this, c), c._$u2()) continue; - f._$2b(this, c); - var d, g = Math.floor(f._$zS(this, c) - o); - try { - d = this._$Vs[g] - } catch (t) { - console.log("_$li :: %s / %s \t\t\t\t@@_$fS\n", t.toString(), f.getDrawDataID().toString()), g = Math.floor(f._$zS(this, c) - o); - continue - } - d == Y._$V2 ? 
this._$Ws[g] = p : this._$Er[d] = p, this._$Vs[g] = p - } catch (t) { - null == u && (u = t, at._$sT(at._$H7)) - } - } - null != u && Y._$_0 && _._$Rb(u), Y._$e && _.dump("_$DL"), Y._$e && _.start("_$eL"); - for (var i = this._$Js.length - 1; i >= 0; i--) this._$Js[i] = Y._$jr; - return this._$QT = !1, Y._$e && _.dump("_$eL"), !1 - }, Y.prototype.preDraw = function(t) { - null != this.clipManager && (t._$ZT(), this.clipManager.setupClip(this, t)) - }, Y.prototype.draw = function(t) { - if (null == this._$Ws) return void _._$li("call _$Ri.update() before _$Ri.draw() "); - var i = this._$Ws.length; - t._$ZT(); - for (var e = 0; e < i; ++e) { - var r = this._$Ws[e]; - if (r != Y._$V2) for (;;) { - var o = this._$aS[r], - n = this._$8b[r]; - if (n._$yo()) { - var s = n._$IP, - a = this._$Hr[s]; - n._$VS = a.getPartsOpacity(), o.draw(t, this, n) - } - var h = this._$Er[r]; - if (h <= r || h == Y._$W0) break; - r = h - } - } - }, Y.prototype.getParamIndex = function(t) { - for (var i = this._$pb.length - 1; i >= 0; --i) if (this._$pb[i] == t) return i; - return this._$02(t, 0, Y._$tr, Y._$lr) - }, Y.prototype._$BS = function(t) { - return this.getBaseDataIndex(t) - }, Y.prototype.getBaseDataIndex = function(t) { - for (var i = this._$3S.length - 1; i >= 0; --i) if (null != this._$3S[i] && this._$3S[i].getBaseDataID() == t) return i; - return -1 - }, Y.prototype._$UT = function(t, i) { - var e = new Float32Array(i); - return w._$jT(t, 0, e, 0, t.length), e - }, Y.prototype._$02 = function(t, i, e, r) { - if (this._$qo >= this._$pb.length) { - var o = this._$pb.length, - n = new Array(2 * o); - w._$jT(this._$pb, 0, n, 0, o), this._$pb = n, this._$_2 = this._$UT(this._$_2, 2 * o), this._$vr = this._$UT(this._$vr, 2 * o), this._$Rr = this._$UT(this._$Rr, 2 * o), this._$Or = this._$UT(this._$Or, 2 * o); - var s = new Array; - w._$jT(this._$Js, 0, s, 0, o), this._$Js = s - } - return this._$pb[this._$qo] = t, this._$_2[this._$qo] = i, this._$vr[this._$qo] = i, this._$Rr[this._$qo] = e, this._$Or[this._$qo] = r, this._$Js[this._$qo] = Y._$ZS, this._$qo++ - }, Y.prototype._$Zo = function(t, i) { - this._$3S[t] = i - }, Y.prototype.setParamFloat = function(t, i) { - i < this._$Rr[t] && (i = this._$Rr[t]), i > this._$Or[t] && (i = this._$Or[t]), this._$_2[t] = i - }, Y.prototype.loadParam = function() { - var t = this._$_2.length; - t > this._$fs.length && (t = this._$fs.length), w._$jT(this._$fs, 0, this._$_2, 0, t) - }, Y.prototype.saveParam = function() { - var t = this._$_2.length; - t > this._$fs.length && (this._$fs = new Float32Array(t)), w._$jT(this._$_2, 0, this._$fs, 0, t) - }, Y.prototype._$v2 = function() { - return this._$co - }, Y.prototype._$WS = function() { - return this._$QT - }, Y.prototype._$Xb = function(t) { - return this._$Js[t] == Y._$ZS - }, Y.prototype._$vs = function() { - return this._$Es - }, Y.prototype._$Tr = function() { - return this._$ZP - }, Y.prototype.getBaseData = function(t) { - return this._$3S[t] - }, Y.prototype.getParamFloat = function(t) { - return this._$_2[t] - }, Y.prototype.getParamMax = function(t) { - return this._$Or[t] - }, Y.prototype.getParamMin = function(t) { - return this._$Rr[t] - }, Y.prototype.setPartsOpacity = function(t, i) { - this._$Hr[t].setPartsOpacity(i) - }, Y.prototype.getPartsOpacity = function(t) { - return this._$Hr[t].getPartsOpacity() - }, Y.prototype.getPartsDataIndex = function(t) { - for (var i = this._$F2.length - 1; i >= 0; --i) if (null != this._$F2[i] && this._$F2[i]._$p2() == t) return i; - return -1 - }, Y.prototype._$q2 = 
function(t) { - return this._$db[t] - }, Y.prototype._$C2 = function(t) { - return this._$8b[t] - }, Y.prototype._$Bb = function(t) { - return this._$Hr[t] - }, Y.prototype._$5s = function(t, i) { - for (var e = this._$Ws.length, r = t, o = 0; o < e; ++o) { - var n = this._$Ws[o]; - if (n != Y._$V2) for (;;) { - var s = this._$8b[n]; - s._$yo() && (s._$GT()._$B2(this, s, r), r += i); - var _ = this._$Er[n]; - if (_ <= n || _ == Y._$W0) break; - n = _ - } - } - }, Y.prototype.setDrawParam = function(t) { - this.dp_webgl = t - }, Y.prototype.getDrawParam = function() { - return this.dp_webgl - }, k._$0T = function(t) { - return k._$0T(new _$5(t)) - }, k._$0T = function(t) { - if (!t.exists()) throw new _$ls(t._$3b()); - for (var i, e = t.length(), r = new Int8Array(e), o = new _$Xs(new _$kb(t), 8192), n = 0; - (i = o.read(r, n, e - n)) > 0;) n += i; - return r - }, k._$C = function(t) { - var i = null, - e = null; - try { - i = t instanceof Array ? t : new _$Xs(t, 8192), e = new _$js; - for (var r, o = new Int8Array(1e3); - (r = i.read(o)) > 0;) e.write(o, 0, r); - return e._$TS() - } finally { - null != t && t.close(), null != e && (e.flush(), e.close()) - } - }, V.prototype._$T2 = function() { - return w.getUserTimeMSec() + Math._$10() * (2 * this._$Br - 1) - }, V.prototype._$uo = function(t) { - this._$Br = t - }, V.prototype._$QS = function(t, i, e) { - this._$Dr = t, this._$Cb = i, this._$mr = e - }, V.prototype._$7T = function(t) { - var i, e = w.getUserTimeMSec(), - r = 0; - switch (this._$_L) { - case STATE_CLOSING: - r = (e - this._$bb) / this._$Dr, r >= 1 && (r = 1, this._$_L = wt.STATE_CLOSED, this._$bb = e), i = 1 - r; - break; - case STATE_CLOSED: - r = (e - this._$bb) / this._$Cb, r >= 1 && (this._$_L = wt.STATE_OPENING, this._$bb = e), i = 0; - break; - case STATE_OPENING: - r = (e - this._$bb) / this._$mr, r >= 1 && (r = 1, this._$_L = wt.STATE_INTERVAL, this._$12 = this._$T2()), i = r; - break; - case STATE_INTERVAL: - this._$12 < e && (this._$_L = wt.STATE_CLOSING, this._$bb = e), i = 1; - break; - case STATE_FIRST: - default: - this._$_L = wt.STATE_INTERVAL, this._$12 = this._$T2(), i = 1 - } - this._$jo || (i = -i), t.setParamFloat(this._$iL, i), t.setParamFloat(this._$0L, i) - }; - var wt = function() {}; - wt.STATE_FIRST = "STATE_FIRST", wt.STATE_INTERVAL = "STATE_INTERVAL", wt.STATE_CLOSING = "STATE_CLOSING", wt.STATE_CLOSED = "STATE_CLOSED", wt.STATE_OPENING = "STATE_OPENING", X.prototype = new E, X._$As = 32, X._$Gr = !1, X._$NT = null, X._$vS = null, X._$no = null, X._$9r = function(t) { - return new Float32Array(t) - }, X._$vb = function(t) { - return new Int16Array(t) - }, X._$cr = function(t, i) { - return null == t || t._$yL() < i.length ? (t = X._$9r(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t - }, X._$mb = function(t, i) { - return null == t || t._$yL() < i.length ? (t = X._$vb(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t - }, X._$Hs = function() { - return X._$Gr - }, X._$as = function(t) { - X._$Gr = t - }, X.prototype.setGL = function(t) { - this.gl = t - }, X.prototype.setTransform = function(t) { - this.transform = t - }, X.prototype._$ZT = function() {}, X.prototype._$Uo = function(t, i, e, r, o, n, s, _) { - if (!(n < .01)) { - var a = this._$U2[t], - h = n > .9 ? 
at.EXPAND_W : 0; - this.gl.drawElements(a, e, r, o, n, h, this.transform, _) - } - }, X.prototype._$Rs = function() { - throw new Error("_$Rs") - }, X.prototype._$Ds = function(t) { - throw new Error("_$Ds") - }, X.prototype._$K2 = function() { - for (var t = 0; t < this._$sb.length; t++) { - 0 != this._$sb[t] && (this.gl._$Sr(1, this._$sb, t), this._$sb[t] = 0) - } - }, X.prototype.setTexture = function(t, i) { - this._$sb.length < t + 1 && this._$nS(t), this._$sb[t] = i - }, X.prototype.setTexture = function(t, i) { - this._$sb.length < t + 1 && this._$nS(t), this._$U2[t] = i - }, X.prototype._$nS = function(t) { - var i = Math.max(2 * this._$sb.length, t + 1 + 10), - e = new Int32Array(i); - w._$jT(this._$sb, 0, e, 0, this._$sb.length), this._$sb = e; - var r = new Array; - w._$jT(this._$U2, 0, r, 0, this._$U2.length), this._$U2 = r - }, z.prototype = new I, z._$Xo = new Float32Array(2), z._$io = new Float32Array(2), z._$0o = new Float32Array(2), z._$Lo = new Float32Array(2), z._$To = new Float32Array(2), z._$Po = new Float32Array(2), z._$gT = new Array, z.prototype._$zP = function() { - this._$GS = new D, this._$GS._$zP(), this._$Y0 = new Array - }, z.prototype.getType = function() { - return I._$c2 - }, z.prototype._$F0 = function(t) { - I.prototype._$F0.call(this, t), this._$GS = t._$nP(), this._$Y0 = t._$nP(), I.prototype.readV2_opacity.call(this, t) - }, z.prototype.init = function(t) { - var i = new H(this); - return i._$Yr = new P, this._$32() && (i._$Wr = new P), i - }, z.prototype._$Nr = function(t, i) { - this != i._$GT() && console.log("### assert!! ### "); - var e = i; - if (this._$GS._$Ur(t)) { - var r = z._$gT; - r[0] = !1; - var o = this._$GS._$Q2(t, r); - i._$Ib(r[0]), this.interpolateOpacity(t, this._$GS, i, r); - var n = t._$vs(), - s = t._$Tr(); - if (this._$GS._$zr(n, s, o), o <= 0) { - var _ = this._$Y0[n[0]]; - e._$Yr.init(_) - } else if (1 == o) { - var _ = this._$Y0[n[0]], - a = this._$Y0[n[1]], - h = s[0]; - e._$Yr._$fL = _._$fL + (a._$fL - _._$fL) * h, e._$Yr._$gL = _._$gL + (a._$gL - _._$gL) * h, e._$Yr._$B0 = _._$B0 + (a._$B0 - _._$B0) * h, e._$Yr._$z0 = _._$z0 + (a._$z0 - _._$z0) * h, e._$Yr._$qT = _._$qT + (a._$qT - _._$qT) * h - } else if (2 == o) { - var _ = this._$Y0[n[0]], - a = this._$Y0[n[1]], - l = this._$Y0[n[2]], - $ = this._$Y0[n[3]], - h = s[0], - u = s[1], - p = _._$fL + (a._$fL - _._$fL) * h, - f = l._$fL + ($._$fL - l._$fL) * h; - e._$Yr._$fL = p + (f - p) * u, p = _._$gL + (a._$gL - _._$gL) * h, f = l._$gL + ($._$gL - l._$gL) * h, e._$Yr._$gL = p + (f - p) * u, p = _._$B0 + (a._$B0 - _._$B0) * h, f = l._$B0 + ($._$B0 - l._$B0) * h, e._$Yr._$B0 = p + (f - p) * u, p = _._$z0 + (a._$z0 - _._$z0) * h, f = l._$z0 + ($._$z0 - l._$z0) * h, e._$Yr._$z0 = p + (f - p) * u, p = _._$qT + (a._$qT - _._$qT) * h, f = l._$qT + ($._$qT - l._$qT) * h, e._$Yr._$qT = p + (f - p) * u - } else if (3 == o) { - var c = this._$Y0[n[0]], - d = this._$Y0[n[1]], - g = this._$Y0[n[2]], - y = this._$Y0[n[3]], - m = this._$Y0[n[4]], - T = this._$Y0[n[5]], - P = this._$Y0[n[6]], - S = this._$Y0[n[7]], - h = s[0], - u = s[1], - v = s[2], - p = c._$fL + (d._$fL - c._$fL) * h, - f = g._$fL + (y._$fL - g._$fL) * h, - L = m._$fL + (T._$fL - m._$fL) * h, - M = P._$fL + (S._$fL - P._$fL) * h; - e._$Yr._$fL = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$gL + (d._$gL - c._$gL) * h, f = g._$gL + (y._$gL - g._$gL) * h, L = m._$gL + (T._$gL - m._$gL) * h, M = P._$gL + (S._$gL - P._$gL) * h, e._$Yr._$gL = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$B0 + 
(d._$B0 - c._$B0) * h, f = g._$B0 + (y._$B0 - g._$B0) * h, L = m._$B0 + (T._$B0 - m._$B0) * h, M = P._$B0 + (S._$B0 - P._$B0) * h, e._$Yr._$B0 = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$z0 + (d._$z0 - c._$z0) * h, f = g._$z0 + (y._$z0 - g._$z0) * h, L = m._$z0 + (T._$z0 - m._$z0) * h, M = P._$z0 + (S._$z0 - P._$z0) * h, e._$Yr._$z0 = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u), p = c._$qT + (d._$qT - c._$qT) * h, f = g._$qT + (y._$qT - g._$qT) * h, L = m._$qT + (T._$qT - m._$qT) * h, M = P._$qT + (S._$qT - P._$qT) * h, e._$Yr._$qT = (1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u) - } else if (4 == o) { - var E = this._$Y0[n[0]], - A = this._$Y0[n[1]], - I = this._$Y0[n[2]], - w = this._$Y0[n[3]], - x = this._$Y0[n[4]], - O = this._$Y0[n[5]], - D = this._$Y0[n[6]], - R = this._$Y0[n[7]], - b = this._$Y0[n[8]], - F = this._$Y0[n[9]], - C = this._$Y0[n[10]], - N = this._$Y0[n[11]], - B = this._$Y0[n[12]], - U = this._$Y0[n[13]], - G = this._$Y0[n[14]], - Y = this._$Y0[n[15]], - h = s[0], - u = s[1], - v = s[2], - k = s[3], - p = E._$fL + (A._$fL - E._$fL) * h, - f = I._$fL + (w._$fL - I._$fL) * h, - L = x._$fL + (O._$fL - x._$fL) * h, - M = D._$fL + (R._$fL - D._$fL) * h, - V = b._$fL + (F._$fL - b._$fL) * h, - X = C._$fL + (N._$fL - C._$fL) * h, - H = B._$fL + (U._$fL - B._$fL) * h, - W = G._$fL + (Y._$fL - G._$fL) * h; - e._$Yr._$fL = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$gL + (A._$gL - E._$gL) * h, f = I._$gL + (w._$gL - I._$gL) * h, L = x._$gL + (O._$gL - x._$gL) * h, M = D._$gL + (R._$gL - D._$gL) * h, V = b._$gL + (F._$gL - b._$gL) * h, X = C._$gL + (N._$gL - C._$gL) * h, H = B._$gL + (U._$gL - B._$gL) * h, W = G._$gL + (Y._$gL - G._$gL) * h, e._$Yr._$gL = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$B0 + (A._$B0 - E._$B0) * h, f = I._$B0 + (w._$B0 - I._$B0) * h, L = x._$B0 + (O._$B0 - x._$B0) * h, M = D._$B0 + (R._$B0 - D._$B0) * h, V = b._$B0 + (F._$B0 - b._$B0) * h, X = C._$B0 + (N._$B0 - C._$B0) * h, H = B._$B0 + (U._$B0 - B._$B0) * h, W = G._$B0 + (Y._$B0 - G._$B0) * h, e._$Yr._$B0 = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$z0 + (A._$z0 - E._$z0) * h, f = I._$z0 + (w._$z0 - I._$z0) * h, L = x._$z0 + (O._$z0 - x._$z0) * h, M = D._$z0 + (R._$z0 - D._$z0) * h, V = b._$z0 + (F._$z0 - b._$z0) * h, X = C._$z0 + (N._$z0 - C._$z0) * h, H = B._$z0 + (U._$z0 - B._$z0) * h, W = G._$z0 + (Y._$z0 - G._$z0) * h, e._$Yr._$z0 = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)), p = E._$qT + (A._$qT - E._$qT) * h, f = I._$qT + (w._$qT - I._$qT) * h, L = x._$qT + (O._$qT - x._$qT) * h, M = D._$qT + (R._$qT - D._$qT) * h, V = b._$qT + (F._$qT - b._$qT) * h, X = C._$qT + (N._$qT - C._$qT) * h, H = B._$qT + (U._$qT - B._$qT) * h, W = G._$qT + (Y._$qT - G._$qT) * h, e._$Yr._$qT = (1 - k) * ((1 - v) * (p + (f - p) * u) + v * (L + (M - L) * u)) + k * ((1 - v) * (V + (X - V) * u) + v * (H + (W - H) * u)) - } else { - for (var j = 0 | Math.pow(2, o), q = new Float32Array(j), J = 0; J < j; J++) { - for (var Q = J, Z = 1, K = 0; K < o; K++) Z *= Q % 2 == 0 ? 
1 - s[K] : s[K], Q /= 2; - q[J] = Z - } - for (var tt = new Array, it = 0; it < j; it++) tt[it] = this._$Y0[n[it]]; - for (var et = 0, rt = 0, ot = 0, nt = 0, st = 0, it = 0; it < j; it++) et += q[it] * tt[it]._$fL, rt += q[it] * tt[it]._$gL, ot += q[it] * tt[it]._$B0, nt += q[it] * tt[it]._$z0, st += q[it] * tt[it]._$qT; - e._$Yr._$fL = et, e._$Yr._$gL = rt, e._$Yr._$B0 = ot, e._$Yr._$z0 = nt, e._$Yr._$qT = st - } - var _ = this._$Y0[n[0]]; - e._$Yr.reflectX = _.reflectX, e._$Yr.reflectY = _.reflectY - } - }, z.prototype._$2b = function(t, i) { - this != i._$GT() && console.log("### assert!! ### "); - var e = i; - if (e._$hS(!0), this._$32()) { - var r = this.getTargetBaseDataID(); - if (e._$8r == I._$ur && (e._$8r = t.getBaseDataIndex(r)), e._$8r < 0) at._$so && _._$li("_$L _$0P _$G :: %s", r), e._$hS(!1); - else { - var o = t.getBaseData(e._$8r); - if (null != o) { - var n = t._$q2(e._$8r), - s = z._$Xo; - s[0] = e._$Yr._$fL, s[1] = e._$Yr._$gL; - var a = z._$io; - a[0] = 0, a[1] = -.1; - n._$GT().getType() == I._$c2 ? a[1] = -10 : a[1] = -.1; - var h = z._$0o; - this._$Jr(t, o, n, s, a, h); - var l = Lt._$92(a, h); - o._$nb(t, n, s, s, 1, 0, 2), e._$Wr._$fL = s[0], e._$Wr._$gL = s[1], e._$Wr._$B0 = e._$Yr._$B0, e._$Wr._$z0 = e._$Yr._$z0, e._$Wr._$qT = e._$Yr._$qT - l * Lt._$NS; - var $ = n.getTotalScale(); - e.setTotalScale_notForClient($ * e._$Wr._$B0); - var u = n.getTotalOpacity(); - e.setTotalOpacity(u * e.getInterpolatedOpacity()), e._$Wr.reflectX = e._$Yr.reflectX, e._$Wr.reflectY = e._$Yr.reflectY, e._$hS(n._$yo()) - } else e._$hS(!1) - } - } else e.setTotalScale_notForClient(e._$Yr._$B0), e.setTotalOpacity(e.getInterpolatedOpacity()) - }, z.prototype._$nb = function(t, i, e, r, o, n, s) { - this != i._$GT() && console.log("### assert!! ### "); - for (var _, a, h = i, l = null != h._$Wr ? h._$Wr : h._$Yr, $ = Math.sin(Lt._$bS * l._$qT), u = Math.cos(Lt._$bS * l._$qT), p = h.getTotalScale(), f = l.reflectX ? -1 : 1, c = l.reflectY ? -1 : 1, d = u * p * f, g = -$ * p * c, y = $ * p * f, m = u * p * c, T = l._$fL, P = l._$gL, S = o * s, v = n; v < S; v += s) _ = e[v], a = e[v + 1], r[v] = d * _ + g * a + T, r[v + 1] = y * _ + m * a + P - }, z.prototype._$Jr = function(t, i, e, r, o, n) { - i != e._$GT() && console.log("### assert!! ### "); - var s = z._$Lo; - z._$Lo[0] = r[0], z._$Lo[1] = r[1], i._$nb(t, e, s, s, 1, 0, 2); - for (var _ = z._$To, a = z._$Po, h = 1, l = 0; l < 10; l++) { - if (a[0] = r[0] + h * o[0], a[1] = r[1] + h * o[1], i._$nb(t, e, a, _, 1, 0, 2), _[0] -= s[0], _[1] -= s[1], 0 != _[0] || 0 != _[1]) return n[0] = _[0], void(n[1] = _[1]); - if (a[0] = r[0] - h * o[0], a[1] = r[1] - h * o[1], i._$nb(t, e, a, _, 1, 0, 2), _[0] -= s[0], _[1] -= s[1], 0 != _[0] || 0 != _[1]) return _[0] = -_[0], _[0] = -_[0], n[0] = _[0], void(n[1] = _[1]); - h *= .1 - } - at._$so && console.log("_$L0 to transform _$SP\n") - }, H.prototype = new _t, W.prototype = new M, W._$ur = -2, W._$ES = 500, W._$wb = 2, W._$8S = 3, W._$os = 4, W._$52 = W._$ES, W._$R2 = W._$ES, W._$Sb = function(t) { - for (var i = t.length - 1; i >= 0; --i) { - var e = t[i]; - e < W._$52 ? W._$52 = e : e > W._$R2 && (W._$R2 = e) - } - }, W._$or = function() { - return W._$52 - }, W._$Pr = function() { - return W._$R2 - }, W.prototype._$F0 = function(t) { - this._$gP = t._$nP(), this._$dr = t._$nP(), this._$GS = t._$nP(), this._$qb = t._$6L(), this._$Lb = t._$cS(), this._$mS = t._$Tb(), t.getFormatVersion() >= G._$T7 ? 
(this.clipID = t._$nP(), this.clipIDList = this.convertClipIDForV2_11(this.clipID)) : this.clipIDList = null, W._$Sb(this._$Lb) - }, W.prototype.getClipIDList = function() { - return this.clipIDList - }, W.prototype._$Nr = function(t, i) { - if (i._$IS[0] = !1, i._$Us = v._$Z2(t, this._$GS, i._$IS, this._$Lb), at._$Zs); - else if (i._$IS[0]) return; - i._$7s = v._$br(t, this._$GS, i._$IS, this._$mS) - }, W.prototype._$2b = function(t) {}, W.prototype.getDrawDataID = function() { - return this._$gP - }, W.prototype._$j2 = function(t) { - this._$gP = t - }, W.prototype.getOpacity = function(t, i) { - return i._$7s - }, W.prototype._$zS = function(t, i) { - return i._$Us - }, W.prototype.getTargetBaseDataID = function() { - return this._$dr - }, W.prototype._$gs = function(t) { - this._$dr = t - }, W.prototype._$32 = function() { - return null != this._$dr && this._$dr != yt._$2o() - }, W.prototype.getType = function() {}, j._$42 = 0, j.prototype._$1b = function() { - return this._$3S - }, j.prototype.getDrawDataList = function() { - return this._$aS - }, j.prototype._$F0 = function(t) { - this._$NL = t._$nP(), this._$aS = t._$nP(), this._$3S = t._$nP() - }, j.prototype._$kr = function(t) { - t._$Zo(this._$3S), t._$xo(this._$aS), this._$3S = null, this._$aS = null - }, q.prototype = new i, q.loadModel = function(t) { - var e = new q; - return i._$62(e, t), e - }, q.loadModel = function(t) { - var e = new q; - return i._$62(e, t), e - }, q._$to = function() { - return new q - }, q._$er = function(t) { - var i = new _$5("../_$_r/_$t0/_$Ri/_$_P._$d"); - if (0 == i.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + i._$PL()); - for (var e = ["../_$_r/_$t0/_$Ri/_$_P.512/_$CP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$vP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$EP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$pP._$1"], r = q.loadModel(i._$3b()), o = 0; o < e.length; o++) { - var n = new _$5(e[o]); - if (0 == n.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + n._$PL()); - r.setTexture(o, _$nL._$_o(t, n._$3b())) - } - return r - }, q.prototype.setGL = function(t) { - this._$zo.setGL(t) - }, q.prototype.setTransform = function(t) { - this._$zo.setTransform(t) - }, q.prototype.draw = function() { - this._$5S.draw(this._$zo) - }, q.prototype._$K2 = function() { - this._$zo._$K2() - }, q.prototype.setTexture = function(t, i) { - null == this._$zo && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this._$zo.setTexture(t, i) - }, q.prototype.setTexture = function(t, i) { - null == this._$zo && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this._$zo.setTexture(t, i) - }, q.prototype._$Rs = function() { - return this._$zo._$Rs() - }, q.prototype._$Ds = function(t) { - this._$zo._$Ds(t) - }, q.prototype.getDrawParam = function() { - return this._$zo - }, J.prototype = new s, J._$cs = "VISIBLE:", J._$ar = "LAYOUT:", J.MTN_PREFIX_FADEIN = "FADEIN:", J.MTN_PREFIX_FADEOUT = "FADEOUT:", J._$Co = 0, J._$1T = 1, J.loadMotion = function(t) { - var i = k._$C(t); - return J.loadMotion(i) - }, J.loadMotion = function(t) { - t instanceof ArrayBuffer && (t = new DataView(t)); - var i = new J, - e = [0], - r = t.byteLength; - i._$yT = 0; - for (var o = 0; o < r; ++o) { - var n = Q(t, o), - s = n.charCodeAt(0); - if ("\n" != n && "\r" != n) if ("#" != n) if ("$" != n) { - if (97 <= s && s <= 122 || 65 <= s && s <= 90 || "_" == n) { - for (var _ = o, a = -1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("=" == n) { - a = o; - break - } - if (a >= 0) { - var h = new B; - O.startsWith(t, _, J._$cs) ? 
(h._$RP = B._$hs, h._$4P = O.createString(t, _, a - _)) : O.startsWith(t, _, J._$ar) ? (h._$4P = O.createString(t, _ + 7, a - _ - 7), O.startsWith(t, _ + 7, "ANCHOR_X") ? h._$RP = B._$xs : O.startsWith(t, _ + 7, "ANCHOR_Y") ? h._$RP = B._$us : O.startsWith(t, _ + 7, "SCALE_X") ? h._$RP = B._$qs : O.startsWith(t, _ + 7, "SCALE_Y") ? h._$RP = B._$Ys : O.startsWith(t, _ + 7, "X") ? h._$RP = B._$ws : O.startsWith(t, _ + 7, "Y") && (h._$RP = B._$Ns)) : (h._$RP = B._$Fr, h._$4P = O.createString(t, _, a - _)), i.motions.push(h); - var l = 0, - $ = []; - for (o = a + 1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) { - var u = O._$LS(t, r, o, e); - if (e[0] > 0) { - $.push(u), l++; - var p = e[0]; - if (p < o) { - console.log("_$n0 _$hi . @Live2DMotion loadMotion()\n"); - break - } - o = p - 1 - } - } - h._$I0 = new Float32Array($), l > i._$yT && (i._$yT = l) - } - } - } else { - for (var _ = o, a = -1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("=" == n) { - a = o; - break - } - var f = !1; - if (a >= 0) for (a == _ + 4 && "f" == Q(t, _ + 1) && "p" == Q(t, _ + 2) && "s" == Q(t, _ + 3) && (f = !0), o = a + 1; o < r && ("\r" != (n = Q(t, o)) && "\n" != n); ++o) if ("," != n && " " != n && "\t" != n) { - var u = O._$LS(t, r, o, e); - e[0] > 0 && f && 5 < u && u < 121 && (i._$D0 = u), o = e[0] - } - for (; o < r && ("\n" != Q(t, o) && "\r" != Q(t, o)); ++o); - } else for (; o < r && ("\n" != Q(t, o) && "\r" != Q(t, o)); ++o); - } - return i._$rr = 1e3 * i._$yT / i._$D0 | 0, i - }, J.prototype.getDurationMSec = function() { - return this._$E ? -1 : this._$rr - }, J.prototype.getLoopDurationMSec = function() { - return this._$rr - }, J.prototype.dump = function() { - for (var t = 0; t < this.motions.length; t++) { - var i = this.motions[t]; - console.log("_$wL[%s] [%d]. ", i._$4P, i._$I0.length); - for (var e = 0; e < i._$I0.length && e < 10; e++) console.log("%5.2f ,", i._$I0[e]); - console.log("\n") - } - }, J.prototype.updateParamExe = function(t, i, e, r) { - for (var o = i - r._$z2, n = o * this._$D0 / 1e3, s = 0 | n, _ = n - s, a = 0; a < this.motions.length; a++) { - var h = this.motions[a], - l = h._$I0.length, - $ = h._$4P; - if (h._$RP == B._$hs) { - var u = h._$I0[s >= l ? l - 1 : s]; - t.setParamFloat($, u) - } else if (B._$ws <= h._$RP && h._$RP <= B._$Ys); - else { - var p, f = t.getParamIndex($), - c = t.getModelContext(), - d = c.getParamMax(f), - g = c.getParamMin(f), - y = .4 * (d - g), - m = c.getParamFloat(f), - T = h._$I0[s >= l ? l - 1 : s], - P = h._$I0[s + 1 >= l ? l - 1 : s + 1]; - p = T < P && P - T > y || T > P && T - P > y ? T : T + (P - T) * _; - var S = m + (p - m) * e; - t.setParamFloat($, S) - } - } - s >= this._$yT && (this._$E ? 
(r._$z2 = i, this.loopFadeIn && (r._$bs = i)) : r._$9L = !0), this._$eP = e - }, J.prototype._$r0 = function() { - return this._$E - }, J.prototype._$aL = function(t) { - this._$E = t - }, J.prototype._$S0 = function() { - return this._$D0 - }, J.prototype._$U0 = function(t) { - this._$D0 = t - }, J.prototype.isLoopFadeIn = function() { - return this.loopFadeIn - }, J.prototype.setLoopFadeIn = function(t) { - this.loopFadeIn = t - }, N.prototype.clear = function() { - this.size = 0 - }, N.prototype.add = function(t) { - if (this._$P.length <= this.size) { - var i = new Float32Array(2 * this.size); - w._$jT(this._$P, 0, i, 0, this.size), this._$P = i - } - this._$P[this.size++] = t - }, N.prototype._$BL = function() { - var t = new Float32Array(this.size); - return w._$jT(this._$P, 0, t, 0, this.size), t - }, B._$Fr = 0, B._$hs = 1, B._$ws = 100, B._$Ns = 101, B._$xs = 102, B._$us = 103, B._$qs = 104, B._$Ys = 105, Z.prototype = new I, Z._$gT = new Array, Z.prototype._$zP = function() { - this._$GS = new D, this._$GS._$zP() - }, Z.prototype._$F0 = function(t) { - I.prototype._$F0.call(this, t), this._$A = t._$6L(), this._$o = t._$6L(), this._$GS = t._$nP(), this._$Eo = t._$nP(), I.prototype.readV2_opacity.call(this, t) - }, Z.prototype.init = function(t) { - var i = new K(this), - e = (this._$o + 1) * (this._$A + 1); - return null != i._$Cr && (i._$Cr = null), i._$Cr = new Float32Array(2 * e), null != i._$hr && (i._$hr = null), this._$32() ? i._$hr = new Float32Array(2 * e) : i._$hr = null, i - }, Z.prototype._$Nr = function(t, i) { - var e = i; - if (this._$GS._$Ur(t)) { - var r = this._$VT(), - o = Z._$gT; - o[0] = !1, v._$Vr(t, this._$GS, o, r, this._$Eo, e._$Cr, 0, 2), i._$Ib(o[0]), this.interpolateOpacity(t, this._$GS, i, o) - } - }, Z.prototype._$2b = function(t, i) { - var e = i; - if (e._$hS(!0), this._$32()) { - var r = this.getTargetBaseDataID(); - if (e._$8r == I._$ur && (e._$8r = t.getBaseDataIndex(r)), e._$8r < 0) at._$so && _._$li("_$L _$0P _$G :: %s", r), e._$hS(!1); - else { - var o = t.getBaseData(e._$8r), - n = t._$q2(e._$8r); - if (null != o && n._$yo()) { - var s = n.getTotalScale(); - e.setTotalScale_notForClient(s); - var a = n.getTotalOpacity(); - e.setTotalOpacity(a * e.getInterpolatedOpacity()), o._$nb(t, n, e._$Cr, e._$hr, this._$VT(), 0, 2), e._$hS(!0) - } else e._$hS(!1) - } - } else e.setTotalOpacity(e.getInterpolatedOpacity()) - }, Z.prototype._$nb = function(t, i, e, r, o, n, s) { - var _ = i, - a = null != _._$hr ? 
_._$hr : _._$Cr; - Z.transformPoints_sdk2(e, r, o, n, s, a, this._$o, this._$A) - }, Z.transformPoints_sdk2 = function(i, e, r, o, n, s, _, a) { - for (var h, l, $, u = r * n, p = 0, f = 0, c = 0, d = 0, g = 0, y = 0, m = !1, T = o; T < u; T += n) { - var P, S, v, L; - if (v = i[T], L = i[T + 1], P = v * _, S = L * a, P < 0 || S < 0 || _ <= P || a <= S) { - var M = _ + 1; - if (!m) { - m = !0, p = .25 * (s[2 * (0 + 0 * M)] + s[2 * (_ + 0 * M)] + s[2 * (0 + a * M)] + s[2 * (_ + a * M)]), f = .25 * (s[2 * (0 + 0 * M) + 1] + s[2 * (_ + 0 * M) + 1] + s[2 * (0 + a * M) + 1] + s[2 * (_ + a * M) + 1]); - var E = s[2 * (_ + a * M)] - s[2 * (0 + 0 * M)], - A = s[2 * (_ + a * M) + 1] - s[2 * (0 + 0 * M) + 1], - I = s[2 * (_ + 0 * M)] - s[2 * (0 + a * M)], - w = s[2 * (_ + 0 * M) + 1] - s[2 * (0 + a * M) + 1]; - c = .5 * (E + I), d = .5 * (A + w), g = .5 * (E - I), y = .5 * (A - w), p -= .5 * (c + g), f -= .5 * (d + y) - } - if (-2 < v && v < 3 && -2 < L && L < 3) if (v <= 0) if (L <= 0) { - var x = s[2 * (0 + 0 * M)], - O = s[2 * (0 + 0 * M) + 1], - D = p - 2 * c, - R = f - 2 * d, - b = p - 2 * g, - F = f - 2 * y, - C = p - 2 * c - 2 * g, - N = f - 2 * d - 2 * y, - B = .5 * (v - -2), - U = .5 * (L - -2); - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else if (L >= 1) { - var b = s[2 * (0 + a * M)], - F = s[2 * (0 + a * M) + 1], - C = p - 2 * c + 1 * g, - N = f - 2 * d + 1 * y, - x = p + 3 * g, - O = f + 3 * y, - D = p - 2 * c + 3 * g, - R = f - 2 * d + 3 * y, - B = .5 * (v - -2), - U = .5 * (L - 1); - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else { - var G = 0 | S; - G == a && (G = a - 1); - var B = .5 * (v - -2), - U = S - G, - Y = G / a, - k = (G + 1) / a, - b = s[2 * (0 + G * M)], - F = s[2 * (0 + G * M) + 1], - x = s[2 * (0 + (G + 1) * M)], - O = s[2 * (0 + (G + 1) * M) + 1], - C = p - 2 * c + Y * g, - N = f - 2 * d + Y * y, - D = p - 2 * c + k * g, - R = f - 2 * d + k * y; - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else if (1 <= v) if (L <= 0) { - var D = s[2 * (_ + 0 * M)], - R = s[2 * (_ + 0 * M) + 1], - x = p + 3 * c, - O = f + 3 * d, - C = p + 1 * c - 2 * g, - N = f + 1 * d - 2 * y, - b = p + 3 * c - 2 * g, - F = f + 3 * d - 2 * y, - B = .5 * (v - 1), - U = .5 * (L - -2); - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else if (L >= 1) { - var C = s[2 * (_ + a * M)], - N = s[2 * (_ + a * M) + 1], - b = p + 3 * c + 1 * g, - F = f + 3 * d + 1 * y, - D = p + 1 * c + 3 * g, - R = f + 1 * d + 3 * y, - x = p + 3 * c + 3 * g, - O = f + 3 * d + 3 * y, - B = .5 * (v - 1), - U = .5 * (L - 1); - B + U <= 1 ? 
(e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else { - var G = 0 | S; - G == a && (G = a - 1); - var B = .5 * (v - 1), - U = S - G, - Y = G / a, - k = (G + 1) / a, - C = s[2 * (_ + G * M)], - N = s[2 * (_ + G * M) + 1], - D = s[2 * (_ + (G + 1) * M)], - R = s[2 * (_ + (G + 1) * M) + 1], - b = p + 3 * c + Y * g, - F = f + 3 * d + Y * y, - x = p + 3 * c + k * g, - O = f + 3 * d + k * y; - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else if (L <= 0) { - var V = 0 | P; - V == _ && (V = _ - 1); - var B = P - V, - U = .5 * (L - -2), - X = V / _, - z = (V + 1) / _, - D = s[2 * (V + 0 * M)], - R = s[2 * (V + 0 * M) + 1], - x = s[2 * (V + 1 + 0 * M)], - O = s[2 * (V + 1 + 0 * M) + 1], - C = p + X * c - 2 * g, - N = f + X * d - 2 * y, - b = p + z * c - 2 * g, - F = f + z * d - 2 * y; - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else if (L >= 1) { - var V = 0 | P; - V == _ && (V = _ - 1); - var B = P - V, - U = .5 * (L - 1), - X = V / _, - z = (V + 1) / _, - C = s[2 * (V + a * M)], - N = s[2 * (V + a * M) + 1], - b = s[2 * (V + 1 + a * M)], - F = s[2 * (V + 1 + a * M) + 1], - D = p + X * c + 3 * g, - R = f + X * d + 3 * y, - x = p + z * c + 3 * g, - O = f + z * d + 3 * y; - B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U)) - } else t.err.printf("_$li calc : %.4f , %.4f\t\t\t\t\t@@BDBoxGrid\n", v, L); - else e[T] = p + v * c + L * g, e[T + 1] = f + v * d + L * y - } else l = P - (0 | P), $ = S - (0 | S), h = 2 * ((0 | P) + (0 | S) * (_ + 1)), l + $ < 1 ? (e[T] = s[h] * (1 - l - $) + s[h + 2] * l + s[h + 2 * (_ + 1)] * $, e[T + 1] = s[h + 1] * (1 - l - $) + s[h + 3] * l + s[h + 2 * (_ + 1) + 1] * $) : (e[T] = s[h + 2 * (_ + 1) + 2] * (l - 1 + $) + s[h + 2 * (_ + 1)] * (1 - l) + s[h + 2] * (1 - $), e[T + 1] = s[h + 2 * (_ + 1) + 3] * (l - 1 + $) + s[h + 2 * (_ + 1) + 1] * (1 - l) + s[h + 3] * (1 - $)) - } - }, Z.prototype.transformPoints_sdk1 = function(t, i, e, r, o, n, s) { - for (var _, a, h, l, $, u, p, f = i, c = this._$o, d = this._$A, g = o * s, y = null != f._$hr ? f._$hr : f._$Cr, m = n; m < g; m += s) at._$ts ? (_ = e[m], a = e[m + 1], _ < 0 ? _ = 0 : _ > 1 && (_ = 1), a < 0 ? a = 0 : a > 1 && (a = 1), _ *= c, a *= d, h = 0 | _, l = 0 | a, h > c - 1 && (h = c - 1), l > d - 1 && (l = d - 1), u = _ - h, p = a - l, $ = 2 * (h + l * (c + 1))) : (_ = e[m] * c, a = e[m + 1] * d, u = _ - (0 | _), p = a - (0 | a), $ = 2 * ((0 | _) + (0 | a) * (c + 1))), u + p < 1 ? 
(r[m] = y[$] * (1 - u - p) + y[$ + 2] * u + y[$ + 2 * (c + 1)] * p, r[m + 1] = y[$ + 1] * (1 - u - p) + y[$ + 3] * u + y[$ + 2 * (c + 1) + 1] * p) : (r[m] = y[$ + 2 * (c + 1) + 2] * (u - 1 + p) + y[$ + 2 * (c + 1)] * (1 - u) + y[$ + 2] * (1 - p), r[m + 1] = y[$ + 2 * (c + 1) + 3] * (u - 1 + p) + y[$ + 2 * (c + 1) + 1] * (1 - u) + y[$ + 3] * (1 - p)) - }, Z.prototype._$VT = function() { - return (this._$o + 1) * (this._$A + 1) - }, Z.prototype.getType = function() { - return I._$_b - }, K.prototype = new _t, tt._$42 = 0, tt.prototype._$zP = function() { - this._$3S = new Array, this._$aS = new Array - }, tt.prototype._$F0 = function(t) { - this._$g0 = t._$8L(), this.visible = t._$8L(), this._$NL = t._$nP(), this._$3S = t._$nP(), this._$aS = t._$nP() - }, tt.prototype.init = function(t) { - var i = new it(this); - return i.setPartsOpacity(this.isVisible() ? 1 : 0), i - }, tt.prototype._$6o = function(t) { - if (null == this._$3S) throw new Error("_$3S _$6 _$Wo@_$6o"); - this._$3S.push(t) - }, tt.prototype._$3o = function(t) { - if (null == this._$aS) throw new Error("_$aS _$6 _$Wo@_$3o"); - this._$aS.push(t) - }, tt.prototype._$Zo = function(t) { - this._$3S = t - }, tt.prototype._$xo = function(t) { - this._$aS = t - }, tt.prototype.isVisible = function() { - return this.visible - }, tt.prototype._$uL = function() { - return this._$g0 - }, tt.prototype._$KP = function(t) { - this.visible = t - }, tt.prototype._$ET = function(t) { - this._$g0 = t - }, tt.prototype.getBaseData = function() { - return this._$3S - }, tt.prototype.getDrawData = function() { - return this._$aS - }, tt.prototype._$p2 = function() { - return this._$NL - }, tt.prototype._$ob = function(t) { - this._$NL = t - }, tt.prototype.getPartsID = function() { - return this._$NL - }, tt.prototype._$MP = function(t) { - this._$NL = t - }, it.prototype = new $, it.prototype.getPartsOpacity = function() { - return this._$VS - }, it.prototype.setPartsOpacity = function(t) { - this._$VS = t - }, et._$L7 = function() { - u._$27(), yt._$27(), b._$27(), l._$27() - }, et.prototype.toString = function() { - return this.id - }, rt.prototype._$F0 = function(t) {}, ot.prototype._$1s = function() { - return this._$4S - }, ot.prototype._$zP = function() { - this._$4S = new Array - }, ot.prototype._$F0 = function(t) { - this._$4S = t._$nP() - }, ot.prototype._$Ks = function(t) { - this._$4S.push(t) - }, nt.tr = new gt, nt._$50 = new gt, nt._$Ti = new Array(0, 0), nt._$Pi = new Array(0, 0), nt._$B = new Array(0, 0), nt.prototype._$lP = function(t, i, e, r) { - this.viewport = new Array(t, i, e, r) - }, nt.prototype._$bL = function() { - this.context.save(); - var t = this.viewport; - null != t && (this.context.beginPath(), this.context._$Li(t[0], t[1], t[2], t[3]), this.context.clip()) - }, nt.prototype._$ei = function() { - this.context.restore() - }, nt.prototype.drawElements = function(t, i, e, r, o, n, s, a) { - try { - o != this._$Qo && (this._$Qo = o, this.context.globalAlpha = o); - for (var h = i.length, l = t.width, $ = t.height, u = this.context, p = this._$xP, f = this._$uP, c = this._$6r, d = this._$3r, g = nt.tr, y = nt._$Ti, m = nt._$Pi, T = nt._$B, P = 0; P < h; P += 3) { - u.save(); - var S = i[P], - v = i[P + 1], - L = i[P + 2], - M = p + c * e[2 * S], - E = f + d * e[2 * S + 1], - A = p + c * e[2 * v], - I = f + d * e[2 * v + 1], - w = p + c * e[2 * L], - x = f + d * e[2 * L + 1]; - s && (s._$PS(M, E, T), M = T[0], E = T[1], s._$PS(A, I, T), A = T[0], I = T[1], s._$PS(w, x, T), w = T[0], x = T[1]); - var O = l * r[2 * S], - D 
= $ - $ * r[2 * S + 1], - R = l * r[2 * v], - b = $ - $ * r[2 * v + 1], - F = l * r[2 * L], - C = $ - $ * r[2 * L + 1], - N = Math.atan2(b - D, R - O), - B = Math.atan2(I - E, A - M), - U = A - M, - G = I - E, - Y = Math.sqrt(U * U + G * G), - k = R - O, - V = b - D, - X = Math.sqrt(k * k + V * V), - z = Y / X; - It._$ni(F, C, O, D, R - O, b - D, -(b - D), R - O, y), It._$ni(w, x, M, E, A - M, I - E, -(I - E), A - M, m); - var H = (m[0] - y[0]) / y[1], - W = Math.min(O, R, F), - j = Math.max(O, R, F), - q = Math.min(D, b, C), - J = Math.max(D, b, C), - Q = Math.floor(W), - Z = Math.floor(q), - K = Math.ceil(j), - tt = Math.ceil(J); - g.identity(), g.translate(M, E), g.rotate(B), g.scale(1, m[1] / y[1]), g.shear(H, 0), g.scale(z, z), g.rotate(-N), g.translate(-O, -D), g.setContext(u); - if (n || (n = 1.2), at.IGNORE_EXPAND && (n = 0), at.USE_CACHED_POLYGON_IMAGE) { - var it = a._$e0; - if (it.gl_cacheImage = it.gl_cacheImage || {}, !it.gl_cacheImage[P]) { - var et = nt.createCanvas(K - Q, tt - Z); - at.DEBUG_DATA.LDGL_CANVAS_MB = at.DEBUG_DATA.LDGL_CANVAS_MB || 0, at.DEBUG_DATA.LDGL_CANVAS_MB += (K - Q) * (tt - Z) * 4; - var rt = et.getContext("2d"); - rt.translate(-Q, -Z), nt.clip(rt, g, n, Y, O, D, R, b, F, C, M, E, A, I, w, x), rt.drawImage(t, 0, 0), it.gl_cacheImage[P] = { - cacheCanvas: et, - cacheContext: rt - } - } - u.drawImage(it.gl_cacheImage[P].cacheCanvas, Q, Z) - } else at.IGNORE_CLIP || nt.clip(u, g, n, Y, O, D, R, b, F, C, M, E, A, I, w, x), at.USE_ADJUST_TRANSLATION && (W = 0, j = l, q = 0, J = $), u.drawImage(t, W, q, j - W, J - q, W, q, j - W, J - q); - u.restore() - } - } catch (t) { - _._$Rb(t) - } - }, nt.clip = function(t, i, e, r, o, n, s, _, a, h, l, $, u, p, f, c) { - e > .02 ? nt.expandClip(t, i, e, r, l, $, u, p, f, c) : nt.clipWithTransform(t, null, o, n, s, _, a, h) - }, nt.expandClip = function(t, i, e, r, o, n, s, _, a, h) { - var l = s - o, - $ = _ - n, - u = a - o, - p = h - n, - f = l * p - $ * u > 0 ? 
e : -e, - c = -$, - d = l, - g = a - s, - y = h - _, - m = -y, - T = g, - P = Math.sqrt(g * g + y * y), - S = -p, - v = u, - L = Math.sqrt(u * u + p * p), - M = o - f * c / r, - E = n - f * d / r, - A = s - f * c / r, - I = _ - f * d / r, - w = s - f * m / P, - x = _ - f * T / P, - O = a - f * m / P, - D = h - f * T / P, - R = o + f * S / L, - b = n + f * v / L, - F = a + f * S / L, - C = h + f * v / L, - N = nt._$50; - return null != i._$P2(N) && (nt.clipWithTransform(t, N, M, E, A, I, w, x, O, D, F, C, R, b), !0) - }, nt.clipWithTransform = function(t, i, e, r, o, n, s, a) { - if (arguments.length < 7) return void _._$li("err : @LDGL.clip()"); - if (!(arguments[1] instanceof gt)) return void _._$li("err : a[0] is _$6 LDTransform @LDGL.clip()"); - var h = nt._$B, - l = i, - $ = arguments; - if (t.beginPath(), l) { - l._$PS($[2], $[3], h), t.moveTo(h[0], h[1]); - for (var u = 4; u < $.length; u += 2) l._$PS($[u], $[u + 1], h), t.lineTo(h[0], h[1]) - } else { - t.moveTo($[2], $[3]); - for (var u = 4; u < $.length; u += 2) t.lineTo($[u], $[u + 1]) - } - t.clip() - }, nt.createCanvas = function(t, i) { - var e = document.createElement("canvas"); - return e.setAttribute("width", t), e.setAttribute("height", i), e || _._$li("err : " + e), e - }, nt.dumpValues = function() { - for (var t = "", i = 0; i < arguments.length; i++) t += "[" + i + "]= " + arguments[i].toFixed(3) + " , "; - console.log(t) - }, st.prototype._$F0 = function(t) { - this._$TT = t._$_T(), this._$LT = t._$_T(), this._$FS = t._$_T(), this._$wL = t._$nP() - }, st.prototype.getMinValue = function() { - return this._$TT - }, st.prototype.getMaxValue = function() { - return this._$LT - }, st.prototype.getDefaultValue = function() { - return this._$FS - }, st.prototype.getParamID = function() { - return this._$wL - }, _t.prototype._$yo = function() { - return this._$AT && !this._$JS - }, _t.prototype._$hS = function(t) { - this._$AT = t - }, _t.prototype._$GT = function() { - return this._$e0 - }, _t.prototype._$l2 = function(t) { - this._$IP = t - }, _t.prototype.getPartsIndex = function() { - return this._$IP - }, _t.prototype._$x2 = function() { - return this._$JS - }, _t.prototype._$Ib = function(t) { - this._$JS = t - }, _t.prototype.getTotalScale = function() { - return this.totalScale - }, _t.prototype.setTotalScale_notForClient = function(t) { - this.totalScale = t - }, _t.prototype.getInterpolatedOpacity = function() { - return this._$7s - }, _t.prototype.setInterpolatedOpacity = function(t) { - this._$7s = t - }, _t.prototype.getTotalOpacity = function(t) { - return this.totalOpacity - }, _t.prototype.setTotalOpacity = function(t) { - this.totalOpacity = t - }, at._$2s = "2.1.00_1", at._$Kr = 201001e3, at._$sP = !0, at._$so = !0, at._$cb = !1, at._$3T = !0, at._$Ts = !0, at._$fb = !0, at._$ts = !0, at.L2D_DEFORMER_EXTEND = !0, at._$Wb = !1; - at._$yr = !1, at._$Zs = !1, at.L2D_NO_ERROR = 0, at._$i7 = 1e3, at._$9s = 1001, at._$es = 1100, at._$r7 = 2e3, at._$07 = 2001, at._$b7 = 2002, at._$H7 = 4e3, at.L2D_COLOR_BLEND_MODE_MULT = 0, at.L2D_COLOR_BLEND_MODE_ADD = 1, at.L2D_COLOR_BLEND_MODE_INTERPOLATE = 2, at._$6b = !0, at._$cT = 0, at.clippingMaskBufferSize = 256, at.glContext = new Array, at.frameBuffers = new Array, at.fTexture = new Array, at.IGNORE_CLIP = !1, at.IGNORE_EXPAND = !1, at.EXPAND_W = 2, at.USE_ADJUST_TRANSLATION = !0, at.USE_CANVAS_TRANSFORM = !0, at.USE_CACHED_POLYGON_IMAGE = !1, at.DEBUG_DATA = {}, at.PROFILE_IOS_SPEED = { - PROFILE_NAME: "iOS Speed", - USE_ADJUST_TRANSLATION: !0, - 
USE_CACHED_POLYGON_IMAGE: !0, - EXPAND_W: 4 - }, at.PROFILE_IOS_QUALITY = { - PROFILE_NAME: "iOS HiQ", - USE_ADJUST_TRANSLATION: !0, - USE_CACHED_POLYGON_IMAGE: !1, - EXPAND_W: 2 - }, at.PROFILE_IOS_DEFAULT = at.PROFILE_IOS_QUALITY, at.PROFILE_ANDROID = { - PROFILE_NAME: "Android", - USE_ADJUST_TRANSLATION: !1, - USE_CACHED_POLYGON_IMAGE: !1, - EXPAND_W: 2 - }, at.PROFILE_DESKTOP = { - PROFILE_NAME: "Desktop", - USE_ADJUST_TRANSLATION: !1, - USE_CACHED_POLYGON_IMAGE: !1, - EXPAND_W: 2 - }, at.initProfile = function() { - Et.isIOS() ? at.setupProfile(at.PROFILE_IOS_DEFAULT) : Et.isAndroid() ? at.setupProfile(at.PROFILE_ANDROID) : at.setupProfile(at.PROFILE_DESKTOP) - }, at.setupProfile = function(t, i) { - if ("number" == typeof t) switch (t) { - case 9901: - t = at.PROFILE_IOS_SPEED; - break; - case 9902: - t = at.PROFILE_IOS_QUALITY; - break; - case 9903: - t = at.PROFILE_IOS_DEFAULT; - break; - case 9904: - t = at.PROFILE_ANDROID; - break; - case 9905: - t = at.PROFILE_DESKTOP; - break; - default: - alert("profile _$6 _$Ui : " + t) - } - arguments.length < 2 && (i = !0), i && console.log("profile : " + t.PROFILE_NAME); - for (var e in t) at[e] = t[e], i && console.log(" [" + e + "] = " + t[e]) - }, at.init = function() { - if (at._$6b) { - console.log("Live2D %s", at._$2s), at._$6b = !1; - !0, at.initProfile() - } - }, at.getVersionStr = function() { - return at._$2s - }, at.getVersionNo = function() { - return at._$Kr - }, at._$sT = function(t) { - at._$cT = t - }, at.getError = function() { - var t = at._$cT; - return at._$cT = 0, t - }, at.dispose = function() { - at.glContext = [], at.frameBuffers = [], at.fTexture = [] - }, at.setGL = function(t, i) { - var e = i || 0; - at.glContext[e] = t - }, at.getGL = function(t) { - return at.glContext[t] - }, at.setClippingMaskBufferSize = function(t) { - at.clippingMaskBufferSize = t - }, at.getClippingMaskBufferSize = function() { - return at.clippingMaskBufferSize - }, at.deleteBuffer = function(t) { - at.getGL(t).deleteFramebuffer(at.frameBuffers[t].framebuffer), delete at.frameBuffers[t], delete at.glContext[t] - }, ht._$r2 = function(t) { - return t < 0 ? 0 : t > 1 ? 1 : .5 - .5 * Math.cos(t * Lt.PI_F) - }, lt._$fr = -1, lt.prototype.toString = function() { - return this._$ib - }, $t.prototype = new W, $t._$42 = 0, $t._$Os = 30, $t._$ms = 0, $t._$ns = 1, $t._$_s = 2, $t._$gT = new Array, $t.prototype._$_S = function(t) { - this._$LP = t - }, $t.prototype.getTextureNo = function() { - return this._$LP - }, $t.prototype._$ZL = function() { - return this._$Qi - }, $t.prototype._$H2 = function() { - return this._$JP - }, $t.prototype.getNumPoints = function() { - return this._$d0 - }, $t.prototype.getType = function() { - return W._$wb - }, $t.prototype._$B2 = function(t, i, e) { - var r = i, - o = null != r._$hr ? 
r._$hr : r._$Cr; - switch (U._$do) { - default: - case U._$Ms: - throw new Error("_$L _$ro "); - case U._$Qs: - for (var n = this._$d0 - 1; n >= 0; --n) o[n * U._$No + 4] = e - } - }, $t.prototype._$zP = function() { - this._$GS = new D, this._$GS._$zP() - }, $t.prototype._$F0 = function(t) { - W.prototype._$F0.call(this, t), this._$LP = t._$6L(), this._$d0 = t._$6L(), this._$Yo = t._$6L(); - var i = t._$nP(); - this._$BP = new Int16Array(3 * this._$Yo); - for (var e = 3 * this._$Yo - 1; e >= 0; --e) this._$BP[e] = i[e]; - if (this._$Eo = t._$nP(), this._$Qi = t._$nP(), t.getFormatVersion() >= G._$s7) { - if (this._$JP = t._$6L(), 0 != this._$JP) { - if (0 != (1 & this._$JP)) { - var r = t._$6L(); - null == this._$5P && (this._$5P = new Object), this._$5P._$Hb = parseInt(r) - } - 0 != (this._$JP & $t._$Os) ? this._$6s = (this._$JP & $t._$Os) >> 1 : this._$6s = $t._$ms, 0 != (32 & this._$JP) && (this.culling = !1) - } - } else this._$JP = 0 - }, $t.prototype.init = function(t) { - var i = new ut(this), - e = this._$d0 * U._$No, - r = this._$32(); - switch (null != i._$Cr && (i._$Cr = null), i._$Cr = new Float32Array(e), null != i._$hr && (i._$hr = null), i._$hr = r ? new Float32Array(e) : null, U._$do) { - default: - case U._$Ms: - if (U._$Ls) for (var o = this._$d0 - 1; o >= 0; --o) { - var n = o << 1; - this._$Qi[n + 1] = 1 - this._$Qi[n + 1] - } - break; - case U._$Qs: - for (var o = this._$d0 - 1; o >= 0; --o) { - var n = o << 1, - s = o * U._$No, - _ = this._$Qi[n], - a = this._$Qi[n + 1]; - i._$Cr[s] = _, i._$Cr[s + 1] = a, i._$Cr[s + 4] = 0, r && (i._$hr[s] = _, i._$hr[s + 1] = a, i._$hr[s + 4] = 0) - } - } - return i - }, $t.prototype._$Nr = function(t, i) { - var e = i; - if (this != e._$GT() && console.log("### assert!! ### "), this._$GS._$Ur(t) && (W.prototype._$Nr.call(this, t, e), !e._$IS[0])) { - var r = $t._$gT; - r[0] = !1, v._$Vr(t, this._$GS, r, this._$d0, this._$Eo, e._$Cr, U._$i2, U._$No) - } - }, $t.prototype._$2b = function(t, i) { - try { - this != i._$GT() && console.log("### assert!! ### "); - var e = !1; - i._$IS[0] && (e = !0); - var r = i; - if (!e && (W.prototype._$2b.call(this, t), this._$32())) { - var o = this.getTargetBaseDataID(); - if (r._$8r == W._$ur && (r._$8r = t.getBaseDataIndex(o)), r._$8r < 0) at._$so && _._$li("_$L _$0P _$G :: %s", o); - else { - var n = t.getBaseData(r._$8r), - s = t._$q2(r._$8r); - null == n || s._$x2() ? r._$AT = !1 : (n._$nb(t, s, r._$Cr, r._$hr, this._$d0, U._$i2, U._$No), r._$AT = !0), r.baseOpacity = s.getTotalOpacity() - } - } - } catch (t) { - throw t - } - }, $t.prototype.draw = function(t, i, e) { - if (this != e._$GT() && console.log("### assert!! ### "), !e._$IS[0]) { - var r = e, - o = this._$LP; - o < 0 && (o = 1); - var n = this.getOpacity(i, r) * e._$VS * e.baseOpacity, - s = null != r._$hr ? r._$hr : r._$Cr; - t.setClipBufPre_clipContextForDraw(e.clipBufPre_clipContext), t._$WP(this.culling), t._$Uo(o, 3 * this._$Yo, this._$BP, s, this._$Qi, n, this._$6s, r) - } - }, $t.prototype.dump = function() { - console.log(" _$yi( %d ) , _$d0( %d ) , _$Yo( %d ) \n", this._$LP, this._$d0, this._$Yo), console.log(" _$Oi _$di = { "); - for (var t = 0; t < this._$BP.length; t++) console.log("%5d ,", this._$BP[t]); - console.log("\n _$5i _$30"); - for (var t = 0; t < this._$Eo.length; t++) { - console.log("\n _$30[%d] = ", t); - for (var i = this._$Eo[t], e = 0; e < i.length; e++) console.log("%6.2f, ", i[e]) - } - console.log("\n") - }, $t.prototype._$72 = function(t) { - return null == this._$5P ? 
null : this._$5P[t] - }, $t.prototype.getIndexArray = function() { - return this._$BP - }, ut.prototype = new Mt, ut.prototype.getTransformedPoints = function() { - return null != this._$hr ? this._$hr : this._$Cr - }, pt.prototype._$HT = function(t) { - this.x = t.x, this.y = t.y - }, pt.prototype._$HT = function(t, i) { - this.x = t, this.y = i - }, ft.prototype = new i, ft.loadModel = function(t) { - var e = new ft; - return i._$62(e, t), e - }, ft.loadModel = function(t, e) { - var r = e || 0, - o = new ft(r); - return i._$62(o, t), o - }, ft._$to = function() { - return new ft - }, ft._$er = function(t) { - var i = new _$5("../_$_r/_$t0/_$Ri/_$_P._$d"); - if (0 == i.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + i._$PL()); - for (var e = ["../_$_r/_$t0/_$Ri/_$_P.512/_$CP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$vP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$EP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$pP._$1"], r = ft.loadModel(i._$3b()), o = 0; o < e.length; o++) { - var n = new _$5(e[o]); - if (0 == n.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + n._$PL()); - r.setTexture(o, _$nL._$_o(t, n._$3b())) - } - return r - }, ft.prototype.setGL = function(t) { - at.setGL(t) - }, ft.prototype.setTransform = function(t) { - this.drawParamWebGL.setTransform(t) - }, ft.prototype.update = function() { - this._$5S.update(), this._$5S.preDraw(this.drawParamWebGL) - }, ft.prototype.draw = function() { - this._$5S.draw(this.drawParamWebGL) - }, ft.prototype._$K2 = function() { - this.drawParamWebGL._$K2() - }, ft.prototype.setTexture = function(t, i) { - null == this.drawParamWebGL && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this.drawParamWebGL.setTexture(t, i) - }, ft.prototype.setTexture = function(t, i) { - null == this.drawParamWebGL && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this.drawParamWebGL.setTexture(t, i) - }, ft.prototype._$Rs = function() { - return this.drawParamWebGL._$Rs() - }, ft.prototype._$Ds = function(t) { - this.drawParamWebGL._$Ds(t) - }, ft.prototype.getDrawParam = function() { - return this.drawParamWebGL - }, ft.prototype.setMatrix = function(t) { - this.drawParamWebGL.setMatrix(t) - }, ft.prototype.setPremultipliedAlpha = function(t) { - this.drawParamWebGL.setPremultipliedAlpha(t) - }, ft.prototype.isPremultipliedAlpha = function() { - return this.drawParamWebGL.isPremultipliedAlpha() - }, ft.prototype.setAnisotropy = function(t) { - this.drawParamWebGL.setAnisotropy(t) - }, ft.prototype.getAnisotropy = function() { - return this.drawParamWebGL.getAnisotropy() - }, ct.prototype._$tb = function() { - return this.motions - }, ct.prototype.startMotion = function(t, i) { - for (var e = null, r = this.motions.length, o = 0; o < r; ++o) null != (e = this.motions[o]) && (e._$qS(e._$w0.getFadeOut()), this._$eb && _._$Ji("MotionQueueManager[size:%2d]->startMotion() / start _$K _$3 (m%d)\n", r, e._$sr)); - if (null == t) return -1; - e = new dt, e._$w0 = t, this.motions.push(e); - var n = e._$sr; - return this._$eb && _._$Ji("MotionQueueManager[size:%2d]->startMotion() / new _$w0 (m%d)\n", r, n), n - }, ct.prototype.updateParam = function(t) { - try { - for (var i = !1, e = 0; e < this.motions.length; e++) { - var r = this.motions[e]; - if (null != r) { - var o = r._$w0; - null != o ? 
(o.updateParam(t, r), i = !0, r.isFinished() && (this._$eb && _._$Ji("MotionQueueManager[size:%2d]->updateParam() / _$T0 _$w0 (m%d)\n", this.motions.length - 1, r._$sr), this.motions.splice(e, 1), e--)) : (this.motions = this.motions.splice(e, 1), e--) - } else this.motions.splice(e, 1), e-- - } - return i - } catch (t) { - return _._$li(t), !0 - } - }, ct.prototype.isFinished = function(t) { - if (arguments.length >= 1) { - for (var i = 0; i < this.motions.length; i++) { - var e = this.motions[i]; - if (null != e && (e._$sr == t && !e.isFinished())) return !1 - } - return !0 - } - for (var i = 0; i < this.motions.length; i++) { - var e = this.motions[i]; - if (null != e) { - if (null != e._$w0) { - if (!e.isFinished()) return !1 - } else this.motions.splice(i, 1), i-- - } else this.motions.splice(i, 1), i-- - } - return !0 - }, ct.prototype.stopAllMotions = function() { - for (var t = 0; t < this.motions.length; t++) { - var i = this.motions[t]; - if (null != i) { - i._$w0; - this.motions.splice(t, 1), t-- - } else this.motions.splice(t, 1), t-- - } - }, ct.prototype._$Zr = function(t) { - this._$eb = t - }, ct.prototype._$e = function() { - console.log("-- _$R --\n"); - for (var t = 0; t < this.motions.length; t++) { - var i = this.motions[t], - e = i._$w0; - console.log("MotionQueueEnt[%d] :: %s\n", this.motions.length, e.toString()) - } - }, dt._$Gs = 0, dt.prototype.isFinished = function() { - return this._$9L - }, dt.prototype._$qS = function(t) { - var i = w.getUserTimeMSec(), - e = i + t; - (this._$Do < 0 || e < this._$Do) && (this._$Do = e) - }, dt.prototype._$Bs = function() { - return this._$sr - }, gt.prototype.setContext = function(t) { - var i = this.m; - t.transform(i[0], i[1], i[3], i[4], i[6], i[7]) - }, gt.prototype.toString = function() { - for (var t = "LDTransform { ", i = 0; i < 9; i++) t += this.m[i].toFixed(2) + " ,"; - return t += " }" - }, gt.prototype.identity = function() { - var t = this.m; - t[0] = t[4] = t[8] = 1, t[1] = t[2] = t[3] = t[5] = t[6] = t[7] = 0 - }, gt.prototype._$PS = function(t, i, e) { - null == e && (e = new Array(0, 0)); - var r = this.m; - return e[0] = r[0] * t + r[3] * i + r[6], e[1] = r[1] * t + r[4] * i + r[7], e - }, gt.prototype._$P2 = function(t) { - t || (t = new gt); - var i = this.m, - e = i[0], - r = i[1], - o = i[2], - n = i[3], - s = i[4], - _ = i[5], - a = i[6], - h = i[7], - l = i[8], - $ = e * s * l + r * _ * a + o * n * h - e * _ * h - o * s * a - r * n * l; - if (0 == $) return null; - var u = 1 / $; - return t.m[0] = u * (s * l - h * _), t.m[1] = u * (h * o - r * l), t.m[2] = u * (r * _ - s * o), t.m[3] = u * (a * _ - n * l), t.m[4] = u * (e * l - a * o), t.m[5] = u * (n * o - e * _), t.m[6] = u * (n * h - a * s), t.m[7] = u * (a * r - e * h), t.m[8] = u * (e * s - n * r), t - }, gt.prototype.transform = function(t, i, e) { - null == e && (e = new Array(0, 0)); - var r = this.m; - return e[0] = r[0] * t + r[3] * i + r[6], e[1] = r[1] * t + r[4] * i + r[7], e - }, gt.prototype.translate = function(t, i) { - var e = this.m; - e[6] = e[0] * t + e[3] * i + e[6], e[7] = e[1] * t + e[4] * i + e[7], e[8] = e[2] * t + e[5] * i + e[8] - }, gt.prototype.scale = function(t, i) { - var e = this.m; - e[0] *= t, e[1] *= t, e[2] *= t, e[3] *= i, e[4] *= i, e[5] *= i - }, gt.prototype.shear = function(t, i) { - var e = this.m, - r = e[0] + e[3] * i, - o = e[1] + e[4] * i, - n = e[2] + e[5] * i; - e[3] = e[0] * t + e[3], e[4] = e[1] * t + e[4], e[5] = e[2] * t + e[5], e[0] = r, e[1] = o, e[2] = n - }, gt.prototype.rotate = function(t) { - 
var i = this.m, - e = Math.cos(t), - r = Math.sin(t), - o = i[0] * e + i[3] * r, - n = i[1] * e + i[4] * r, - s = i[2] * e + i[5] * r; - i[3] = -i[0] * r + i[3] * e, i[4] = -i[1] * r + i[4] * e, i[5] = -i[2] * r + i[5] * e, i[0] = o, i[1] = n, i[2] = s - }, gt.prototype.concatenate = function(t) { - var i = this.m, - e = t.m, - r = i[0] * e[0] + i[3] * e[1] + i[6] * e[2], - o = i[1] * e[0] + i[4] * e[1] + i[7] * e[2], - n = i[2] * e[0] + i[5] * e[1] + i[8] * e[2], - s = i[0] * e[3] + i[3] * e[4] + i[6] * e[5], - _ = i[1] * e[3] + i[4] * e[4] + i[7] * e[5], - a = i[2] * e[3] + i[5] * e[4] + i[8] * e[5], - h = i[0] * e[6] + i[3] * e[7] + i[6] * e[8], - l = i[1] * e[6] + i[4] * e[7] + i[7] * e[8], - $ = i[2] * e[6] + i[5] * e[7] + i[8] * e[8]; - m[0] = r, m[1] = o, m[2] = n, m[3] = s, m[4] = _, m[5] = a, m[6] = h, m[7] = l, m[8] = $ - }, yt.prototype = new et, yt._$eT = null, yt._$tP = new Object, yt._$2o = function() { - return null == yt._$eT && (yt._$eT = yt.getID("DST_BASE")), yt._$eT - }, yt._$27 = function() { - yt._$tP.clear(), yt._$eT = null - }, yt.getID = function(t) { - var i = yt._$tP[t]; - return null == i && (i = new yt(t), yt._$tP[t] = i), i - }, yt.prototype._$3s = function() { - return new yt - }, mt.prototype = new E, mt._$9r = function(t) { - return new Float32Array(t) - }, mt._$vb = function(t) { - return new Int16Array(t) - }, mt._$cr = function(t, i) { - return null == t || t._$yL() < i.length ? (t = mt._$9r(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t - }, mt._$mb = function(t, i) { - return null == t || t._$yL() < i.length ? (t = mt._$vb(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t - }, mt._$Hs = function() { - return this._$Gr - }, mt._$as = function(t) { - this._$Gr = t - }, mt.prototype.getGL = function() { - return this.gl - }, mt.prototype.setGL = function(t) { - this.gl = t - }, mt.prototype.setTransform = function(t) { - this.transform = t - }, mt.prototype._$ZT = function() { - var t = this.gl; - this.firstDraw && (this.initShader(), this.firstDraw = !1, this.anisotropyExt = t.getExtension("EXT_texture_filter_anisotropic") || t.getExtension("WEBKIT_EXT_texture_filter_anisotropic") || t.getExtension("MOZ_EXT_texture_filter_anisotropic"), this.anisotropyExt && (this.maxAnisotropy = t.getParameter(this.anisotropyExt.MAX_TEXTURE_MAX_ANISOTROPY_EXT))), t.disable(t.SCISSOR_TEST), t.disable(t.STENCIL_TEST), t.disable(t.DEPTH_TEST), t.frontFace(t.CW), t.enable(t.BLEND), t.colorMask(1, 1, 1, 1), t.bindBuffer(t.ARRAY_BUFFER, null), t.bindBuffer(t.ELEMENT_ARRAY_BUFFER, null) - }, mt.prototype._$Uo = function(t, i, e, r, o, n, s, _) { - if (!(n < .01 && null == this.clipBufPre_clipContextMask)) { - var a = (n > .9 && at.EXPAND_W, this.gl); - if (null == this.gl) throw new Error("gl is null"); - var h = 1 * this._$C0 * n, - l = 1 * this._$tT * n, - $ = 1 * this._$WL * n, - u = this._$lT * n; - if (null != this.clipBufPre_clipContextMask) { - a.frontFace(a.CCW), a.useProgram(this.shaderProgram), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc), a.vertexAttribPointer(this.a_position_Loc, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc, 1), a.enableVertexAttribArray(this.a_texCoord_Loc), a.vertexAttribPointer(this.a_texCoord_Loc, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_matrix_Loc, !1, 
this.getClipBufPre_clipContextMask().matrixForMask); - var p = this.getClipBufPre_clipContextMask().layoutChannelNo, - f = this.getChannelFlagAsColor(p); - a.uniform4f(this.u_channelFlag, f.r, f.g, f.b, f.a); - var c = this.getClipBufPre_clipContextMask().layoutBounds; - a.uniform4f(this.u_baseColor_Loc, 2 * c.x - 1, 2 * c.y - 1, 2 * c._$EL() - 1, 2 * c._$5T() - 1), a.uniform1i(this.u_maskFlag_Loc, !0) - } else if (null != this.getClipBufPre_clipContextDraw()) { - a.useProgram(this.shaderProgramOff), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc_Off), a.vertexAttribPointer(this.a_position_Loc_Off, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc_Off, 1), a.enableVertexAttribArray(this.a_texCoord_Loc_Off), a.vertexAttribPointer(this.a_texCoord_Loc_Off, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_clipMatrix_Loc_Off, !1, this.getClipBufPre_clipContextDraw().matrixForDraw), a.uniformMatrix4fv(this.u_matrix_Loc_Off, !1, this.matrix4x4), a.activeTexture(a.TEXTURE2), a.bindTexture(a.TEXTURE_2D, at.fTexture[this.glno]), a.uniform1i(this.s_texture1_Loc_Off, 2); - var p = this.getClipBufPre_clipContextDraw().layoutChannelNo, - f = this.getChannelFlagAsColor(p); - a.uniform4f(this.u_channelFlag_Loc_Off, f.r, f.g, f.b, f.a), a.uniform4f(this.u_baseColor_Loc_Off, h, l, $, u) - } else a.useProgram(this.shaderProgram), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc), a.vertexAttribPointer(this.a_position_Loc, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc, 1), a.enableVertexAttribArray(this.a_texCoord_Loc), a.vertexAttribPointer(this.a_texCoord_Loc, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_matrix_Loc, !1, this.matrix4x4), a.uniform4f(this.u_baseColor_Loc, h, l, $, u), a.uniform1i(this.u_maskFlag_Loc, !1); - this.culling ? 
this.gl.enable(a.CULL_FACE) : this.gl.disable(a.CULL_FACE), this.gl.enable(a.BLEND); - var d, g, y, m; - if (null != this.clipBufPre_clipContextMask) d = a.ONE, g = a.ONE_MINUS_SRC_ALPHA, y = a.ONE, m = a.ONE_MINUS_SRC_ALPHA; - else switch (s) { - case $t._$ms: - d = a.ONE, g = a.ONE_MINUS_SRC_ALPHA, y = a.ONE, m = a.ONE_MINUS_SRC_ALPHA; - break; - case $t._$ns: - d = a.ONE, g = a.ONE, y = a.ZERO, m = a.ONE; - break; - case $t._$_s: - d = a.DST_COLOR, g = a.ONE_MINUS_SRC_ALPHA, y = a.ZERO, m = a.ONE - } - a.blendEquationSeparate(a.FUNC_ADD, a.FUNC_ADD), a.blendFuncSeparate(d, g, y, m), this.anisotropyExt && a.texParameteri(a.TEXTURE_2D, this.anisotropyExt.TEXTURE_MAX_ANISOTROPY_EXT, this.maxAnisotropy); - var T = e.length; - a.drawElements(a.TRIANGLES, T, a.UNSIGNED_SHORT, 0), a.bindTexture(a.TEXTURE_2D, null) - } - }, mt.prototype._$Rs = function() { - throw new Error("_$Rs") - }, mt.prototype._$Ds = function(t) { - throw new Error("_$Ds") - }, mt.prototype._$K2 = function() { - for (var t = 0; t < this.textures.length; t++) { - 0 != this.textures[t] && (this.gl._$K2(1, this.textures, t), this.textures[t] = null) - } - }, mt.prototype.setTexture = function(t, i) { - this.textures[t] = i - }, mt.prototype.initShader = function() { - var t = this.gl; - this.loadShaders2(), this.a_position_Loc = t.getAttribLocation(this.shaderProgram, "a_position"), this.a_texCoord_Loc = t.getAttribLocation(this.shaderProgram, "a_texCoord"), this.u_matrix_Loc = t.getUniformLocation(this.shaderProgram, "u_mvpMatrix"), this.s_texture0_Loc = t.getUniformLocation(this.shaderProgram, "s_texture0"), this.u_channelFlag = t.getUniformLocation(this.shaderProgram, "u_channelFlag"), this.u_baseColor_Loc = t.getUniformLocation(this.shaderProgram, "u_baseColor"), this.u_maskFlag_Loc = t.getUniformLocation(this.shaderProgram, "u_maskFlag"), this.a_position_Loc_Off = t.getAttribLocation(this.shaderProgramOff, "a_position"), this.a_texCoord_Loc_Off = t.getAttribLocation(this.shaderProgramOff, "a_texCoord"), this.u_matrix_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_mvpMatrix"), this.u_clipMatrix_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_ClipMatrix"), this.s_texture0_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "s_texture0"), this.s_texture1_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "s_texture1"), this.u_channelFlag_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_channelFlag"), this.u_baseColor_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_baseColor") - }, mt.prototype.disposeShader = function() { - var t = this.gl; - this.shaderProgram && (t.deleteProgram(this.shaderProgram), this.shaderProgram = null), this.shaderProgramOff && (t.deleteProgram(this.shaderProgramOff), this.shaderProgramOff = null) - }, mt.prototype.compileShader = function(t, i) { - var e = this.gl, - r = i, - o = e.createShader(t); - if (null == o) return _._$Ji("_$L0 to create shader"), null; - if (e.shaderSource(o, r), e.compileShader(o), !e.getShaderParameter(o, e.COMPILE_STATUS)) { - var n = e.getShaderInfoLog(o); - return _._$Ji("_$L0 to compile shader : " + n), e.deleteShader(o), null - } - return o - }, mt.prototype.loadShaders2 = function() { - var t = this.gl; - if (this.shaderProgram = t.createProgram(), !this.shaderProgram) return !1; - if (this.shaderProgramOff = t.createProgram(), !this.shaderProgramOff) return !1; - if (this.vertShader = this.compileShader(t.VERTEX_SHADER, "attribute vec4 a_position;attribute vec2 a_texCoord;varying vec2 v_texCoord;varying vec4 
v_ClipPos;uniform mat4 u_mvpMatrix;void main(){ gl_Position = u_mvpMatrix * a_position; v_ClipPos = u_mvpMatrix * a_position; v_texCoord = a_texCoord;}"), !this.vertShader) return _._$Ji("Vertex shader compile _$li!"), !1; - if (this.vertShaderOff = this.compileShader(t.VERTEX_SHADER, "attribute vec4 a_position;attribute vec2 a_texCoord;varying vec2 v_texCoord;varying vec4 v_ClipPos;uniform mat4 u_mvpMatrix;uniform mat4 u_ClipMatrix;void main(){ gl_Position = u_mvpMatrix * a_position; v_ClipPos = u_ClipMatrix * a_position; v_texCoord = a_texCoord ;}"), !this.vertShaderOff) return _._$Ji("OffVertex shader compile _$li!"), !1; - if (this.fragShader = this.compileShader(t.FRAGMENT_SHADER, "precision mediump float;varying vec2 v_texCoord;varying vec4 v_ClipPos;uniform sampler2D s_texture0;uniform vec4 u_channelFlag;uniform vec4 u_baseColor;uniform bool u_maskFlag;void main(){ vec4 smpColor; if(u_maskFlag){ float isInside = step(u_baseColor.x, v_ClipPos.x/v_ClipPos.w) * step(u_baseColor.y, v_ClipPos.y/v_ClipPos.w) * step(v_ClipPos.x/v_ClipPos.w, u_baseColor.z) * step(v_ClipPos.y/v_ClipPos.w, u_baseColor.w); smpColor = u_channelFlag * texture2D(s_texture0 , v_texCoord).a * isInside; }else{ smpColor = texture2D(s_texture0 , v_texCoord) * u_baseColor; } gl_FragColor = smpColor;}"), !this.fragShader) return _._$Ji("Fragment shader compile _$li!"), !1; - if (this.fragShaderOff = this.compileShader(t.FRAGMENT_SHADER, "precision mediump float ;varying vec2 v_texCoord;varying vec4 v_ClipPos;uniform sampler2D s_texture0;uniform sampler2D s_texture1;uniform vec4 u_channelFlag;uniform vec4 u_baseColor ;void main(){ vec4 col_formask = texture2D(s_texture0, v_texCoord) * u_baseColor; vec4 clipMask = texture2D(s_texture1, v_ClipPos.xy / v_ClipPos.w) * u_channelFlag; float maskVal = clipMask.r + clipMask.g + clipMask.b + clipMask.a; col_formask = col_formask * maskVal; gl_FragColor = col_formask;}"), !this.fragShaderOff) return _._$Ji("OffFragment shader compile _$li!"), !1; - if (t.attachShader(this.shaderProgram, this.vertShader), t.attachShader(this.shaderProgram, this.fragShader), t.attachShader(this.shaderProgramOff, this.vertShaderOff), t.attachShader(this.shaderProgramOff, this.fragShaderOff), t.linkProgram(this.shaderProgram), t.linkProgram(this.shaderProgramOff), !t.getProgramParameter(this.shaderProgram, t.LINK_STATUS)) { - var i = t.getProgramInfoLog(this.shaderProgram); - return _._$Ji("_$L0 to link program: " + i), this.vertShader && (t.deleteShader(this.vertShader), this.vertShader = 0), this.fragShader && (t.deleteShader(this.fragShader), this.fragShader = 0), this.shaderProgram && (t.deleteProgram(this.shaderProgram), this.shaderProgram = 0), this.vertShaderOff && (t.deleteShader(this.vertShaderOff), this.vertShaderOff = 0), this.fragShaderOff && (t.deleteShader(this.fragShaderOff), this.fragShaderOff = 0), this.shaderProgramOff && (t.deleteProgram(this.shaderProgramOff), this.shaderProgramOff = 0), !1 - } - return !0 - }, mt.prototype.createFramebuffer = function() { - var t = this.gl, - i = at.clippingMaskBufferSize, - e = t.createFramebuffer(); - t.bindFramebuffer(t.FRAMEBUFFER, e); - var r = t.createRenderbuffer(); - t.bindRenderbuffer(t.RENDERBUFFER, r), t.renderbufferStorage(t.RENDERBUFFER, t.RGBA4, i, i), t.framebufferRenderbuffer(t.FRAMEBUFFER, t.COLOR_ATTACHMENT0, t.RENDERBUFFER, r); - var o = t.createTexture(); - return t.bindTexture(t.TEXTURE_2D, o), t.texImage2D(t.TEXTURE_2D, 0, t.RGBA, i, i, 0, t.RGBA, t.UNSIGNED_BYTE, null), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_MIN_FILTER, 
t.LINEAR), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_MAG_FILTER, t.LINEAR), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_WRAP_S, t.CLAMP_TO_EDGE), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_WRAP_T, t.CLAMP_TO_EDGE), t.framebufferTexture2D(t.FRAMEBUFFER, t.COLOR_ATTACHMENT0, t.TEXTURE_2D, o, 0), t.bindTexture(t.TEXTURE_2D, null), t.bindRenderbuffer(t.RENDERBUFFER, null), t.bindFramebuffer(t.FRAMEBUFFER, null), at.fTexture[this.glno] = o, { - framebuffer: e, - renderbuffer: r, - texture: at.fTexture[this.glno] - } - }, St.prototype._$fP = function() { - var t, i, e, r = this._$ST(); - if (0 == (128 & r)) return 255 & r; - if (0 == (128 & (t = this._$ST()))) return (127 & r) << 7 | 127 & t; - if (0 == (128 & (i = this._$ST()))) return (127 & r) << 14 | (127 & t) << 7 | 255 & i; - if (0 == (128 & (e = this._$ST()))) return (127 & r) << 21 | (127 & t) << 14 | (127 & i) << 7 | 255 & e; - throw new lt("_$L _$0P _") - }, St.prototype.getFormatVersion = function() { - return this._$S2 - }, St.prototype._$gr = function(t) { - this._$S2 = t - }, St.prototype._$3L = function() { - return this._$fP() - }, St.prototype._$mP = function() { - return this._$zT(), this._$F += 8, this._$T.getFloat64(this._$F - 8) - }, St.prototype._$_T = function() { - return this._$zT(), this._$F += 4, this._$T.getFloat32(this._$F - 4) - }, St.prototype._$6L = function() { - return this._$zT(), this._$F += 4, this._$T.getInt32(this._$F - 4) - }, St.prototype._$ST = function() { - return this._$zT(), this._$T.getInt8(this._$F++) - }, St.prototype._$9T = function() { - return this._$zT(), this._$F += 2, this._$T.getInt16(this._$F - 2) - }, St.prototype._$2T = function() { - throw this._$zT(), this._$F += 8, new lt("_$L _$q read long") - }, St.prototype._$po = function() { - return this._$zT(), 0 != this._$T.getInt8(this._$F++) - }; - var xt = !0; - St.prototype._$bT = function() { - this._$zT(); - var t = this._$3L(), - i = null; - if (xt) try { - var e = new ArrayBuffer(2 * t); - i = new Uint16Array(e); - for (var r = 0; r < t; ++r) i[r] = this._$T.getUint8(this._$F++); - return String.fromCharCode.apply(null, i) - } catch (t) { - xt = !1 - } - try { - var o = new Array; - if (null == i) for (var r = 0; r < t; ++r) o[r] = this._$T.getUint8(this._$F++); - else for (var r = 0; r < t; ++r) o[r] = i[r]; - return String.fromCharCode.apply(null, o) - } catch (t) { - console.log("read utf8 / _$rT _$L0 !! 
: " + t) - } - }, St.prototype._$cS = function() { - this._$zT(); - for (var t = this._$3L(), i = new Int32Array(t), e = 0; e < t; e++) i[e] = this._$T.getInt32(this._$F), this._$F += 4; - return i - }, St.prototype._$Tb = function() { - this._$zT(); - for (var t = this._$3L(), i = new Float32Array(t), e = 0; e < t; e++) i[e] = this._$T.getFloat32(this._$F), this._$F += 4; - return i - }, St.prototype._$5b = function() { - this._$zT(); - for (var t = this._$3L(), i = new Float64Array(t), e = 0; e < t; e++) i[e] = this._$T.getFloat64(this._$F), this._$F += 8; - return i - }, St.prototype._$nP = function() { - return this._$Jb(-1) - }, St.prototype._$Jb = function(t) { - if (this._$zT(), t < 0 && (t = this._$3L()), t == G._$7P) { - var i = this._$6L(); - if (0 <= i && i < this._$Ko.length) return this._$Ko[i]; - throw new lt("_$sL _$4i @_$m0") - } - var e = this._$4b(t); - return this._$Ko.push(e), e - }, St.prototype._$4b = function(t) { - if (0 == t) return null; - if (50 == t) { - var i = this._$bT(), - e = b.getID(i); - return e - } - if (51 == t) { - var i = this._$bT(), - e = yt.getID(i); - return e - } - if (134 == t) { - var i = this._$bT(), - e = l.getID(i); - return e - } - if (60 == t) { - var i = this._$bT(), - e = u.getID(i); - return e - } - if (t >= 48) { - var r = G._$9o(t); - return null != r ? (r._$F0(this), r) : null - } - switch (t) { - case 1: - return this._$bT(); - case 10: - return new n(this._$6L(), !0); - case 11: - return new S(this._$mP(), this._$mP(), this._$mP(), this._$mP()); - case 12: - return new S(this._$_T(), this._$_T(), this._$_T(), this._$_T()); - case 13: - return new L(this._$mP(), this._$mP()); - case 14: - return new L(this._$_T(), this._$_T()); - case 15: - for (var o = this._$3L(), e = new Array(o), s = 0; s < o; s++) e[s] = this._$nP(); - return e; - case 17: - var e = new F(this._$mP(), this._$mP(), this._$mP(), this._$mP(), this._$mP(), this._$mP()); - return e; - case 21: - return new h(this._$6L(), this._$6L(), this._$6L(), this._$6L()); - case 22: - return new pt(this._$6L(), this._$6L()); - case 23: - throw new Error("_$L _$ro "); - case 16: - case 25: - return this._$cS(); - case 26: - return this._$5b(); - case 27: - return this._$Tb(); - case 2: - case 3: - case 4: - case 5: - case 6: - case 7: - case 8: - case 9: - case 18: - case 19: - case 20: - case 24: - case 28: - throw new lt("_$6 _$q : _$nP() of 2-9 ,18,19,20,24,28 : " + t); - default: - throw new lt("_$6 _$q : _$nP() NO _$i : " + t) - } - }, St.prototype._$8L = function() { - return 0 == this._$hL ? 
this._$v0 = this._$ST() : 8 == this._$hL && (this._$v0 = this._$ST(), this._$hL = 0), 1 == (this._$v0 >> 7 - this._$hL++ & 1) - }, St.prototype._$zT = function() { - 0 != this._$hL && (this._$hL = 0) - }, vt.prototype._$wP = function(t, i, e) { - for (var r = 0; r < e; r++) { - for (var o = 0; o < i; o++) { - var n = 2 * (o + r * i); - console.log("(% 7.3f , % 7.3f) , ", t[n], t[n + 1]) - } - console.log("\n") - } - console.log("\n") - }, Lt._$2S = Math.PI / 180, Lt._$bS = Math.PI / 180, Lt._$wS = 180 / Math.PI, Lt._$NS = 180 / Math.PI, Lt.PI_F = Math.PI, Lt._$kT = [0, .012368, .024734, .037097, .049454, .061803, .074143, .086471, .098786, .111087, .12337, .135634, .147877, .160098, .172295, .184465, .196606, .208718, .220798, .232844, .244854, .256827, .268761, .280654, .292503, .304308, .316066, .327776, .339436, .351044, .362598, .374097, .385538, .396921, .408243, .419502, .430697, .441826, .452888, .463881, .474802, .485651, .496425, .507124, .517745, .528287, .538748, .549126, .559421, .56963, .579752, .589785, .599728, .609579, .619337, .629, .638567, .648036, .657406, .666676, .675843, .684908, .693867, .70272, .711466, .720103, .72863, .737045, .745348, .753536, .76161, .769566, .777405, .785125, .792725, .800204, .807561, .814793, .821901, .828884, .835739, .842467, .849066, .855535, .861873, .868079, .874153, .880093, .885898, .891567, .897101, .902497, .907754, .912873, .917853, .922692, .92739, .931946, .936359, .940629, .944755, .948737, .952574, .956265, .959809, .963207, .966457, .96956, .972514, .97532, .977976, .980482, .982839, .985045, .987101, .989006, .990759, .992361, .993811, .995109, .996254, .997248, .998088, .998776, .999312, .999694, .999924, 1], Lt._$92 = function(t, i) { - var e = Math.atan2(t[1], t[0]), - r = Math.atan2(i[1], i[0]); - return Lt._$tS(e, r) - }, Lt._$tS = function(t, i) { - for (var e = t - i; e < -Math.PI;) e += 2 * Math.PI; - for (; e > Math.PI;) e -= 2 * Math.PI; - return e - }, Lt._$9 = function(t) { - return Math.sin(t) - }, Lt.fcos = function(t) { - return Math.cos(t) - }, Mt.prototype._$u2 = function() { - return this._$IS[0] - }, Mt.prototype._$yo = function() { - return this._$AT && !this._$IS[0] - }, Mt.prototype._$GT = function() { - return this._$e0 - }, Et._$W2 = 0, Et.SYSTEM_INFO = null, Et.USER_AGENT = navigator.userAgent, Et.isIPhone = function() { - return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone - }, Et.isIOS = function() { - return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone || Et.SYSTEM_INFO._isIPad - }, Et.isAndroid = function() { - return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isAndroid - }, Et.getOSVersion = function() { - return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO.version - }, Et.getOS = function() { - return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone || Et.SYSTEM_INFO._isIPad ? "iOS" : Et.SYSTEM_INFO._isAndroid ? 
"Android" : "_$Q0 OS" - }, Et.setup = function() { - function t(t, i) { - for (var e = t.substring(i).split(/[ _,;\.]/), r = 0, o = 0; o <= 2 && !isNaN(e[o]); o++) { - var n = parseInt(e[o]); - if (n < 0 || n > 999) { - _._$li("err : " + n + " @UtHtml5.setup()"), r = 0; - break - } - r += n * Math.pow(1e3, 2 - o) - } - return r - } - var i, e = Et.USER_AGENT, - r = Et.SYSTEM_INFO = { - userAgent: e - }; - if ((i = e.indexOf("iPhone OS ")) >= 0) r.os = "iPhone", r._isIPhone = !0, r.version = t(e, i + "iPhone OS ".length); - else if ((i = e.indexOf("iPad")) >= 0) { - if ((i = e.indexOf("CPU OS")) < 0) return void _._$li(" err : " + e + " @UtHtml5.setup()"); - r.os = "iPad", r._isIPad = !0, r.version = t(e, i + "CPU OS ".length) - } else(i = e.indexOf("Android")) >= 0 ? (r.os = "Android", r._isAndroid = !0, r.version = t(e, i + "Android ".length)) : (r.os = "-", r.version = -1) - }, window.UtSystem = w, window.UtDebug = _, window.LDTransform = gt, window.LDGL = nt, window.Live2D = at, window.Live2DModelWebGL = ft, window.Live2DModelJS = q, window.Live2DMotion = J, window.MotionQueueManager = ct, window.PhysicsHair = f, window.AMotion = s, window.PartsDataID = l, window.DrawDataID = b, window.BaseDataID = yt, window.ParamID = u, at.init(); - var At = !1 - }() - }).call(i, e(7)) -}, function(t, i) { - t.exports = { - import: function() { - throw new Error("System.import cannot be used indirectly") - } - } -}, function(t, i, e) { - "use strict"; - - function r(t) { - return t && t.__esModule ? t : { - default: - t - } - } - function o() { - this.models = [], this.count = -1, this.reloadFlg = !1, Live2D.init(), n.Live2DFramework.setPlatformManager(new _. - default) - } - Object.defineProperty(i, "__esModule", { - value: !0 - }), i. -default = o; - var n = e(0), - s = e(9), - _ = r(s), - a = e(10), - h = r(a), - l = e(1), - $ = r(l); - o.prototype.createModel = function() { - var t = new h. - default; - return this.models.push(t), t - }, o.prototype.changeModel = function(t, i) { - if (this.reloadFlg) { - this.reloadFlg = !1; - this.releaseModel(0, t), this.createModel(), this.models[0].load(t, i) - } - }, o.prototype.getModel = function(t) { - return t >= this.models.length ? null : this.models[t] - }, o.prototype.releaseModel = function(t, i) { - this.models.length <= t || (this.models[t].release(i), delete this.models[t], this.models.splice(t, 1)) - }, o.prototype.numModels = function() { - return this.models.length - }, o.prototype.setDrag = function(t, i) { - for (var e = 0; e < this.models.length; e++) this.models[e].setDrag(t, i) - }, o.prototype.maxScaleEvent = function() { - $. - default.DEBUG_LOG && console.log("Max scale event."); - for (var t = 0; t < this.models.length; t++) this.models[t].startRandomMotion($. - default.MOTION_GROUP_PINCH_IN, $. - default.PRIORITY_NORMAL) - }, o.prototype.minScaleEvent = function() { - $. - default.DEBUG_LOG && console.log("Min scale event."); - for (var t = 0; t < this.models.length; t++) this.models[t].startRandomMotion($. - default.MOTION_GROUP_PINCH_OUT, $. - default.PRIORITY_NORMAL) - }, o.prototype.tapEvent = function(t, i) { - $. - default.DEBUG_LOG && console.log("tapEvent view x:" + t + " y:" + i); - for (var e = 0; e < this.models.length; e++) this.models[e].hitTest($. - default.HIT_AREA_HEAD, t, i) ? ($. - default.DEBUG_LOG && console.log("Tap face."), this.models[e].setRandomExpression()): - this.models[e].hitTest($. - default.HIT_AREA_BODY, t, i) ? ($. - default.DEBUG_LOG && console.log("Tap body. 
models[" + e + "]"), this.models[e].startRandomMotion($. - default.MOTION_GROUP_TAP_BODY, $. - default.PRIORITY_NORMAL)) : this.models[e].hitTestCustom("head", t, i) ? ($. - default.DEBUG_LOG && console.log("Tap face."), this.models[e].startRandomMotion($. - default.MOTION_GROUP_FLICK_HEAD, $. - default.PRIORITY_NORMAL)) : this.models[e].hitTestCustom("body", t, i) && ($. - default.DEBUG_LOG && console.log("Tap body. models[" + e + "]"), this.models[e].startRandomMotion($. - default.MOTION_GROUP_TAP_BODY, $. - default.PRIORITY_NORMAL)); - return !0 - } -}, function(t, i, e) { - "use strict"; - - function r() {} - Object.defineProperty(i, "__esModule", { - value: !0 - }), i. -default = r; - var o = e(2); - var requestCache = {}; - r.prototype.loadBytes = function(t, i) { - // Cache 相同的请求,减少请求数量 - if (requestCache[t] !== undefined) { - i(requestCache[t]); - return; - } - var e = new XMLHttpRequest; - e.open("GET", t, !0), e.responseType = "arraybuffer", e.onload = function() { - switch (e.status) { - case 200: - requestCache[t] = e.response; - i(e.response); - break; - default: - console.error("Failed to load (" + e.status + ") : " + t) - } - }, e.send(null) - }, r.prototype.loadString = function(t) { - this.loadBytes(t, function(t) { - return t - }) - }, r.prototype.loadLive2DModel = function(t, i) { - var e = null; - this.loadBytes(t, function(t) { - e = Live2DModelWebGL.loadModel(t), i(e) - }) - }, r.prototype.loadTexture = function(t, i, e, r) { - var n = new Image; - n.crossOrigin = "Anonymous", n.src = e; - n.onload = function() { - var e = (0, o.getContext)(), - s = e.createTexture(); - if (!s) return console.error("Failed to generate gl texture name."), -1; - 0 == t.isPremultipliedAlpha() && e.pixelStorei(e.UNPACK_PREMULTIPLY_ALPHA_WEBGL, 1), e.pixelStorei(e.UNPACK_FLIP_Y_WEBGL, 1), e.activeTexture(e.TEXTURE0), e.bindTexture(e.TEXTURE_2D, s), e.texImage2D(e.TEXTURE_2D, 0, e.RGBA, e.RGBA, e.UNSIGNED_BYTE, n), e.texParameteri(e.TEXTURE_2D, e.TEXTURE_MAG_FILTER, e.LINEAR), e.texParameteri(e.TEXTURE_2D, e.TEXTURE_MIN_FILTER, e.LINEAR_MIPMAP_NEAREST), e.generateMipmap(e.TEXTURE_2D), t.setTexture(i, s), s = null, "function" == typeof r && r() - }, n.onerror = function() { - console.error("Failed to load image : " + e) - } - }, r.prototype.jsonParseFromBytes = function(t) { - var i, e = new Uint8Array(t, 0, 3); - return i = 239 == e[0] && 187 == e[1] && 191 == e[2] ? String.fromCharCode.apply(null, new Uint8Array(t, 3)) : String.fromCharCode.apply(null, new Uint8Array(t)), JSON.parse(i) - }, r.prototype.log = function(t) {} -}, function(t, i, e) { - "use strict"; - - function r(t) { - return t && t.__esModule ? t : { - default: - t - } - } - function o() { - n.L2DBaseModel.prototype.constructor.call(this), this.modelHomeDir = "", this.modelSetting = null, this.tmpMatrix = [] - } - Object.defineProperty(i, "__esModule", { - value: !0 - }), i. -default = o; - var n = e(0), - s = e(11), - _ = r(s), - a = e(1), - h = r(a), - l = e(3), - $ = r(l); - o.prototype = new n.L2DBaseModel, o.prototype.load = function(t, i, e) { - this.setUpdating(!0), this.setInitialized(!1), this.modelHomeDir = i.substring(0, i.lastIndexOf("/") + 1), this.modelSetting = new _. 
- default; - var r = this; - this.modelSetting.loadModelSetting(i, function() { - var t = r.modelHomeDir + r.modelSetting.getModelFile(); - r.loadModelData(t, function(t) { - for (var i = 0; i < r.modelSetting.getTextureNum(); i++) { - if (/^https?:\/\/|^\/\//i.test(r.modelSetting.getTextureFile(i))) var o = r.modelSetting.getTextureFile(i); - else var o = r.modelHomeDir + r.modelSetting.getTextureFile(i); - r.loadTexture(i, o, function() { - if (r.isTexLoaded) { - if (r.modelSetting.getExpressionNum() > 0) { - r.expressions = {}; - for (var t = 0; t < r.modelSetting.getExpressionNum(); t++) { - var i = r.modelSetting.getExpressionName(t), - o = r.modelHomeDir + r.modelSetting.getExpressionFile(t); - r.loadExpression(i, o) - } - } else r.expressionManager = null, r.expressions = {}; - if (r.eyeBlink, null != r.modelSetting.getPhysicsFile() ? r.loadPhysics(r.modelHomeDir + r.modelSetting.getPhysicsFile()) : r.physics = null, null != r.modelSetting.getPoseFile() ? r.loadPose(r.modelHomeDir + r.modelSetting.getPoseFile(), function() { - r.pose.updateParam(r.live2DModel) - }) : r.pose = null, null != r.modelSetting.getLayout()) { - var n = r.modelSetting.getLayout(); - null != n.width && r.modelMatrix.setWidth(n.width), null != n.height && r.modelMatrix.setHeight(n.height), null != n.x && r.modelMatrix.setX(n.x), null != n.y && r.modelMatrix.setY(n.y), null != n.center_x && r.modelMatrix.centerX(n.center_x), null != n.center_y && r.modelMatrix.centerY(n.center_y), null != n.top && r.modelMatrix.top(n.top), null != n.bottom && r.modelMatrix.bottom(n.bottom), null != n.left && r.modelMatrix.left(n.left), null != n.right && r.modelMatrix.right(n.right) - } - if (null != r.modelSetting.getHitAreasCustom()) { - var s = r.modelSetting.getHitAreasCustom(); - null != s.head_x && (h. - default.hit_areas_custom_head_x = s.head_x), null != s.head_y && (h. - default.hit_areas_custom_head_y = s.head_y), null != s.body_x && (h. - default.hit_areas_custom_body_x = s.body_x), null != s.body_y && (h. - default.hit_areas_custom_body_y = s.body_y) - } - for (var t = 0; t < r.modelSetting.getInitParamNum(); t++) r.live2DModel.setParamFloat(r.modelSetting.getInitParamID(t), r.modelSetting.getInitParamValue(t)); - for (var t = 0; t < r.modelSetting.getInitPartsVisibleNum(); t++) r.live2DModel.setPartsOpacity(r.modelSetting.getInitPartsVisibleID(t), r.modelSetting.getInitPartsVisibleValue(t)); - r.live2DModel.saveParam(), r.preloadMotionGroup(h. - default.MOTION_GROUP_IDLE), r.preloadMotionGroup(h. - default.MOTION_GROUP_SLEEPY), r.mainMotionManager.stopAllMotions(), r.setUpdating(!1), r.setInitialized(!0), "function" == typeof e && e() - } - }) - } - }) - }) - }, o.prototype.release = function(t) { - var i = n.Live2DFramework.getPlatformManager(); - t.deleteTexture(i.texture) - }, o.prototype.preloadMotionGroup = function(t) { - for (var i = this, e = 0; e < this.modelSetting.getMotionNum(t); e++) { - var r = this.modelSetting.getMotionFile(t, e); - this.loadMotion(r, this.modelHomeDir + r, function(r) { - r.setFadeIn(i.modelSetting.getMotionFadeIn(t, e)), r.setFadeOut(i.modelSetting.getMotionFadeOut(t, e)) - }) - } - }, o.prototype.update = function() { - if (null == this.live2DModel) return void(h. - default.DEBUG_LOG && console.error("Failed to update.")); - var t = UtSystem.getUserTimeMSec() - this.startTimeMSec, - i = t / 1e3, - e = 2 * i * Math.PI; - if (this.mainMotionManager.isFinished()) { - "1" === sessionStorage.getItem("Sleepy") ? this.startRandomMotion(h. - default.MOTION_GROUP_SLEEPY, h. 
- default.PRIORITY_SLEEPY) : this.startRandomMotion(h. - default.MOTION_GROUP_IDLE, h. - default.PRIORITY_IDLE) - } - this.live2DModel.loadParam(), this.mainMotionManager.updateParam(this.live2DModel) || null != this.eyeBlink && this.eyeBlink.updateParam(this.live2DModel), this.live2DModel.saveParam(), null == this.expressionManager || null == this.expressions || this.expressionManager.isFinished() || this.expressionManager.updateParam(this.live2DModel), this.live2DModel.addToParamFloat("PARAM_ANGLE_X", 30 * this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_Y", 30 * this.dragY, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_Z", this.dragX * this.dragY * -30, 1), this.live2DModel.addToParamFloat("PARAM_BODY_ANGLE_X", 10 * this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_EYE_BALL_X", this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_EYE_BALL_Y", this.dragY, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_X", Number(15 * Math.sin(e / 6.5345)), .5), this.live2DModel.addToParamFloat("PARAM_ANGLE_Y", Number(8 * Math.sin(e / 3.5345)), .5), this.live2DModel.addToParamFloat("PARAM_ANGLE_Z", Number(10 * Math.sin(e / 5.5345)), .5), this.live2DModel.addToParamFloat("PARAM_BODY_ANGLE_X", Number(4 * Math.sin(e / 15.5345)), .5), this.live2DModel.setParamFloat("PARAM_BREATH", Number(.5 + .5 * Math.sin(e / 3.2345)), 1), null != this.physics && this.physics.updateParam(this.live2DModel), null == this.lipSync && this.live2DModel.setParamFloat("PARAM_MOUTH_OPEN_Y", this.lipSyncValue), null != this.pose && this.pose.updateParam(this.live2DModel), this.live2DModel.update() - }, o.prototype.setRandomExpression = function() { - var t = []; - for (var i in this.expressions) t.push(i); - var e = parseInt(Math.random() * t.length); - this.setExpression(t[e]) - }, o.prototype.startRandomMotion = function(t, i) { - var e = this.modelSetting.getMotionNum(t), - r = parseInt(Math.random() * e); - this.startMotion(t, r, i) - }, o.prototype.startMotion = function(t, i, e) { - var r = this.modelSetting.getMotionFile(t, i); - if (null == r || "" == r) return void(h. - default.DEBUG_LOG && console.error("Failed to motion.")); - if (e == h. - default.PRIORITY_FORCE) this.mainMotionManager.setReservePriority(e); - else if (!this.mainMotionManager.reserveMotion(e)) return void(h. - default.DEBUG_LOG && console.log("Motion is running.")); - var o, n = this; - null == this.motions[t] ? this.loadMotion(null, this.modelHomeDir + r, function(r) { - o = r, n.setFadeInFadeOut(t, i, e, o) - }) : (o = this.motions[t], n.setFadeInFadeOut(t, i, e, o)) - }, o.prototype.setFadeInFadeOut = function(t, i, e, r) { - var o = this.modelSetting.getMotionFile(t, i); - if (r.setFadeIn(this.modelSetting.getMotionFadeIn(t, i)), r.setFadeOut(this.modelSetting.getMotionFadeOut(t, i)), h. - default.DEBUG_LOG && console.log("Start motion : " + o), null == this.modelSetting.getMotionSound(t, i)) this.mainMotionManager.startMotionPrio(r, e); - else { - var n = this.modelSetting.getMotionSound(t, i), - s = document.createElement("audio"); - s.src = this.modelHomeDir + n, h. - default.DEBUG_LOG && console.log("Start sound : " + n), s.play(), this.mainMotionManager.startMotionPrio(r, e) - } - }, o.prototype.setExpression = function(t) { - var i = this.expressions[t]; - h. - default.DEBUG_LOG && console.log("Expression : " + t), this.expressionManager.startMotion(i, !1) - }, o.prototype.draw = function(t) { - $. - default.push(), $. - default.multMatrix(this.modelMatrix.getArray()), this.tmpMatrix = $. 
- default.getMatrix(), this.live2DModel.setMatrix(this.tmpMatrix), this.live2DModel.draw(), $. - default.pop() - }, o.prototype.hitTest = function(t, i, e) { - for (var r = this.modelSetting.getHitAreaNum(), o = 0; o < r; o++) if (t == this.modelSetting.getHitAreaName(o)) { - var n = this.modelSetting.getHitAreaID(o); - return this.hitTestSimple(n, i, e) - } - return !1 - }, o.prototype.hitTestCustom = function(t, i, e) { - return "head" == t ? this.hitTestSimpleCustom(h. - default.hit_areas_custom_head_x, h. - default.hit_areas_custom_head_y, i, e) : "body" == t && this.hitTestSimpleCustom(h. - default.hit_areas_custom_body_x, h. - default.hit_areas_custom_body_y, i, e) - } -}, function(t, i, e) { - "use strict"; - - function r() { - this.NAME = "name", this.ID = "id", this.MODEL = "model", this.TEXTURES = "textures", this.HIT_AREAS = "hit_areas", this.PHYSICS = "physics", this.POSE = "pose", this.EXPRESSIONS = "expressions", this.MOTION_GROUPS = "motions", this.SOUND = "sound", this.FADE_IN = "fade_in", this.FADE_OUT = "fade_out", this.LAYOUT = "layout", this.HIT_AREAS_CUSTOM = "hit_areas_custom", this.INIT_PARAM = "init_param", this.INIT_PARTS_VISIBLE = "init_parts_visible", this.VALUE = "val", this.FILE = "file", this.json = {} - } - Object.defineProperty(i, "__esModule", { - value: !0 - }), i. -default = r; - var o = e(0); - r.prototype.loadModelSetting = function(t, i) { - var e = this; - o.Live2DFramework.getPlatformManager().loadBytes(t, function(t) { - var r = String.fromCharCode.apply(null, new Uint8Array(t)); - e.json = JSON.parse(r), i() - }) - }, r.prototype.getTextureFile = function(t) { - return null == this.json[this.TEXTURES] || null == this.json[this.TEXTURES][t] ? null : this.json[this.TEXTURES][t] - }, r.prototype.getModelFile = function() { - return this.json[this.MODEL] - }, r.prototype.getTextureNum = function() { - return null == this.json[this.TEXTURES] ? 0 : this.json[this.TEXTURES].length - }, r.prototype.getHitAreaNum = function() { - return null == this.json[this.HIT_AREAS] ? 0 : this.json[this.HIT_AREAS].length - }, r.prototype.getHitAreaID = function(t) { - return null == this.json[this.HIT_AREAS] || null == this.json[this.HIT_AREAS][t] ? null : this.json[this.HIT_AREAS][t][this.ID] - }, r.prototype.getHitAreaName = function(t) { - return null == this.json[this.HIT_AREAS] || null == this.json[this.HIT_AREAS][t] ? null : this.json[this.HIT_AREAS][t][this.NAME] - }, r.prototype.getPhysicsFile = function() { - return this.json[this.PHYSICS] - }, r.prototype.getPoseFile = function() { - return this.json[this.POSE] - }, r.prototype.getExpressionNum = function() { - return null == this.json[this.EXPRESSIONS] ? 0 : this.json[this.EXPRESSIONS].length - }, r.prototype.getExpressionFile = function(t) { - return null == this.json[this.EXPRESSIONS] ? null : this.json[this.EXPRESSIONS][t][this.FILE] - }, r.prototype.getExpressionName = function(t) { - return null == this.json[this.EXPRESSIONS] ? null : this.json[this.EXPRESSIONS][t][this.NAME] - }, r.prototype.getLayout = function() { - return this.json[this.LAYOUT] - }, r.prototype.getHitAreasCustom = function() { - return this.json[this.HIT_AREAS_CUSTOM] - }, r.prototype.getInitParamNum = function() { - return null == this.json[this.INIT_PARAM] ? 0 : this.json[this.INIT_PARAM].length - }, r.prototype.getMotionNum = function(t) { - return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] ? 
0 : this.json[this.MOTION_GROUPS][t].length - }, r.prototype.getMotionFile = function(t, i) { - return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] ? null : this.json[this.MOTION_GROUPS][t][i][this.FILE] - }, r.prototype.getMotionSound = function(t, i) { - return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.SOUND] ? null : this.json[this.MOTION_GROUPS][t][i][this.SOUND] - }, r.prototype.getMotionFadeIn = function(t, i) { - return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.FADE_IN] ? 1e3 : this.json[this.MOTION_GROUPS][t][i][this.FADE_IN] - }, r.prototype.getMotionFadeOut = function(t, i) { - return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.FADE_OUT] ? 1e3 : this.json[this.MOTION_GROUPS][t][i][this.FADE_OUT] - }, r.prototype.getInitParamID = function(t) { - return null == this.json[this.INIT_PARAM] || null == this.json[this.INIT_PARAM][t] ? null : this.json[this.INIT_PARAM][t][this.ID] - }, r.prototype.getInitParamValue = function(t) { - return null == this.json[this.INIT_PARAM] || null == this.json[this.INIT_PARAM][t] ? NaN : this.json[this.INIT_PARAM][t][this.VALUE] - }, r.prototype.getInitPartsVisibleNum = function() { - return null == this.json[this.INIT_PARTS_VISIBLE] ? 0 : this.json[this.INIT_PARTS_VISIBLE].length - }, r.prototype.getInitPartsVisibleID = function(t) { - return null == this.json[this.INIT_PARTS_VISIBLE] || null == this.json[this.INIT_PARTS_VISIBLE][t] ? null : this.json[this.INIT_PARTS_VISIBLE][t][this.ID] - }, r.prototype.getInitPartsVisibleValue = function(t) { - return null == this.json[this.INIT_PARTS_VISIBLE] || null == this.json[this.INIT_PARTS_VISIBLE][t] ? NaN : this.json[this.INIT_PARTS_VISIBLE][t][this.VALUE] - } -}]); -//# sourceMappingURL=live2d.js.map diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Artlantis 5 X64 With Crack Keygen UPD.md b/spaces/quidiaMuxgu/Expedit-SAM/Artlantis 5 X64 With Crack Keygen UPD.md deleted file mode 100644 index 74b80395a7de769cae90bc92ad1970c62f5da822..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Artlantis 5 X64 With Crack Keygen UPD.md +++ /dev/null @@ -1,40 +0,0 @@ -
-

Artlantis 5 x64 with crack keygen - How to Download and Use the Best 3D Rendering Software

- -

Artlantis 5 is a powerful and versatile 3D rendering software that can create stunning images and animations from any 3D model. It is specially designed for architects and designers who need to present their projects in a realistic and professional way. Artlantis 5 can work with any 3D modeling software, such as ArchiCAD, Revit, SketchUp, or AutoCAD. It has a user-friendly interface, a real-time preview window, and a rich library of materials, textures, lights, and objects.

- -

However, Artlantis 5 is not a cheap software. It costs around $1000 for a single license, which can be too expensive for some users. Moreover, it requires an activation code and an internet connection to run properly. If you want to use Artlantis 5 without paying or activating it, you need to crack it with a keygen. A keygen is a program that can generate valid serial numbers or activation codes for any software. By using a keygen, you can bypass the security measures of Artlantis 5 and use it for free.

-

Artlantis 5 x64 with crack keygen


DOWNLOAD ✸✸✸ https://geags.com/2uCr7k



- -

How to Download Artlantis 5 x64 with crack keygen

- -

To download Artlantis 5 x64 with crack keygen, you need to follow these steps:

- -
    -
  1. Find a reliable source for downloading the cracked version of Artlantis 5. You can use torrent sites like The Pirate Bay or direct download sites like haxNode or YASIR252 to find links to cracked versions of Artlantis 5.
  2. -
  3. Download the cracked version of Artlantis 5 that suits your system requirements. Make sure it includes the original software installer and the keygen program.
  4. -
  5. Extract the files from the downloaded archive using a program like WinRAR or 7-Zip.
  6. -
  7. Run the installer of Artlantis 5 and follow the instructions. When prompted for an activation code, do not enter anything and click Next.
  8. -
  9. Run the keygen program and copy the generated serial number or activation code.
  10. -
  11. Paste the serial number or activation code into the activation window of Artlantis 5 and click Activate.
  12. -
  13. Enjoy Artlantis 5 x64 with crack keygen!
  14. -
- -

Why Use Artlantis 5 x64 with crack keygen

- -

There are many benefits of using Artlantis 5 x64 with crack keygen. Here are some of them:

- -
    -
  • You can use Artlantis 5 without paying or activating it.
  • -
  • You can use Artlantis 5 offline without an internet connection.
  • -
  • You can use any version or feature of Artlantis 5 that you want.
  • -
  • You can create amazing 3D renderings and animations with Artlantis 5.
  • -
  • You can impress your clients and colleagues with your professional presentations.
  • -
- -

Conclusion

- -

Artlantis 5 is a great 3D rendering software that can help you create realistic and stunning images and animations from any 3D model. However, if you don't want to spend money or activate it online, you can download Artlantis 5 x64 with crack keygen and use it for free. All you need is a reliable source for downloading the cracked version of Artlantis 5 and a keygen program that can generate valid serial numbers or activation codes. You can then install and activate Artlantis 5 without any hassle and enjoy its full potential.

-

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Autodata340crack.md b/spaces/quidiaMuxgu/Expedit-SAM/Autodata340crack.md deleted file mode 100644 index 9c02274900e0d8f76481cd26b4de22a6365134c7..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Autodata340crack.md +++ /dev/null @@ -1,19 +0,0 @@ -

autodata340crack


Download ►►►►► https://geags.com/2uCsD4



- -15, 2021 — autodata340crack. you yourself play in Solo mode. The goal is to clear each a. 1 / 3 ... Bioshock Infinite version 1.1.25.5165+all ... Bios-Setup.ru - free download BIOS-Setup.ru ... -16 Aug 2017 ... -Bios-Setup.ru - free download BIOS-Setup.ru - free download BIOS-Setup.ru (Bios Seatru). -Download bios setup.ru, on the download page ... -20 Jul 2011 ... -Bios-Setup.ru - free download BIOS-Setup.ru, download Bios-Setup.ru for free. -Windows download ... -Oct 21, 2011 ... -Free download. -Windows free download. -Description: Bios-Setup.ru - free download BIOS-Setup.ru. -Download Bios-Setup.ru ... -27 Apr 2011 ... -Bios- 8a78ff9644
-
-
-

diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Chandni Chowk To China 3 Movie Download Hd Mp4.md b/spaces/quidiaMuxgu/Expedit-SAM/Chandni Chowk To China 3 Movie Download Hd Mp4.md deleted file mode 100644 index a464d74b6084fb633316fd6e7b827a26cb8e768f..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Chandni Chowk To China 3 Movie Download Hd Mp4.md +++ /dev/null @@ -1,18 +0,0 @@ -

Chandni Chowk To China 3 movie download hd mp4


DOWNLOAD ———>>> https://geags.com/2uCraB



-
-Cast: Vishal Bhardwaj, Manoj Bajpayee, Annapurna Bhatt, Shalini Chatterjee, Dev Anand, Indu Sarkar, Farooq Sheikh, Rajkummar Rao. With: Govind Namdeo, Vishal Pandya, Arshad Warsi, Ashish Vidyarthi, Manoj Pahwa - -The story is set in a village with mostly untouchable castes and a Brahmin higher caste. Traditionally, the Brahmin caste is expected to give a woman in marriage before the others, but the custom is very old and the higher caste considers it to be a stigma. During the season of Deepavali, a Brahmin upper caste woman is brought to the village for marriage. She is rescued by a few lower caste people who help her return home. - -The screenplay was written by Radha Krishna Chandravanshi, who also produced the film. He came up with the idea for the film after reading Ludo Millet's novel, which was originally titled Legend of the King. His screenplay uses Millet's tale as a jumping-off point, but he expanded the story by adding several new characters and scenes. He wanted to showcase "a more positive story that wouldn't be accepted by society", so he made it an underdog coming-of-age story set in the context of a rebellion against the upper caste. It deals with issues such as casteism, gender discrimination, and forced marriages. Vishal Bhardwaj worked on the film's production design, with his team designing the settings, props, costumes, and set-ups for the film. The soundtrack was composed by Vishal Bhardwaj. The film opened to mixed reviews from critics and audiences, but was a commercial success. - -His next film, Queen, was released in 2019. The film was inspired by the story of Sangita, the daughter of Jahnu and Jevat. She wanted to become a singer, and was obsessed with it. While practicing, she ran into her master's son, who eventually became her husband. It was also set in the backdrop of the villages in the 1970s, but this time, in the context of a rebellion against the state. It follows the tale of Jahnu, a tiller and an oppressed Dalit man, and sets it against a backdrop of the rural poverty faced by the oppressed. - -Filmography - -Awards and nominations - -See also 4fefd39f24
-
-
-

diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Dr Fone For Ios Registration Crack ((FREE)).md b/spaces/quidiaMuxgu/Expedit-SAM/Dr Fone For Ios Registration Crack ((FREE)).md deleted file mode 100644 index 46030f672a2e496365f9d068c5e57e86e0c8bd2a..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Dr Fone For Ios Registration Crack ((FREE)).md +++ /dev/null @@ -1,6 +0,0 @@ -

Dr Fone For Ios Registration Crack


Download File »»» https://geags.com/2uCsyn



-
-There are two versions of dr fone cracked with registration key free or ... It is compatible to work on windows, mac, android, and ios versions of ... 4d29de3e1b
-
-
-

diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Kanavu Meipada Vendum Pdf Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Kanavu Meipada Vendum Pdf Download.md deleted file mode 100644 index 242c9cbff8121f5fa52c2daf5e78968c9d542004..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Kanavu Meipada Vendum Pdf Download.md +++ /dev/null @@ -1,28 +0,0 @@ -

kanavu meipada vendum pdf download


Download ✒ ✒ ✒ https://geags.com/2uCqqV



-
-A Different Answer A short story about a young kid in an asylum. - -They hate each other, and hate her and her father even more, as they want to be. Can she find a different answer? Find great deals on eBay for Different Answers by Laurie Halse Anderson. Shop with confidence. - -A short story about a young kid in an asylum. · A Short Story by Laurie Halse Anderson. Easy to navigate, find your way through the maze of pages in the best design, high quality paper. - -Different Answer - -Rated 5 out of 5 by MissMakesIt True to her genre, every word is important. The name could have been better, but it was still a good story. There are a number of plot holes, but nothing that dramatically affects the story. - -Contact Addresses Please contact me with any questions or problems you may have. - -Teachers a short story about a young kid in an asylum. An asylum under a tall cliff. October It is a day like any other, except there are no people in the town. No one is waiting in line at the gas station. No one is queuing at the post office. - -No one is playing in the park. No one is sitting in the coffee shop. There are no families in the café. There are no mothers sitting in the library or grade school, no fathers in high school or college. - -There is no one at the restaurant or the market. There are no friends and neighbors chatting. A short story about a young kid in an asylum. A Short Story by Laurie Halse Anderson. - -There are no street vendors or school crossing guards, no police, no drug dealers. Everyone in the town is missing. Everyone but the residents. Who are the residents? Someone called the nurse. He said a boy had cut his arm. The nurse sent for a doctor. She brought a doctor and a policeman. - -The doctor cut the boy open. He looked at the boy's brain, heart, lungs, liver. The police looked at the boy. The policeman took a picture of the boy with his cell phone. All the time the boy was lying there, lying on a table, covered with white cloths, his arms were cut open. - -The doctor and the policeman talked quietly among themselves. They didn't talk to each other. But they were talking in the room, so I knew they were talking. They found out that the boy had fallen from the top of the cliff. 4fefd39f24
-
-
-

diff --git a/spaces/r3gm/Advanced-RVC-Inference/lib/infer_pack/attentions.py b/spaces/r3gm/Advanced-RVC-Inference/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Advanced-RVC-Inference/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: 
encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." 
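# The band mask built below keeps only positions within `block_length` steps of the query.
# For example, torch.ones(4, 4).triu(-1).tril(1) (block_length=1) gives:
#   [[1, 1, 0, 0],
#    [1, 1, 1, 0],
#    [0, 1, 1, 1],
#    [0, 0, 1, 1]]
# so each query attends only to its immediate neighbours; masked scores are filled with
# -1e4 before the softmax.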
- block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/r3gm/RVC_HF/julius/fftconv.py b/spaces/r3gm/RVC_HF/julius/fftconv.py deleted file mode 100644 index 1920e5369bb49b76eeea1832b7be2a0ddbc8db6b..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/julius/fftconv.py +++ /dev/null @@ -1,183 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 - -""" -Implementation of a FFT based 1D convolution in PyTorch. -While FFT is used in CUDNN for small kernel sizes, it is not the case for long ones, e.g. 512. -This module implements efficient FFT based convolutions for such convolutions. A typical -application is for evaluationg FIR filters with a long receptive field, typically -evaluated with a stride of 1. -""" -from typing import Optional - -import torch -try: - import torch.fft as new_fft -except ImportError: - new_fft = None # type: ignore -from torch.nn import functional as F - -from .core import pad_to, unfold -from .utils import simple_repr - - -# This is quite verbose, but sadly needed to make TorchScript happy. 
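# Usage sketch (illustrative only, assuming this file is importable as `julius.fftconv`):
# for stride 1 and a long kernel, `fft_conv1d` defined below is intended as a drop-in
# replacement for `torch.nn.functional.conv1d`.
#
#     import torch
#     from torch.nn import functional as F
#     from julius.fftconv import fft_conv1d
#
#     x = torch.randn(2, 3, 4096)   # input  [B, C, T]
#     w = torch.randn(5, 3, 512)    # weight [D, C, K]; long kernels are where FFT pays off
#     ref = F.conv1d(x, w)          # shape [2, 5, 3585]
#     out = fft_conv1d(x, w)        # same shape, matches `ref` up to float32 round-off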
-def _new_rfft(x: torch.Tensor): - z = new_fft.rfft(x, dim=-1) - return torch.view_as_real(z) - - -def _old_rfft(x: torch.Tensor): - return torch.rfft(x, 1) # type: ignore - - -def _old_irfft(x: torch.Tensor, length: int): - result = torch.irfft(x, 1, signal_sizes=(length,)) # type: ignore - return result - - -def _new_irfft(x: torch.Tensor, length: int): - x = torch.view_as_complex(x) - return new_fft.irfft(x, length, dim=-1) - - -if new_fft is None: - _rfft = _old_rfft - _irfft = _old_irfft -else: - _rfft = _new_rfft - _irfft = _new_irfft - - -def _compl_mul_conjugate(a: torch.Tensor, b: torch.Tensor): - """ - Given a and b two tensors of dimension 4 - with the last dimension being the real and imaginary part, - returns a multiplied by the conjugate of b, the multiplication - being with respect to the second dimension. - - """ - # PyTorch 1.7 supports complex number, but not for all operations. - # Once the support is widespread, this can likely go away. - - op = "bcft,dct->bdft" - return torch.stack([ - torch.einsum(op, a[..., 0], b[..., 0]) + torch.einsum(op, a[..., 1], b[..., 1]), - torch.einsum(op, a[..., 1], b[..., 0]) - torch.einsum(op, a[..., 0], b[..., 1]) - ], - dim=-1) - - -def fft_conv1d( - input: torch.Tensor, weight: torch.Tensor, - bias: Optional[torch.Tensor] = None, stride: int = 1, padding: int = 0, - block_ratio: float = 5): - """ - Same as `torch.nn.functional.conv1d` but using FFT for the convolution. - Please check PyTorch documentation for more information. - - Args: - input (Tensor): input signal of shape `[B, C, T]`. - weight (Tensor): weight of the convolution `[D, C, K]` with `D` the number - of output channels. - bias (Tensor or None): if not None, bias term for the convolution. - stride (int): stride of convolution. - padding (int): padding to apply to the input. - block_ratio (float): can be tuned for speed. The input is splitted in chunks - with a size of `int(block_ratio * kernel_size)`. - - Shape: - - - Inputs: `input` is `[B, C, T]`, `weight` is `[D, C, K]` and bias is `[D]`. - - Output: `(*, T)` - - - ..note:: - This function is faster than `torch.nn.functional.conv1d` only in specific cases. - Typically, the kernel size should be of the order of 256 to see any real gain, - for a stride of 1. - - ..Warning:: - Dilation and groups are not supported at the moment. This function might use - more memory than the default Conv1d implementation. - """ - input = F.pad(input, (padding, padding)) - batch, channels, length = input.shape - out_channels, _, kernel_size = weight.shape - - if length < kernel_size: - raise RuntimeError(f"Input should be at least as large as the kernel size {kernel_size}, " - f"but it is only {length} samples long.") - if block_ratio < 1: - raise RuntimeError("Block ratio must be greater than 1.") - - # We are going to process the input blocks by blocks, as for some reason it is faster - # and less memory intensive (I think the culprit is `torch.einsum`. - block_size: int = min(int(kernel_size * block_ratio), length) - fold_stride = block_size - kernel_size + 1 - weight = pad_to(weight, block_size) - weight_z = _rfft(weight) - - # We pad the input and get the different frames, on which - frames = unfold(input, block_size, fold_stride) - - frames_z = _rfft(frames) - out_z = _compl_mul_conjugate(frames_z, weight_z) - out = _irfft(out_z, block_size) - # The last bit is invalid, because FFT will do a circular convolution. 
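# Worked example of the trim below: with kernel_size = 4 and block_size = 8, each frame
# yields 8 circular-correlation outputs, but only the first
# block_size - kernel_size + 1 = 5 equal the linear convolution; the last
# kernel_size - 1 = 3 samples wrap around and are dropped. Because fold_stride is also
# block_size - kernel_size + 1, concatenating the valid parts of consecutive frames
# reconstructs the full stride-1 output.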
- out = out[..., :-kernel_size + 1] - out = out.reshape(batch, out_channels, -1) - out = out[..., ::stride] - target_length = (length - kernel_size) // stride + 1 - out = out[..., :target_length] - if bias is not None: - out += bias[:, None] - return out - - -class FFTConv1d(torch.nn.Module): - """ - Same as `torch.nn.Conv1d` but based on `fft_conv1d`. - Please check PyTorch documentation for more information. - - Args: - in_channels (int): number of input channels. - out_channels (int): number of output channels. - kernel_size (int): kernel size of convolution. - stride (int): stride of convolution. - padding (int): padding to apply to the input. - bias (bool): if True, use a bias term. - - ..note:: - This module is faster than `torch.nn.Conv1d` only in specific cases. - Typically, `kernel_size` should be of the order of 256 to see any real gain, - for a stride of 1. - - ..warning:: - Dilation and groups are not supported at the moment. This module might use - more memory than the default Conv1d implementation. - - >>> fftconv = FFTConv1d(12, 24, 128, 4) - >>> x = torch.randn(4, 12, 1024) - >>> print(list(fftconv(x).shape)) - [4, 24, 225] - """ - def __init__(self, in_channels: int, out_channels: int, kernel_size: int, - stride: int = 1, padding: int = 0, bias: bool = True): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.stride = stride - self.padding = padding - - conv = torch.nn.Conv1d(in_channels, out_channels, kernel_size, bias=bias) - self.weight = conv.weight - self.bias = conv.bias - - def forward(self, input: torch.Tensor): - return fft_conv1d( - input, self.weight, self.bias, self.stride, self.padding) - - def __repr__(self): - return simple_repr(self, overrides={"bias": self.bias is not None}) diff --git a/spaces/radames/MusicGen-Continuation/audiocraft/quantization/vq.py b/spaces/radames/MusicGen-Continuation/audiocraft/quantization/vq.py deleted file mode 100644 index f67c3a0cd30d4b8993a36c587f00dc8a451d926f..0000000000000000000000000000000000000000 --- a/spaces/radames/MusicGen-Continuation/audiocraft/quantization/vq.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp - -import torch - -from .base import BaseQuantizer, QuantizedResult -from .core_vq import ResidualVectorQuantization - - -class ResidualVectorQuantizer(BaseQuantizer): - """Residual Vector Quantizer. - - Args: - dimension (int): Dimension of the codebooks. - n_q (int): Number of residual vector quantizers used. - q_dropout (bool): Random quantizer drop out at train time. - bins (int): Codebook size. - decay (float): Decay for exponential moving average over the codebooks. - kmeans_init (bool): Whether to use kmeans to initialize the codebooks. - kmeans_iters (int): Number of iterations used for kmeans initialization. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. - orthogonal_reg_weight (float): Orthogonal regularization weights. - orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes. - orthogonal_reg_max_codes (optional int): Maximum number of codes to consider. 
- for orthogonal regulariation. - """ - def __init__( - self, - dimension: int = 256, - n_q: int = 8, - q_dropout: bool = False, - bins: int = 1024, - decay: float = 0.99, - kmeans_init: bool = True, - kmeans_iters: int = 10, - threshold_ema_dead_code: int = 2, - orthogonal_reg_weight: float = 0.0, - orthogonal_reg_active_codes_only: bool = False, - orthogonal_reg_max_codes: tp.Optional[int] = None, - ): - super().__init__() - self.max_n_q = n_q - self.n_q = n_q - self.q_dropout = q_dropout - self.dimension = dimension - self.bins = bins - self.decay = decay - self.kmeans_init = kmeans_init - self.kmeans_iters = kmeans_iters - self.threshold_ema_dead_code = threshold_ema_dead_code - self.orthogonal_reg_weight = orthogonal_reg_weight - self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only - self.orthogonal_reg_max_codes = orthogonal_reg_max_codes - self.vq = ResidualVectorQuantization( - dim=self.dimension, - codebook_size=self.bins, - num_quantizers=self.n_q, - decay=self.decay, - kmeans_init=self.kmeans_init, - kmeans_iters=self.kmeans_iters, - threshold_ema_dead_code=self.threshold_ema_dead_code, - orthogonal_reg_weight=self.orthogonal_reg_weight, - orthogonal_reg_active_codes_only=self.orthogonal_reg_active_codes_only, - orthogonal_reg_max_codes=self.orthogonal_reg_max_codes, - channels_last=False - ) - - def forward(self, x: torch.Tensor, frame_rate: int): - n_q = self.n_q - if self.training and self.q_dropout: - n_q = int(torch.randint(1, self.n_q + 1, (1,)).item()) - bw_per_q = math.log2(self.bins) * frame_rate / 1000 - quantized, codes, commit_loss = self.vq(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - bw = torch.tensor(n_q * bw_per_q).to(x) - return QuantizedResult(quantized, codes, bw, penalty=torch.mean(commit_loss)) - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified frame rate at the given bandwidth. - The RVQ encode method sets the appropriate number of quantizer to use - and returns indices for each quantizer. - """ - n_q = self.n_q - codes = self.vq.encode(x, n_q=n_q) - codes = codes.transpose(0, 1) - # codes is [B, K, T], with T frames, K nb of codebooks. - return codes - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - """ - # codes is [B, K, T], with T frames, K nb of codebooks, vq.decode expects [K, B, T]. 
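# For example, with a batch of 2 items, n_q = 8 codebooks and 150 frames, `codes`
# arrives here as [2, 8, 150]; the transpose below reorders it to [8, 2, 150] so that
# each residual stage can look up its own codebook in turn.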
- codes = codes.transpose(0, 1) - quantized = self.vq.decode(codes) - return quantized - - @property - def total_codebooks(self): - return self.max_n_q - - @property - def num_codebooks(self): - return self.n_q - - def set_num_codebooks(self, n: int): - assert n > 0 and n <= self.max_n_q - self.n_q = n diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/models/networks/latent_transformer.py b/spaces/radames/UserControllableLT-Latent-Transformer/models/networks/latent_transformer.py deleted file mode 100644 index 7aaafff337495af1495071fee3ac3514bd308e5e..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/models/networks/latent_transformer.py +++ /dev/null @@ -1,162 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn, einsum -from einops import rearrange - -# classes -class PreNorm(nn.Module): - def __init__(self, dim, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.fn = fn - def forward(self, x, **kwargs): - return self.fn(self.norm(x), **kwargs) - -class FeedForward(nn.Module): - def __init__(self, dim, hidden_dim, dropout = 0.): - super().__init__() - self.net = nn.Sequential( - nn.Linear(dim, hidden_dim), - nn.GELU(), - nn.Dropout(dropout), - nn.Linear(hidden_dim, dim), - nn.Dropout(dropout) - ) - def forward(self, x): - return self.net(x) - -class Attention(nn.Module): - def __init__(self, dim, heads = 8, dim_head = 64, dropout = 0.): - super().__init__() - inner_dim = dim_head * heads - project_out = not (heads == 1 and dim_head == dim) - - self.heads = heads - self.scale = dim_head ** -0.5 - - self.attend = nn.Softmax(dim = -1) - self.to_qkv = nn.Linear(dim, inner_dim * 3, bias = False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, dim), - nn.Dropout(dropout) - ) if project_out else nn.Identity() - - def forward(self, x): - qkv = self.to_qkv(x).chunk(3, dim = -1) - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h = self.heads), qkv) - - dots = torch.matmul(q, k.transpose(-1, -2)) * self.scale - - attn = self.attend(dots) - - out = torch.matmul(attn, v) - out = rearrange(out, 'b h n d -> b n (h d)') - return self.to_out(out) - -class CrossAttention(nn.Module): - def __init__(self, dim, heads = 8, dim_head = 64, dropout = 0.): - super().__init__() - inner_dim = dim_head * heads - project_out = not (heads == 1 and dim_head == dim) - - self.heads = heads - self.scale = dim_head ** -0.5 - - self.to_k = nn.Linear(dim, inner_dim , bias=False) - self.to_v = nn.Linear(dim, inner_dim , bias = False) - self.to_q = nn.Linear(dim, inner_dim, bias = False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, dim), - nn.Dropout(dropout) - ) if project_out else nn.Identity() - - def forward(self, x_qkv, query_length=1): - h = self.heads - - k = self.to_k(x_qkv)[:, query_length:] - k = rearrange(k, 'b n (h d) -> b h n d', h = h) - - v = self.to_v(x_qkv)[:, query_length:] - v = rearrange(v, 'b n (h d) -> b h n d', h = h) - - q = self.to_q(x_qkv)[:, :query_length] - q = rearrange(q, 'b n (h d) -> b h n d', h = h) - - dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale - - attn = dots.softmax(dim=-1) - - out = einsum('b h i j, b h j d -> b h i d', attn, v) - out = rearrange(out, 'b h n d -> b n (h d)') - out = self.to_out(out) - - return out - -class TransformerEncoder(nn.Module): - def __init__(self, dim, depth, heads, dim_head, mlp_dim, dropout = 0.): - super().__init__() - self.layers = nn.ModuleList([]) - for _ in range(depth): - 
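# Each encoder layer appended below pairs a pre-norm self-attention block with a
# pre-norm feed-forward block; forward() wraps both in residual connections
# (x = attn(x) + x, then x = ff(x) + x).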
self.layers.append(nn.ModuleList([ - PreNorm(dim, Attention(dim, heads = heads, dim_head = dim_head, dropout = dropout)), - PreNorm(dim, FeedForward(dim, mlp_dim, dropout = dropout)) - ])) - def forward(self, x): - for attn, ff in self.layers: - x = attn(x) + x - x = ff(x) + x - return x - -class TransformerDecoder(nn.Module): - def __init__(self, dim, depth, heads, dim_head, mlp_dim, dropout = 0.): - super().__init__() - self.pos_embedding = nn.Parameter(torch.randn(1, 6, dim)) - self.layers = nn.ModuleList([]) - for _ in range(depth): - self.layers.append(nn.ModuleList([ - PreNorm(dim, Attention(dim, heads = heads, dim_head = dim_head, dropout = dropout)), - PreNorm(dim, CrossAttention(dim, heads = heads, dim_head = dim_head, dropout = dropout)), - PreNorm(dim, FeedForward(dim, mlp_dim, dropout = dropout)) - ])) - def forward(self, x, y): - x = x + self.pos_embedding[:, :x.shape[1]] - for sattn, cattn, ff in self.layers: - x = sattn(x) + x - xy = torch.cat((x,y), dim=1) - x = cattn(xy, query_length=x.shape[1]) + x - x = ff(x) + x - return x - -class Network(nn.Module): - def __init__(self, opts): - super(Network, self).__init__() - - self.transformer_encoder = TransformerEncoder(dim=512, depth=6, heads=8, dim_head=64, mlp_dim=512, dropout=0) - self.transformer_decoder = TransformerDecoder(dim=512, depth=6, heads=8, dim_head=64, mlp_dim=512, dropout=0) - self.layer1 = nn.Linear(3, 256) - self.layer2 = nn.Linear(512, 256) - self.layer3 = nn.Linear(512, 512) - self.layer4 = nn.Linear(512, 512) - self.mlp_head = nn.Sequential( - nn.Linear(512, 512) - ) - - def forward(self, w, x, y, alpha=1.): - #w: latent vectors - #x: flow vectors - #y: StyleGAN features - xh = F.relu(self.layer1(x)) - yh = F.relu(self.layer2(y)) - xyh = torch.cat([xh,yh], dim=2) - xyh = F.relu(self.layer3(xyh)) - xyh = self.transformer_encoder(xyh) - - wh = F.relu(self.layer4(w)) - - h = self.transformer_decoder(wh, xyh) - h = self.mlp_head(h) - w_hat = w+alpha*h - return w_hat diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Bmw Inpa 720 Torrent How to Calibrate and Code ECU Variables with INPA Software.md b/spaces/raedeXanto/academic-chatgpt-beta/Bmw Inpa 720 Torrent How to Calibrate and Code ECU Variables with INPA Software.md deleted file mode 100644 index be771cba423dcebf17d229912c3f278f52c3e80f..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Bmw Inpa 720 Torrent How to Calibrate and Code ECU Variables with INPA Software.md +++ /dev/null @@ -1,100 +0,0 @@ -
- -

Pakov Svet: A Dark Comedy About Cops in a Crime-Ridden Neighborhood

-

If you are looking for a show that combines humor, drama, and crime, you might want to check out Pakov Svet. This Spanish series, also known as Los Hombres de Paco in its original language, follows the adventures and misadventures of a group of police officers in a dangerous district of Madrid. In this article, we will tell you everything you need to know about Pakov Svet, how to watch it online with subtitles, and why you should give it a try.

-

What is Pakov Svet?

-

Pakov Svet is a Spanish television series that aired from 2005 to 2010 on Antena 3. It has 117 episodes divided into nine seasons. The show was created by Daniel Écija and Álex Pina, who later became famous for producing other hit series like Money Heist and The Pier.

-

pakov svet download sa prevodom


Download File: https://tinourl.com/2uL30x



-

The plot and the characters of Pakov Svet

-

The main protagonist of Pakov Svet is Paco Miranda, a clumsy and naive inspector who works at the San Antonio police station. He is married to Lola, the daughter of his boss, Don Lorenzo, a strict and authoritarian commissioner. Paco's team consists of his two best friends, Lucas and Mariano, who are equally incompetent and unlucky. Together, they face all kinds of criminals, from drug dealers and murderers to terrorists and kidnappers.

-

Along the way, they also deal with their personal lives, which are full of drama, romance, and betrayal. Some of the recurring characters include Sara, Paco's niece who falls in love with Lucas; Silvia, Lola's sister who works as a forensic scientist; Povedilla, a geeky and loyal officer; Rita, a cheerful and optimistic secretary; Curtis, a clumsy and cowardly cop; Montoya, a handsome and arrogant detective; Pepa, a tough and rebellious policewoman; Aitor, a young and naive rookie; and many others.

-

The genre and the style of Pakov Svet

-

Pakov Svet is a comedy of situations that uses black humor and naturalistic scenes to depict the professional and personal experiences of a group of cops in a big city. The show mixes elements of action, drama, thriller, romance, and satire. It also breaks the fourth wall by having some characters address the audience directly or comment on the events of the show.

-

The tone of Pakov Svet varies depending on the season and the episode. Some episodes are more comedic and lighthearted, while others are more dramatic and intense. The show also evolves over time, becoming darker and more violent as the stakes get higher and the characters face more serious threats.

-

The popularity and the reception of Pakov Svet

-

Pakov Svet was a huge success in Spain, where it reached an average audience share of 20% and won several awards. It was also exported to other countries like France, Italy, Portugal, Greece, Turkey, Argentina, Chile, Mexico, Colombia, Peru, Venezuela, Ecuador, Costa Rica, Panama, and Serbia.

-

pakov svet sa prevodom online
-pakov svet nove epizode
-pakov svet glumci
-pakov svet komedija situacije
-pakov svet crni humor
-pakov svet policajci u velikom gradu
-pakov svet kvart sa visokim stepenom delinkvencije
-pakov svet epizoda 109
-pakov svet epizoda 108
-pakov svet epizoda 107
-pakov svet epizoda 106
-pakov svet epizoda 105
-pakov svet epizoda 104
-pakov svet epizoda 103
-pakov svet epizoda 102
-pakov svet epizoda 101
-pakov svet epizoda 100
-pakov svet epizoda 99
-pakov svet natabanu.com serija
-pakov svet facebook page
-pakov svet interest page transparency
-pakov svet majice sa stampom
-pakov svet igraonica grand-metroid
-pakov svet moj grad sremska mitrovica
-pakov svet dragi bravo
-pakov svet marko radeta musician/band
-pakov svet bltlly.com link
-gledajte pakov svet online besplatno
-kako skinuti pakov svet sa prevodom
-gde mogu da nadjem sve epizode pakovog sveta
-ko je pisao scenarij za pakov svet
-ko su glavni glumci u seriji pakov svet
-zasto se zove pakov svet serija
-kada je pocelo snimanje serije pakov svet
-kada je zavrseno snimanje serije pakov svet
-koliko ima ukupno epizoda serije pakov svet
-kakve su kritike i ocene za seriju pakov svet
-da li je serija pakov svet zasnovana na istinitim dogadjajima
-da li je serija pakov svet dostupna na netflixu ili amazon prime-u
-da li postoji nastavak ili spin-off serije pakov svet
-da li je serija pakov svet prevedena na druge jezike osim srpskog i engleskog
-da li je serija pakov svet zabranjena ili cenzurisana u nekim zemljama ili regionima
-da li je serija pakov svet dobila neke nagrade ili priznanja
-da li je serija pakov svet imala neke kontroverze ili skandale
-da li je serija pakov svet uticala na popularnost ili karijeru glumaca i scenarista
-da li je serija pakov svet inspirisala neke druge serije ili filmove
-da li je serija pakov svet imala neke specijalne efekte ili tehnicke inovacije
-da li je serija pakov svet imala neku posebnu muziku ili zvucnu podlogu
-da li je serija pakov svet imala neke gostujuce zvezde ili poznate licnosti

-

The show received mostly positive reviews from critics and viewers alike. It was praised for its originality, its humor, its action scenes, its plot twists, its social criticism, and its cast performance. Some of the most popular actors who starred in Pakov Svet are Hugo Silva, Michelle Jenner, Mario Casas, Adriana Ozores, Juan Diego, and Paco Tous.

-

How to watch Pakov Svet online with subtitles?

-

If you want to watch Pakov Svet online with subtitles, you have two options: legal or illegal. Both have their pros and cons that we will explain below.

-

The legal and the illegal ways to stream Pakov Svet

-

The legal way to stream Pakov Svet is to use a platform that has the rights to broadcast the show in your country. For example, in Spain, you can watch Pakov Svet on Atresplayer, the official website of Antena 3. You can also buy the DVD box sets of each season on Amazon or other online stores.

-

The illegal way to stream Pakov Svet is to use a website or an app that offers free downloads or streaming links of the show. These sites usually have a large catalog of movies and series from different countries and languages. They also provide subtitles in various formats.

-

The advantages and the disadvantages of each method

-

The advantage of using a legal platform is that you can enjoy a high-quality image and sound without interruptions or ads. You also support the creators and producers of the show by paying for their work.

-

The disadvantage of using a legal platform is that you may have to pay a subscription fee or register with your personal data. You also may not find all the seasons or episodes of the show available in your region.

-

The advantage of using an illegal website or app is that you can access all the content you want for free and without restrictions. You can also choose the subtitles you prefer.

-

The disadvantage of using an illegal website or app is that you may encounter low-quality videos or audio, broken links, malware, pop-ups, or legal issues. You also do not contribute to the sustainability of the show or respect its intellectual property rights.

-

The best websites and apps to download Pakov Svet with subtitles

-

If you decide to use an illegal method to watch Pakov Svet online with subtitles, you should be careful about which website or app you choose. Some of them may be unsafe or unreliable. To help you out, we have selected some of the best options for you:

- BalkanDownload: This is a Serbian website that offers downloads of movies, series, books, magazines, cartoons, games, music, and more. It has all the episodes of Pakov Svet with Serbian subtitles.
- Natabanu: This is another Serbian website that offers streaming links for movies, series, soap operas, documentaries, and shows from different countries. It has all the episodes of Pakov Svet with Serbian subtitles.
- Facebook: This is a social media platform that allows users to create pages or groups about their interests. There are several pages or groups dedicated to Pakov Svet where fans share links or videos of the show with subtitles in different languages.

Why should you watch Pakov Svet?

-

Pakov Svet is not just another cop show. It is a unique blend of comedy, drama, and crime that will keep you entertained for hours. Here are some reasons why you should watch it:

-

The reasons to enjoy Pakov Svet as a comedy fan

-
- You will laugh out loud at the hilarious situations that Paco and his team get into.
- You will appreciate the witty dialogues and the sarcastic comments that the characters make.
- You will love the absurd characters that populate the show.

    The reasons to appreciate Pakov Svet as a drama fan

    -
- You will get invested in the emotional stories and the relationships of the characters.
- You will witness the evolution and the growth of the characters as they face challenges and changes.
- You will feel the tension and the suspense of the dramatic scenes and the cliffhangers.
    -

    The reasons to admire Pakov Svet as a crime fan

    -
- You will be intrigued by the diverse and complex cases that Paco and his team have to solve.
- You will be surprised by the twists and the turns that the plot takes.
- You will enjoy the action and the thrill of the chase scenes and the shootouts.
    -

    Conclusion

    -

    Pakov Svet is a show that has something for everyone. It is a dark comedy that combines humor, drama, and crime in a captivating way. It is also a show that reflects on the social and political issues of Spain and the world. If you want to watch Pakov Svet online with subtitles, you have several options to choose from, depending on your preferences and your budget. We hope that this article has helped you to learn more about Pakov Svet and to decide whether to give it a chance or not.

    -

    A summary of the main points of the article

    -

    In this article, we have covered the following topics:

    -
- What is Pakov Svet?
- How to watch Pakov Svet online with subtitles?
- Why should you watch Pakov Svet?
    -

    A call to action for the readers

    -

    If you are interested in watching Pakov Svet online with subtitles, we invite you to visit one of the websites or apps that we have recommended in this article. You can also join one of the Facebook pages or groups that are dedicated to Pakov Svet and share your opinions and experiences with other fans. And if you liked this article, please share it with your friends and family who might also enjoy Pakov Svet.

**FAQs**

Q: What is the original title of Pakov Svet?
A: The original title of Pakov Svet is Los Hombres de Paco, which means Paco's Men in Spanish.

Q: How many seasons and episodes does Pakov Svet have?
A: Pakov Svet has nine seasons and 117 episodes.

Q: Who are the main actors of Pakov Svet?
A: The main actors of Pakov Svet are Paco Tous, Pepón Nieto, Carlos Santos, Michelle Jenner, Mario Casas, Hugo Silva, Adriana Ozores, Juan Diego, and many others.

Q: When did Pakov Svet air for the first time?
A: Pakov Svet aired for the first time on October 9, 2005 on Antena 3.

Q: Is there a remake or a spin-off of Pakov Svet?
A: Yes, there is a remake of Pakov Svet called Los Hombres de Paco: El Reencuentro, which means Paco's Men: The Reunion in Spanish. It is a four-episode miniseries that premiered on May 6, 2021 on Atresplayer Premium. It reunites some of the original cast members and follows their lives ten years after the end of the original series.

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Cadimage Tools Archi Cad 19 Crack The Ultimate Guide to Download Install and Activate.md b/spaces/raedeXanto/academic-chatgpt-beta/Cadimage Tools Archi Cad 19 Crack The Ultimate Guide to Download Install and Activate.md deleted file mode 100644 index d015b851ef8f114a654bdb12a5ed2ac11d8e7b3d..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Cadimage Tools Archi Cad 19 Crack The Ultimate Guide to Download Install and Activate.md +++ /dev/null @@ -1,134 +0,0 @@ -
    -

    Cadimage Tools ArchiCAD 19 Crack: What You Need to Know

    -

    If you are an architect or a designer who uses ArchiCAD 19, you might have heard of Cadimage Tools. These are a set of add-ons that enhance the functionality and performance of ArchiCAD. They allow you to create more realistic and detailed models, drawings and documents with less effort and time. But what if you don't have a license for these tools? Can you use a crack version instead? In this article, we will explain what Cadimage Tools are, how to install and use them with ArchiCAD 19, what are the benefits and risks of using them, and answer some common questions you might have.

    -

    cadimage tools archi cad 19 crack


    Download ☆☆☆ https://tinourl.com/2uL5AW



    -

    What are Cadimage Tools?

    -

    Cadimage is a company that develops and distributes tools to help architects and designers get the most out of ArchiCAD. They offer a range of tools that cover different aspects of architectural design, such as cabinetry, electrical, extrusions, coverings and stairs. These tools are designed to be intuitive, smart and flexible, so you can customize them according to your needs and preferences. They also integrate seamlessly with ArchiCAD, so you can use them as if they were part of the software.

    -

    Some of the features of Cadimage Tools include:

    -
      -
    • Design custom cabinetry with ease and accuracy using Cabinet Maker
    • -
    • Create smart electrical symbols and schedules using Electrical
    • -
    • Manipulate, edit and form your own objects in 3D using Extrusions
    • -
    • Apply a range of scalable claddings in 2D or 3D views using Coverings
    • -
    • Design, draw and edit stairs within your floor plan using Stairs
    • -
    -

    How to Install Cadimage Tools for ArchiCAD 19

    -

    If you have a license for Cadimage Tools, you can install them easily by following these steps:

    -

    cadimage tools for archi cad 19 free download with crack
    -how to install cadimage tools on archi cad 19 cracked version
    -cadimage tools archi cad 19 serial key generator
    -cadimage tools archi cad 19 license activation code
    -cadimage tools archi cad 19 full version download link
    -cadimage tools archi cad 19 torrent file
    -cadimage tools archi cad 19 patch file
    -cadimage tools archi cad 19 keygen software
    -cadimage tools archi cad 19 crack mac os
    -cadimage tools archi cad 19 crack windows 10
    -cadimage tools archi cad 19 crack linux
    -cadimage tools archi cad 19 system requirements
    -cadimage tools archi cad 19 features and benefits
    -cadimage tools archi cad 19 tutorials and guides
    -cadimage tools archi cad 19 reviews and ratings
    -cadimage tools archi cad 19 alternatives and competitors
    -cadimage tools archi cad 19 price and discounts
    -cadimage tools archi cad 19 support and customer service
    -cadimage tools archi cad 19 updates and upgrades
    -cadimage tools archi cad 19 bugs and issues
    -how to use cadimage tools in archi cad 19 projects
    -how to customize cadimage tools in archi cad 19 settings
    -how to access cadimage tools in archi cad 19 menu
    -how to uninstall cadimage tools from archi cad 19 software
    -how to backup and restore cadimage tools in archi cad 19 data
    -how to import and export cadimage tools in archi cad 19 files
    -how to troubleshoot and fix cadimage tools in archi cad 19 errors
    -how to optimize and improve performance of cadimage tools in archi cad 19 workflow
    -how to create and edit objects with cadimage tools in archi cad 19 design
    -how to add and remove materials with cadimage tools in archi cad 19 rendering
    -how to apply and adjust effects with cadimage tools in archi cad 19 visualization
    -how to generate and print reports with cadimage tools in archi cad 19 documentation
    -how to share and collaborate with others using cadimage tools in archi cad 19 cloud
    -how to learn and master skills with

    -
      -
    1. Go to www.myarchicad.com and log in with your credentials
    2. -
    3. Click on the Downloads tab and select Cadimage Tools for ARCHICAD 19
    4. -
    5. Download the installer file for your operating system (Windows or Mac)
    6. -
    7. Run the installer file and follow the instructions on the screen
    8. -
    9. Restart ArchiCAD 19 and enjoy using Cadimage Tools
    10. -
    -

    How to Use Cadimage Tools for ArchiCAD 19

    -

    Once you have installed Cadimage Tools for ArchiCAD 19, you can access them from the menu bar under Cadimage. Each tool has its own palette that contains various commands and options. You can also use keyboard shortcuts or toolbar icons to activate some of the tools. Here are some examples of how to use some of the most popular tools:

    -

    Cabinet Maker

    -

    Cabinet Maker allows you to design custom cabinetry with ease and accuracy. You can choose from different types of cabinets, such as base, wall, tall or corner units. You can also adjust the dimensions, materials, colors, handles, shelves, drawers and doors of each cabinet. You can also create your own library of cabinet styles that you can reuse in different projects.

    -

    To use Cabinet Maker:

    -
      -
    1. Select Cabinet Maker from the Cadimage menu or press Ctrl+Shift+C (Windows) or Command+Shift+C (Mac)
    2. -
    3. The Cabinet Maker palette will appear on the screen. Click on the New button to create a new cabinet
    4. -
    5. Select the type of cabinet you want from the drop-down menu (e.g., Base Unit)
    6. -
    7. Click on the Plan button to place the cabinet on your floor plan. You can drag it to adjust its position and orientation
    8. -
    9. Click on the Settings button to open the Cabinet Settings dialog box. Here you can modify various parameters of your cabinet, such as dimensions, materials, colors, handles, shelves, drawers and doors
    10. -
    11. Click OK when you are done. Your cabinet will be updated on your floor plan and in your 3D view
    12. -
    13. You can repeat these steps to create more cabinets or edit existing ones
    14. -
    -

    Electrical

    -

    Electrical allows you to create smart electrical symbols and schedules for your projects. You can choose from different categories of symbols, such as switches, outlets, lights or appliances. You can also customize the appearance, size and orientation of each symbol. You can also generate automatic schedules that list all the electrical items in your project.

    -

    To use Electrical:

    -
      -
    1. Select Electrical from the Cadimage menu or press Ctrl+Shift+E (Windows) or Command+Shift+E (Mac)
    2. -
    3. The Electrical palette will appear on the screen. Click on the New button to create a new electrical symbol
    4. -
    5. Select the category of symbol you want from the drop-down menu (e.g., Switches)
    6. -
    7. Select the type of symbol you want from the list (e.g., Single Pole Switch)
    8. -
    9. Click on the Plan button to place the symbol on your floor plan. You can drag it to adjust its position and orientation
    10. -
    11. Click on the Settings button to open the Symbol Settings dialog box. Here you can modify various parameters of your symbol, such as appearance, size and orientation
    12. -
    13. Click OK when you are done. Your symbol will be updated on your floor plan and in your 3D view
    14. -
    15. You can repeat these steps to create more symbols or edit existing ones
    16. -
    17. To generate an electrical schedule:
    18. -
        -
      1. Select Schedule from the Electrical palette or press Ctrl+Shift+S (Windows) or Command+Shift+S (Mac)
      2. -
      3. The Schedule Settings dialog box will appear on the screen. Here you can choose which categories of symbols you want to include in your schedule, as well as other options such as title, header row, column order and format
      4. -
      5. Click OK when you are done. Your schedule will be created on a new worksheet in your project
6. You can edit or update your schedule at any time by selecting it from the Project Map or by clicking on it on your worksheet. You can also export your schedule as a PDF or Excel file by selecting File > Save As... from the menu bar.

        Extrusions

        -

        Extrusions allows you to manipulate, edit and form your own objects in 3D. You can create complex shapes by extruding profiles along paths or by sweeping profiles around axes. You can also modify existing objects by adding or subtracting extrusions from them.

        -

        To use Extrusions:

        -
          -
        1. Select Extrusions from the Cadimage menu or press Ctrl+Shift+X (Windows) or Command+Shift+X (Mac)
        2. -
        3. The Extrusions palette will appear on the screen. Click on the New button to create a new extrusion
        4. -
        5. Select the profile you want to extrude from the drop-down menu (e.g., Rectangle)
        6. -
        7. Select the geometry you want to use for the extrusion from the drop-down menu (e.g., Straight)
        8. -
        9. Click on the Plan button to draw the path or axis of the extrusion on your floor plan. You can use any drawing tool, such as Line, Arc or Spline
        10. -
        11. Click on the Settings button to open the Extrusion Settings dialog box. Here you can modify various parameters of your extrusion, such as height, angle, offset and rotation
        12. -
        13. Click OK when you are done. Your extrusion will be created on your floor plan and in your 3D view
        14. -
        15. You can repeat these steps to create more extrusions or edit existing ones
        16. -
        17. To modify an existing object by adding or subtracting an extrusion:
        18. -
            -
          1. Select the object you want to modify and click on the Modify button on the Extrusions palette
          2. -
          3. Select the profile you want to use for the modification from the drop-down menu (e.g., Circle)
          4. -
          5. Select the geometry you want to use for the modification from the drop-down menu (e.g., Curved)
          6. -
          7. Click on the Plan button to draw the path or axis of the modification on your floor plan. You can use any drawing tool, such as Line, Arc or Spline
          8. -
          9. Click on the Settings button to open the Modification Settings dialog box. Here you can choose whether you want to add or subtract the extrusion from the object, as well as other parameters such as height, angle, offset and rotation
          10. -
          11. Click OK when you are done. Your object will be modified by the extrusion on your floor plan and in your 3D view
          12. -
          -

          Coverings

          -

          Coverings allows you to apply a range of scalable claddings in 2D or 3D views. You can choose from different types of coverings, such as tiles, bricks, shingles or panels. You can also adjust the dimensions, colors, patterns and offsets of each covering. You can also create your own library of covering styles that you can reuse in different projects.

          -

          To use Coverings:

          -
            -
          1. Select Coverings from the Cadimage menu or press Ctrl+Shift+V (Windows) or Command+Shift+V (Mac)
          2. -
          3. The Coverings palette will appear on the screen. Click on the New button to create a new covering
          4. -
          5. Select the type of covering you want from the drop-down menu (e.g., Tiles)
          6. -
          7. Click on the Plan button to place the covering on your floor plan. You can drag it to adjust its position and size
          8. -
          9. Click on the Settings button to open the Covering Settings dialog box. Here you can modify various parameters of your covering, such as dimensions, colors, patterns and offsets
          10. -
          11. Click OK when you are done. Your covering will be updated on your floor plan and in your 3D view
          12. -
          13. You can repeat these steps to create more coverings or edit existing ones

            Stairs

            -

            Stairs allows you to design, draw and edit stairs within your floor plan. You can choose from different types of stairs, such as straight, curved, spiral or custom. You can also adjust the dimensions, materials, colors, railings and landings of each stair. You can also create your own library of stair styles that you can reuse in different projects.

            -

            To use Stairs:

            -
              -
            1. Select Stairs from the Cadimage menu or press Ctrl+Shift+A (Windows) or Command+Shift+A (Mac)
            2. -
            3. The Stairs palette will appear on the screen. Click on the New button to create a new stair
            4. -
            5. Select the type of stair you want from the drop-down menu (e.g., Straight)
            6. -
            7. Click on the Plan button to draw the stair on your floor plan. You can use any drawing tool, such as Line, Arc or Spline
            8. -
            9. Click on the Settings button to open the Stair Settings dialog box. Here you can modify various parameters of your stair, such as dimensions, materials, colors, railings and landings
            10. -
            11. Click OK when you are done. Your stair will be created on your floor plan and in your 3D view
            12. -
            13. You can repeat these steps to create more stairs or edit existing ones

              0a6ba089eb
              -
              -
              \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/FileMaker Pro Advanced 12.0.1 Portable.md b/spaces/raedeXanto/academic-chatgpt-beta/FileMaker Pro Advanced 12.0.1 Portable.md deleted file mode 100644 index 40fbad1a7b2f587f19eeb0cfcdad8f955e082379..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/FileMaker Pro Advanced 12.0.1 Portable.md +++ /dev/null @@ -1,178 +0,0 @@ - -

              FileMaker Pro Advanced 12.0.1 Portable: A Powerful and Flexible Database Solution

              -

              If you are looking for a way to create, manage, and share databases across different platforms, you might want to check out FileMaker Pro Advanced 12.0.1 Portable. This is a version of the popular database software that can run from a USB drive or a cloud service without requiring installation or activation.

              -

              FileMaker Pro Advanced 12.0.1 Portable


              Download File ->->->-> https://tinourl.com/2uKZ4s



              -

              In this article, we will explain what FileMaker Pro Advanced 12.0.1 Portable is, how to download and install it, how to use it, and how to troubleshoot common issues with it.

              -

              What is FileMaker Pro Advanced 12.0.1 Portable?

              -

              FileMaker Pro Advanced 12.0.1 Portable is a version of FileMaker Pro Advanced, which is the world's leading easy-to-use workgroup database software for Windows and Mac OS X. It allows you to create and manage databases for projects, people, or other things in a simple and intuitive way.

              -

              The features of FileMaker Pro Advanced 12.0.1 Portable

              -

              FileMaker Pro Advanced 12.0.1 Portable has all the features of FileMaker Pro Advanced, plus some additional ones that make it more convenient and flexible for portable use.

              -

              Some of the features of FileMaker Pro Advanced 12.0.1 Portable are:

              -

              FileMaker Pro 12 for Windows and Mac OS
              -FileMaker Pro 12 updates and resources
              -FileMaker Pro 12 easy-to-use workgroup database software
              -FileMaker Pro 12 cross-platform relational database
              -FileMaker Pro 12 license key and installation guide
              -FileMaker Pro 12 PDF bookshelf and learning series
              -FileMaker Pro 12 system requirements and compatibility
              -FileMaker Pro 12 features and capabilities
              -FileMaker Pro 12 tutorials and examples
              -FileMaker Pro 12 best practices and tips
              -FileMaker Pro 12 troubleshooting and support
              -FileMaker Pro 12 custom functions and scripts
              -FileMaker Pro 12 web publishing and sharing
              -FileMaker Pro 12 security and encryption
              -FileMaker Pro 12 data import and export
              -FileMaker Pro 12 layout design and themes
              -FileMaker Pro 12 charts and reports
              -FileMaker Pro 12 container fields and multimedia
              -FileMaker Pro 12 calculations and formulas
              -FileMaker Pro 12 external data sources and ODBC
              -FileMaker Pro 12 performance optimization and testing
              -FileMaker Pro 12 plugins and extensions
              -FileMaker Pro 12 server and cloud hosting
              -FileMaker Pro 12 mobile access and synchronization
              -FileMaker Pro 12 integration with other applications
              -FileMaker Pro Advanced 12 vs FileMaker Pro 12 comparison
              -FileMaker Pro Advanced 12 additional tools and features
              -FileMaker Pro Advanced 12 development and debugging
              -FileMaker Pro Advanced 12 custom menus and keyboard shortcuts
              -FileMaker Pro Advanced 12 runtime solutions and kiosk mode
              -FileMaker Pro Advanced 12 database analysis and design report
              -FileMaker Pro Advanced 12 script debugger and data viewer
              -FileMaker Pro Advanced 12 custom functions editor and manager
              -FileMaker Pro Advanced 12 external function plug-in API
              -FileMaker Pro Advanced 12 multiple table import and export
              -How to upgrade from FileMaker Pro to FileMaker Pro Advanced 12
              -How to download FileMaker Pro Advanced 12 for free
              -How to install FileMaker Pro Advanced 12 on Windows or Mac
              -How to activate FileMaker Pro Advanced 12 with license key
              -How to update FileMaker Pro Advanced 12 to the latest version
              -How to uninstall or remove FileMaker Pro Advanced 12
              -How to use FileMaker Pro Advanced 12 for beginners
              -How to create a database with FileMaker Pro Advanced 12
              -How to manage data with FileMaker Pro Advanced 12
              -How to enhance user interface with FileMaker Pro Advanced 12
              -How to automate tasks with FileMaker Pro Advanced 12
              -How to deploy solutions with FileMaker Pro Advanced 12

              • Portability: You can run FileMaker Pro Advanced 12.0.1 Portable from a USB drive or a cloud service without installing or activating it on your computer.
              • Compatibility: You can use FileMaker Pro Advanced 12.0.1 Portable on any Windows or Mac OS X computer that meets the system requirements.
              • Security: You can encrypt your databases with AES-256 bit encryption and protect them with passwords and access privileges.
              • Customization: You can create custom menus, toolbars, scripts, functions, and reports for your databases.
              • Enhancement: You can use advanced tools such as the Script Debugger, the Data Viewer, the Database Design Report, and the Custom Functions to debug, monitor, document, and optimize your databases.
              • Integration: You can connect your databases to external data sources such as SQL, ODBC, XML, and web services.
              • Sharing: You can share your databases with up to nine other users over a network or the internet.

              The benefits of using FileMaker Pro Advanced 12.0.1 Portable

              -

              FileMaker Pro Advanced 12.0.1 Portable offers many benefits for users who need a powerful and flexible database solution that can be used on different computers without installation or activation.

              -

              Some of the benefits of using FileMaker Pro Advanced 12.0.1 Portable are:

              • Convenience: You can carry your databases with you wherever you go and use them on any compatible computer without hassle.
              • Economy: You can save money by not having to buy multiple licenses or subscriptions for different computers.
              • Versatility: You can use your databases for various purposes such as managing projects, organizing contacts, tracking inventory, invoicing customers, etc.
              • Creativity: You can design and customize your databases according to your needs and preferences.
              • Productivity: You can improve your workflow and efficiency by using advanced tools and features that help you create and manage your databases more easily and effectively.
              • Collaboration: You can share your databases with other users and work together on them in real time.

              How to download and install FileMaker Pro Advanced 12.0.1 Portable

              -

              If you are interested in trying out FileMaker Pro Advanced 12.0.1 Portable, you will need to download and install it on your USB drive or cloud service. Here are the steps to do so:

              -

              The system requirements for FileMaker Pro Advanced 12.0.1 Portable

              -

              Before you download and install FileMaker Pro Advanced 12.0.1 Portable, make sure your computer meets the following system requirements:

              | Operating system | RAM | Video | Storage |
              | --- | --- | --- | --- |
              | Windows XP (SP3), Windows Vista (SP2), or Windows 7 | 700 MB (Windows XP) / 1 GB (Windows Vista or Windows 7) | 1024 x 768 minimum resolution (DirectX 9 Graphics Device / WDDM 1.0 or higher required for Windows 7) | Unknown |
              | Mac OS X 10.6, 10.7, or 10.8 | 1 GB (2 GB recommended) | 1024 x 768 minimum resolution | Unknown |

              The steps to download and install FileMaker Pro Advanced 12.0.1 Portable

              -

              To download and install FileMaker Pro Advanced 12.0.1 Portable, follow these steps:

              1. Go to https://archive.org/details/fmp_Win_12.0.4.403, which is a free online archive of FileMaker Pro 12 for Windows and Mac OS X.
              2. Choose the version you need: Windows (2 .exe files) or Mac (1 .dmg file).
              3. Download the license key: 36NV9-81X46-9M8X5-XNKX2-9TV9T-K7773-V69XV. This key is for both Windows and Mac versions of FileMaker Pro and will not unlock server, server advanced, or FileMaker Pro Advanced.
              4. Download the FMP_12.0.1.183.exe file for Windows or the FMP_12.0.4.403.dmg file for Mac OS X.
              5. Download the FMP_12.0.4.403.exe update file for Windows or skip this step for Mac OS X.
              6. Copy the downloaded files to your USB drive or cloud service.
              7. Run the FMP_12.0.1.183.exe file for Windows or mount the FMP_12.0.4.403.dmg file for Mac OS X.
              8. Choose the license certificate (not Use Trial) and follow the onscreen instructions for installing the software.
              9. If you are using Windows, run the FMP_12.0.4.403.exe update file and follow the onscreen instructions.
              10. If you are using FileMaker Pro for the first time after converting from the trial license, in the FileMaker Pro License dialog box, click Enter License and then enter the downloaded license key.

              Congratulations! You have successfully downloaded and installed FileMaker Pro Advanced 12.0.1 Portable on your USB drive or cloud service.

              -

              How to use FileMaker Pro Advanced 12.0.1 Portable

              -

              Now that you have FileMaker Pro Advanced 12.0.1 Portable on your USB drive or cloud service, you can use it to create and manage databases on any compatible computer without installation or activation.

              -

              How to create and manage databases with FileMaker Pro Advanced 12.0.1 Portable

              -

              How to customize and enhance databases with FileMaker Pro Advanced 12.0.1 Portable

              -

              FileMaker Pro Advanced 12.0.1 Portable gives you more tools and options to customize and enhance your databases than FileMaker Pro. You can use these features to make your databases more user-friendly, efficient, and secure.

              -

              Some of the ways to customize and enhance databases with FileMaker Pro Advanced 12.0.1 Portable are:

              -
              • Create custom menus and toolbars: You can create your own menus and toolbars for your databases, or modify the existing ones. You can also assign keyboard shortcuts to menu commands and scripts.
              • Create custom functions: You can create your own functions that can be used in calculations and scripts. You can also use functions from other FileMaker Pro Advanced files or from third-party sources.
              • Create custom reports: You can create your own reports that display information from your databases in different formats and layouts. You can also use subsummary parts, summary fields, charts, and web viewers in your reports.
              • Use advanced scripting tools: You can use the Script Debugger and the Data Viewer to test and debug your scripts. You can also use the Database Design Report to document your database structure and dependencies.
              • Encrypt your databases: You can encrypt your databases with AES-256 bit encryption to protect them from unauthorized access. You can also set passwords and access privileges for different users and groups.
              • Connect to external data sources: You can connect your databases to external SQL data sources such as Oracle, MySQL, or Microsoft SQL Server. You can also import or export data from these sources using ODBC or JDBC (a short connectivity sketch follows after this list).

              For more information on how to use these features, see the FileMaker Pro Advanced Development Guide.
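              To make the ODBC point above more concrete, here is a minimal, illustrative Python sketch for checking that an external SQL source is reachable through an ODBC DSN. It is not part of FileMaker itself: the DSN name, credentials, and table used below are hypothetical placeholders, and it assumes the database vendor's ODBC driver is installed and the `pyodbc` package is available.

```python
# Illustrative only: confirm that an external SQL data source is reachable over ODBC.
# Assumes a system DSN named "ExternalSQL" has already been configured with the
# database vendor's ODBC driver; the DSN, credentials, and table are placeholders.
import pyodbc

def list_customers(dsn="ExternalSQL", user="fm_user", password="secret"):
    # Connecting by DSN name mirrors how FileMaker refers to an ODBC source.
    conn = pyodbc.connect(f"DSN={dsn};UID={user};PWD={password}")
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT id, name FROM customers")  # placeholder table
        for row in cursor.fetchall():
            print(row.id, row.name)
    finally:
        conn.close()

if __name__ == "__main__":
    list_customers()
```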

              -

              How to troubleshoot common issues with FileMaker Pro Advanced 12.0.1 Portable

              -

              FileMaker Pro Advanced 12.0.1 Portable is designed to run smoothly and reliably on your USB drive or cloud service. However, you may encounter some issues or errors while using it. Here are some tips on how to troubleshoot common issues with FileMaker Pro Advanced 12.0.1 Portable:

              -

              How to update FileMaker Pro Advanced 12.0.1 Portable

              -

              To ensure that you have the latest version of FileMaker Pro Advanced 12.0.1 Portable, you should check for updates regularly. You can do this by selecting Help > Check for Updates in the software.

              -

              If there is a newer version available, you will be prompted to download and install it. Follow the onscreen instructions to complete the update process.

              -

              Note: Updating FileMaker Pro Advanced 12.0.1 Portable may require you to re-enter the license key: 36NV9-81X46-9M8X5-XNKX2-9TV9T-K7773-V69XV.

              -

                How to fix errors and crashes with FileMaker Pro Advanced 12.0.1 Portable

                -

                If you encounter an error message or a crash while using FileMaker Pro Advanced 12.0.1 Portable, you should try the following steps:

                -
                1. Restart the software: Sometimes, a simple restart can fix minor issues or glitches.
                2. Check the system requirements: Make sure that your computer meets the system requirements for FileMaker Pro Advanced 12.0.1 Portable.
                3. Check the USB drive or cloud service: Make sure that your USB drive or cloud service has enough free space and is not corrupted or damaged (a small free-space check is sketched after this list).
                4. Check the database file: Make sure that your database file is not corrupted or damaged. You can use the Recover command in FileMaker Pro Advanced 12.0.1 Portable to try to salvage as much information as possible and create a new, recovered file. You can also use the Verify Consistency command to check the integrity of your file.
                5. Check the ODBC driver: If you are connecting to an external SQL data source, make sure that you have installed and configured the appropriate ODBC driver for your data source.
                6. Contact FileMaker support: If none of the above steps resolve your issue, you can contact FileMaker support for further assistance. You can find the contact information and support resources on https://www.filemaker.com/support/.
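                As a purely illustrative aid for step 3, the short Python sketch below reports how much free space a drive has before you copy the portable installation onto it. The drive path and the 1 GB threshold are assumptions; adjust them to wherever your USB drive or cloud-synced folder is mounted.

```python
# Illustrative only: report free space on the drive that will hold the portable install.
# The default path is a placeholder; on Windows it might be "E:\\", on Mac OS X
# something like "/Volumes/USBDRIVE".
import shutil

def report_free_space(drive_path="E:\\", required_gb=1.0):
    usage = shutil.disk_usage(drive_path)        # named tuple: total, used, free (bytes)
    free_gb = usage.free / (1024 ** 3)
    print(f"{drive_path}: {free_gb:.2f} GB free")
    if free_gb < required_gb:
        print(f"Warning: less than the roughly {required_gb:.1f} GB you may need.")

if __name__ == "__main__":
    report_free_space()
```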

                Conclusion

                -

                FileMaker Pro Advanced 12.0.1 Portable is a powerful and flexible database solution that can run from a USB drive or a cloud service without installation or activation. It allows you to create and manage databases for various purposes and share them with other users across different platforms. It also offers more tools and options to customize and enhance your databases than FileMaker Pro.

                -

                If you want to try out FileMaker Pro Advanced 12.0.1 Portable, you can download and install it on your USB drive or cloud service by following the steps in this article. You can also use this article as a guide on how to use and troubleshoot FileMaker Pro Advanced 12.0.1 Portable.

                -

                We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to contact us.

                -

                FAQs

                -

                Here are some frequently asked questions about FileMaker Pro Advanced 12.0.1 Portable:

                -
                1. What is the difference between FileMaker Pro and FileMaker Pro Advanced?

                  FileMaker Pro is the standard version of the database software that allows you to create and manage databases for personal or professional use. FileMaker Pro Advanced is the premium version that offers more features and tools for customizing and enhancing your databases, such as custom menus, custom functions, advanced scripting tools, encryption, and external SQL data sources.

                2. What is the difference between FileMaker Pro Advanced 12.0.1 Portable and FileMaker Pro Advanced 12.0.1?

                  FileMaker Pro Advanced 12.0.1 Portable is a version of FileMaker Pro Advanced 12.0.1 that can run from a USB drive or a cloud service without installation or activation on your computer. FileMaker Pro Advanced 12.0.1 is a version of FileMaker Pro Advanced that requires installation and activation on your computer.

                3. Can I use FileMaker Pro Advanced 12.0.1 Portable on any computer?

                  You can use FileMaker Pro Advanced 12.0.1 Portable on any Windows or Mac OS X computer that meets the system requirements. However, you should not use FileMaker Pro Advanced 12.0.1 Portable on a public or shared computer, as it may pose a security risk for your databases.

                4. Can I use FileMaker Pro Advanced 12.0.1 Portable with FileMaker Server or FileMaker Cloud?

                  You can use FileMaker Pro Advanced 12.0.1 Portable to access databases hosted by FileMaker Server or FileMaker Cloud, as long as the host supports the same version of FileMaker Pro Advanced. You can also use FileMaker Pro Advanced 12.0.1 Portable to upload databases to FileMaker Server or FileMaker Cloud, but you cannot use it to administer or monitor the host.

                5. Can I use FileMaker Pro Advanced 12.0.1 Portable to create runtime solutions?

                  You cannot use FileMaker Pro Advanced 12.0.1 Portable to create runtime solutions, which are standalone applications that do not require FileMaker Pro or FileMaker Pro Advanced to run. You need to use FileMaker Pro Advanced 12.0.1 to create runtime solutions for Windows or Mac OS X platforms.

                6. Can I update FileMaker Pro Advanced 12.0.1 Portable to a newer version?

                  You can update FileMaker Pro Advanced 12.0.1 Portable to a newer version by downloading and installing the update file on your USB drive or cloud service. However, you should always back up your databases before updating, as some updates may not be compatible with older versions of FileMaker Pro Advanced.

                7. Where can I get more help and support for FileMaker Pro Advanced 12.0.1 Portable?

                  You can get more help and support for FileMaker Pro Advanced 12.0.1 Portable by visiting the official website of FileMaker at https://www.filemaker.com/. There you can find manuals, tutorials, forums, blogs, videos, and other resources that can help you learn and use FileMaker Pro Advanced 12.0.1 Portable.

                  0a6ba089eb
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/ramkamal2000/voice-conversion-ddp/speaker_encoder/train.py b/spaces/ramkamal2000/voice-conversion-ddp/speaker_encoder/train.py deleted file mode 100644 index 282e4f51b3825c7f32e628506eb40a98e58e2deb..0000000000000000000000000000000000000000 --- a/spaces/ramkamal2000/voice-conversion-ddp/speaker_encoder/train.py +++ /dev/null @@ -1,125 +0,0 @@ -from speaker_encoder.visualizations import Visualizations -from speaker_encoder.data_objects import SpeakerVerificationDataLoader, SpeakerVerificationDataset -from speaker_encoder.params_model import * -from speaker_encoder.model import SpeakerEncoder -from utils.profiler import Profiler -from pathlib import Path -import torch - -def sync(device: torch.device): - # FIXME - return - # For correct profiling (cuda operations are async) - if device.type == "cuda": - torch.cuda.synchronize(device) - -def train(run_id: str, clean_data_root: Path, models_dir: Path, umap_every: int, save_every: int, - backup_every: int, vis_every: int, force_restart: bool, visdom_server: str, - no_visdom: bool): - # Create a dataset and a dataloader - dataset = SpeakerVerificationDataset(clean_data_root) - loader = SpeakerVerificationDataLoader( - dataset, - speakers_per_batch, # 64 - utterances_per_speaker, # 10 - num_workers=8, - ) - - # Setup the device on which to run the forward pass and the loss. These can be different, - # because the forward pass is faster on the GPU whereas the loss is often (depending on your - # hyperparameters) faster on the CPU. - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - # FIXME: currently, the gradient is None if loss_device is cuda - loss_device = torch.device("cpu") - - # Create the model and the optimizer - model = SpeakerEncoder(device, loss_device) - optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate_init) - init_step = 1 - - # Configure file path for the model - state_fpath = models_dir.joinpath(run_id + ".pt") - backup_dir = models_dir.joinpath(run_id + "_backups") - - # Load any existing model - if not force_restart: - if state_fpath.exists(): - print("Found existing model \"%s\", loading it and resuming training." % run_id) - checkpoint = torch.load(state_fpath) - init_step = checkpoint["step"] - model.load_state_dict(checkpoint["model_state"]) - optimizer.load_state_dict(checkpoint["optimizer_state"]) - optimizer.param_groups[0]["lr"] = learning_rate_init - else: - print("No model \"%s\" found, starting training from scratch." 
% run_id) - else: - print("Starting the training from scratch.") - model.train() - - # Initialize the visualization environment - vis = Visualizations(run_id, vis_every, server=visdom_server, disabled=no_visdom) - vis.log_dataset(dataset) - vis.log_params() - device_name = str(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU") - vis.log_implementation({"Device": device_name}) - - # Training loop - profiler = Profiler(summarize_every=10, disabled=False) - for step, speaker_batch in enumerate(loader, init_step): - profiler.tick("Blocking, waiting for batch (threaded)") - - # Forward pass - inputs = torch.from_numpy(speaker_batch.data).to(device) - sync(device) - profiler.tick("Data to %s" % device) - embeds = model(inputs) - sync(device) - profiler.tick("Forward pass") - embeds_loss = embeds.view((speakers_per_batch, utterances_per_speaker, -1)).to(loss_device) - loss, eer = model.loss(embeds_loss) - sync(loss_device) - profiler.tick("Loss") - - # Backward pass - model.zero_grad() - loss.backward() - profiler.tick("Backward pass") - model.do_gradient_ops() - optimizer.step() - profiler.tick("Parameter update") - - # Update visualizations - # learning_rate = optimizer.param_groups[0]["lr"] - vis.update(loss.item(), eer, step) - - # Draw projections and save them to the backup folder - if umap_every != 0 and step % umap_every == 0: - print("Drawing and saving projections (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - projection_fpath = backup_dir.joinpath("%s_umap_%06d.png" % (run_id, step)) - embeds = embeds.detach().cpu().numpy() - vis.draw_projections(embeds, utterances_per_speaker, step, projection_fpath) - vis.save() - - # Overwrite the latest version of the model - if save_every != 0 and step % save_every == 0: - print("Saving the model (step %d)" % step) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, state_fpath) - - # Make a backup - if backup_every != 0 and step % backup_every == 0: - print("Making a backup (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - backup_fpath = backup_dir.joinpath("%s_bak_%06d.pt" % (run_id, step)) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, backup_fpath) - - profiler.tick("Extras (visualizations, saving)") - \ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/asynckit/lib/readable_parallel.js b/spaces/rayan-saleh/whisper2notion/server/node_modules/asynckit/lib/readable_parallel.js deleted file mode 100644 index 5d2929f7a67750caca25e51ed80ced22c3f43e64..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/asynckit/lib/readable_parallel.js +++ /dev/null @@ -1,25 +0,0 @@ -var parallel = require('../parallel.js'); - -// API -module.exports = ReadableParallel; - -/** - * Streaming wrapper to `asynckit.parallel` - * - * @param {array|object} list - array or object (named list) to iterate over - * @param {function} iterator - iterator to run - * @param {function} callback - invoked when all elements processed - * @returns {stream.Readable#} - */ -function ReadableParallel(list, iterator, callback) -{ - if (!(this instanceof ReadableParallel)) - { - return new ReadableParallel(list, iterator, callback); - } - - // turn on object mode - ReadableParallel.super_.call(this, {objectMode: true}); - - this._start(parallel, list, iterator, callback); -} diff --git 
a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/3Design Cad 7 Crack.rar FREE.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/3Design Cad 7 Crack.rar FREE.md deleted file mode 100644 index 698abe8f0a58aae335c3553917a29234587414ff..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/3Design Cad 7 Crack.rar FREE.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  3Design Cad 7 crack.rar


                  DOWNLOADhttps://urlgoal.com/2uCMfp



                  - -Holt Physics Workbook Study Guide Solutions Manual Rar file, ZIP file ... 3Design Cad 7 crack.18 ... license key DAEMONToolsPro520 0348.rar 4d29de3e1b
                  -
                  -
                  -

                  diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Anatomy Of Upper Limb And Thorax By Vishram Singh Pdf 13 [TOP].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Anatomy Of Upper Limb And Thorax By Vishram Singh Pdf 13 [TOP].md deleted file mode 100644 index 6560480a880139fa1d3a16b2e615fff7ef575bec..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Anatomy Of Upper Limb And Thorax By Vishram Singh Pdf 13 [TOP].md +++ /dev/null @@ -1,6 +0,0 @@ -

                  anatomy of upper limb and thorax by vishram singh pdf 13


                  Download Ziphttps://urlgoal.com/2uCMyp



                  -
                  -. Upper Limb and Thoracic Anatomy, Volume 1, 3rd Edition - 162668, Vishram Singh. English | Binding: Maps | ISBN-10: 0323352227 | ISBN-13: 9780323352222. | Year of publication: 2006. | Publisher: AST. 8a78ff9644
                  -
                  -
                  -

                  diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FIGHTNIGHTCHAMPIONPCGAMEREGISTRATIONCODE1739.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FIGHTNIGHTCHAMPIONPCGAMEREGISTRATIONCODE1739.md deleted file mode 100644 index 9333f57e23aeed453f68488c69040f4d87d58624..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FIGHTNIGHTCHAMPIONPCGAMEREGISTRATIONCODE1739.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  FIGHTNIGHTCHAMPIONPCGAMEREGISTRATIONCODE1739


                  Download Filehttps://urlgoal.com/2uCKZC



                  - - d5da3c52bf
                  -
                  -
                  -

                  diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Film Directing Shot By Shot Pdf Free Download 5.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Film Directing Shot By Shot Pdf Free Download 5.md deleted file mode 100644 index 00b4204db535fb58874e742f41d1d6848b42aae9..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Film Directing Shot By Shot Pdf Free Download 5.md +++ /dev/null @@ -1,106 +0,0 @@ - -

                  Film Directing Shot by Shot Pdf Free Download 5: How to Get the Best Book on Visual Storytelling

                  - -

                  Film Directing Shot by Shot is a classic book on visual storytelling for filmmakers and videomakers. Written by Steven D. Katz, the book offers a complete catalogue of visual techniques and their stylistic implications, along with examples and exercises to help you master the craft of shot design.

                  - -

                  The book was first published in 1991 and has sold over 250,000 copies, making it one of the bestselling books on film directing of all time. The book has been updated for a special 25th anniversary edition in 2019, with over 800 photos and illustrations, including storyboards from movies such as Citizen Kane, Blade Runner, Deadpool, and Moonrise Kingdom.

                  -

                  Film Directing Shot By Shot Pdf Free Download 5


                  Download File ····· https://urlgoal.com/2uCKhH



                  - -

                  If you are looking for a Film Directing Shot by Shot Pdf Free Download 5, you might be wondering how to get the best book on visual storytelling without spending a dime. In this article, we will guide you through the steps to find and download a Film Directing Shot by Shot Pdf Free Download 5 safely and legally.

                  - -

                  Step 1: Find a Reliable Pdf Site

                  - -

                  The first step to find and download a Film Directing Shot by Shot Pdf Free Download 5 is to find a reliable pdf site that hosts the file. There are many pdf sites on the internet, but not all of them are trustworthy or safe. Some pdf sites may contain malware, viruses, or fake files that can harm your device or compromise your privacy.

                  - -

                  To avoid these risks, you should look for a reputable pdf site that has a large user base, positive reviews, and verified files. Some examples of such sites are PDF Drive, Scribd, Z-Library, and Library Genesis. These sites have been around for a long time and have a good reputation among pdf users.

                  - -

                  However, you should also be aware that some pdf sites may be blocked or restricted in your country due to legal issues or copyright infringement. In that case, you may need to use a VPN (virtual private network) service to access them. A VPN is a software that encrypts your internet traffic and routes it through a server in another location, allowing you to bypass geo-restrictions and censorship.

                  - -

                  Step 2: Search for Film Directing Shot by Shot Pdf Free Download 5

                  - -

                  The next step to find and download a Film Directing Shot by Shot Pdf Free Download 5 is to search for it on the pdf site of your choice. You can use the search bar or browse through the categories to find the book you want. You should also pay attention to some details before downloading the pdf file, such as:

                  - -
                  • The file size: The smaller the file size, the faster it will download.
                  • The file format: The file format should be compatible with your device and pdf reader.
                  • The file quality: The file quality should be clear and readable.
                  • The file source: The file source should be from the original publisher or author.

                  Once you find a Film Directing Shot by Shot Pdf Free Download 5 that meets your criteria, you can click on it and download the pdf file or view it online. You may need to create an account or sign up for a free trial on some pdf sites before you can download or view the file.

                  - -

                  Step 3: Open the Pdf File

                  - -

                  The final step to find and download a Film Directing Shot by Shot Pdf Free Download 5 is to open the pdf file with your pdf reader. You can do this by double-clicking on the file or opening it from your pdf reader. Your pdf reader will then display the book on your screen.

                  -

                  - -

                  You can read the book online or offline, depending on your preference and internet connection. You can also zoom in or out, bookmark pages, highlight text, or take notes on your pdf reader. You should also make sure that your pdf reader is updated to the latest version and configured properly for optimal performance and security.

                  - -

                  Step 4: Enjoy Film Directing Shot by Shot Pdf Free Download 5

                  - -

                  Congratulations! You have successfully found and downloaded a Film Directing Shot by Shot Pdf Free Download 5. You can now enjoy one of the best books on visual storytelling for filmmakers and videomakers. You can also share your thoughts and opinions about the book with other readers and learners on social media or online forums.

                  - -

                  However, you should also be careful about some legal and ethical issues when finding and downloading a Film Directing Shot by Shot Pdf Free Download 5. Finding and downloading copyrighted content without permission is illegal in many countries and can result in fines or lawsuits. You should also respect the rights of the author and publisher of the book by not distributing it without their consent or using it for commercial purposes.

                  - -

                  If you want to find and download a Film Directing Shot by Shot Pdf Free Download 5 legally and safely, you can look for other options such as official websites or online platforms that offer the book for free or at a low cost. This way, you can support the author and publisher of the book without breaking any laws or harming anyone's interests.

                  - -

                  Conclusion

                  - -

                  Film Directing Shot by Shot Pdf Free Download 5 is a great way to learn visual storytelling for filmmakers and videomakers. However, finding and downloading a Film Directing Shot by Shot Pdf Free Download 5 also involves some risks and challenges that you need to be aware of.

                  - -

                  In this article, we have guided you through the steps to find and download a Film Directing Shot by Shot Pdf Free Download 5 safely and legally. We hope this article has been helpful for you and that you have enjoyed reading Film Directing Shot by Shot Pdf Free Download 5.

                  -


                  3cee63e6c2
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Garmin Streetpilot Apk Android UPDATED.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Garmin Streetpilot Apk Android UPDATED.md deleted file mode 100644 index 1759d67c3b81272c94f8621984c66c49c31e0304..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Garmin Streetpilot Apk Android UPDATED.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  garmin streetpilot apk android


                  Downloadhttps://urlgoal.com/2uCLk8



                  -
                  -Download APK (5.4 MB) Versions Using APKPure App to upgrade System ... free download Android System WebView Android app, install Android apk app for ... to ask for an update after a job interview · Garmin streetpilot 2720 map update ... 1fdad05405
                  -
                  -
                  -

                  diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/segmentation.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/segmentation.py deleted file mode 100644 index 5bab139dd7937f08bd06036b46f1b912dbf03a13..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/segmentation.py +++ /dev/null @@ -1,375 +0,0 @@ -# ------------------------------------------------------------------------ -# DINO -# Copyright (c) 2022 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -""" -This file provides the definition of the convolutional heads used to predict masks, as well as the losses -""" -import io -from collections import defaultdict -from typing import List, Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import Tensor -from PIL import Image - -from .util import box_ops -from .util.misc import NestedTensor, interpolate, nested_tensor_from_tensor_list - -try: - from panopticapi.utils import id2rgb, rgb2id -except ImportError: - pass - - -class DETRsegm(nn.Module): - def __init__(self, detr, freeze_detr=False): - super().__init__() - self.detr = detr - - if freeze_detr: - for p in self.parameters(): - p.requires_grad_(False) - - hidden_dim, nheads = detr.transformer.d_model, detr.transformer.nhead - self.bbox_attention = MHAttentionMap(hidden_dim, hidden_dim, nheads, dropout=0.0) - self.mask_head = MaskHeadSmallConv(hidden_dim + nheads, [1024, 512, 256], hidden_dim) - - def forward(self, samples: NestedTensor): - if isinstance(samples, (list, torch.Tensor)): - samples = nested_tensor_from_tensor_list(samples) - features, pos = self.detr.backbone(samples) - - bs = features[-1].tensors.shape[0] - - src, mask = features[-1].decompose() - assert mask is not None - src_proj = self.detr.input_proj(src) - hs, memory = self.detr.transformer(src_proj, mask, self.detr.query_embed.weight, pos[-1]) - - outputs_class = self.detr.class_embed(hs) - outputs_coord = self.detr.bbox_embed(hs).sigmoid() - out = {"pred_logits": outputs_class[-1], "pred_boxes": outputs_coord[-1]} - if self.detr.aux_loss: - out['aux_outputs'] = self.detr._set_aux_loss(outputs_class, outputs_coord) - - # FIXME h_boxes takes the last one computed, keep this in mind - bbox_mask = self.bbox_attention(hs[-1], memory, mask=mask) - - seg_masks = self.mask_head(src_proj, bbox_mask, [features[2].tensors, features[1].tensors, features[0].tensors]) - outputs_seg_masks = seg_masks.view(bs, self.detr.num_queries, seg_masks.shape[-2], seg_masks.shape[-1]) - - out["pred_masks"] = outputs_seg_masks - return out - - -def _expand(tensor, length: int): - return tensor.unsqueeze(1).repeat(1, int(length), 1, 1, 1).flatten(0, 1) - - -class MaskHeadSmallConv(nn.Module): - """ - Simple convolutional 
head, using group norm. - Upsampling is done using a FPN approach - """ - - def __init__(self, dim, fpn_dims, context_dim): - super().__init__() - - inter_dims = [dim, context_dim // 2, context_dim // 4, context_dim // 8, context_dim // 16, context_dim // 64] - self.lay1 = torch.nn.Conv2d(dim, dim, 3, padding=1) - self.gn1 = torch.nn.GroupNorm(8, dim) - self.lay2 = torch.nn.Conv2d(dim, inter_dims[1], 3, padding=1) - self.gn2 = torch.nn.GroupNorm(8, inter_dims[1]) - self.lay3 = torch.nn.Conv2d(inter_dims[1], inter_dims[2], 3, padding=1) - self.gn3 = torch.nn.GroupNorm(8, inter_dims[2]) - self.lay4 = torch.nn.Conv2d(inter_dims[2], inter_dims[3], 3, padding=1) - self.gn4 = torch.nn.GroupNorm(8, inter_dims[3]) - self.lay5 = torch.nn.Conv2d(inter_dims[3], inter_dims[4], 3, padding=1) - self.gn5 = torch.nn.GroupNorm(8, inter_dims[4]) - self.out_lay = torch.nn.Conv2d(inter_dims[4], 1, 3, padding=1) - - self.dim = dim - - self.adapter1 = torch.nn.Conv2d(fpn_dims[0], inter_dims[1], 1) - self.adapter2 = torch.nn.Conv2d(fpn_dims[1], inter_dims[2], 1) - self.adapter3 = torch.nn.Conv2d(fpn_dims[2], inter_dims[3], 1) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_uniform_(m.weight, a=1) - nn.init.constant_(m.bias, 0) - - def forward(self, x: Tensor, bbox_mask: Tensor, fpns: List[Tensor]): - x = torch.cat([_expand(x, bbox_mask.shape[1]), bbox_mask.flatten(0, 1)], 1) - - x = self.lay1(x) - x = self.gn1(x) - x = F.relu(x) - x = self.lay2(x) - x = self.gn2(x) - x = F.relu(x) - - cur_fpn = self.adapter1(fpns[0]) - if cur_fpn.size(0) != x.size(0): - cur_fpn = _expand(cur_fpn, x.size(0) // cur_fpn.size(0)) - x = cur_fpn + F.interpolate(x, size=cur_fpn.shape[-2:], mode="nearest") - x = self.lay3(x) - x = self.gn3(x) - x = F.relu(x) - - cur_fpn = self.adapter2(fpns[1]) - if cur_fpn.size(0) != x.size(0): - cur_fpn = _expand(cur_fpn, x.size(0) // cur_fpn.size(0)) - x = cur_fpn + F.interpolate(x, size=cur_fpn.shape[-2:], mode="nearest") - x = self.lay4(x) - x = self.gn4(x) - x = F.relu(x) - - cur_fpn = self.adapter3(fpns[2]) - if cur_fpn.size(0) != x.size(0): - cur_fpn = _expand(cur_fpn, x.size(0) // cur_fpn.size(0)) - x = cur_fpn + F.interpolate(x, size=cur_fpn.shape[-2:], mode="nearest") - x = self.lay5(x) - x = self.gn5(x) - x = F.relu(x) - - x = self.out_lay(x) - return x - - -class MHAttentionMap(nn.Module): - """This is a 2D attention module, which only returns the attention softmax (no multiplication by value)""" - - def __init__(self, query_dim, hidden_dim, num_heads, dropout=0.0, bias=True): - super().__init__() - self.num_heads = num_heads - self.hidden_dim = hidden_dim - self.dropout = nn.Dropout(dropout) - - self.q_linear = nn.Linear(query_dim, hidden_dim, bias=bias) - self.k_linear = nn.Linear(query_dim, hidden_dim, bias=bias) - - nn.init.zeros_(self.k_linear.bias) - nn.init.zeros_(self.q_linear.bias) - nn.init.xavier_uniform_(self.k_linear.weight) - nn.init.xavier_uniform_(self.q_linear.weight) - self.normalize_fact = float(hidden_dim / self.num_heads) ** -0.5 - - def forward(self, q, k, mask: Optional[Tensor] = None): - q = self.q_linear(q) - k = F.conv2d(k, self.k_linear.weight.unsqueeze(-1).unsqueeze(-1), self.k_linear.bias) - qh = q.view(q.shape[0], q.shape[1], self.num_heads, self.hidden_dim // self.num_heads) - kh = k.view(k.shape[0], self.num_heads, self.hidden_dim // self.num_heads, k.shape[-2], k.shape[-1]) - weights = torch.einsum("bqnc,bnchw->bqnhw", qh * self.normalize_fact, kh) - - if mask is not None: - weights.masked_fill_(mask.unsqueeze(1).unsqueeze(1), 
float("-inf")) - weights = F.softmax(weights.flatten(2), dim=-1).view(weights.size()) - weights = self.dropout(weights) - return weights - - -def dice_loss(inputs, targets, num_boxes): - """ - Compute the DICE loss, similar to generalized IOU for masks - Args: - inputs: A float tensor of arbitrary shape. - The predictions for each example. - targets: A float tensor with the same shape as inputs. Stores the binary - classification label for each element in inputs - (0 for the negative class and 1 for the positive class). - """ - inputs = inputs.sigmoid() - inputs = inputs.flatten(1) - numerator = 2 * (inputs * targets).sum(1) - denominator = inputs.sum(-1) + targets.sum(-1) - loss = 1 - (numerator + 1) / (denominator + 1) - return loss.sum() / num_boxes - - -def sigmoid_focal_loss(inputs, targets, num_boxes, alpha: float = 0.25, gamma: float = 2): - """ - Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002. - Args: - inputs: A float tensor of arbitrary shape. - The predictions for each example. - targets: A float tensor with the same shape as inputs. Stores the binary - classification label for each element in inputs - (0 for the negative class and 1 for the positive class). - alpha: (optional) Weighting factor in range (0,1) to balance - positive vs negative examples. Default = -1 (no weighting). - gamma: Exponent of the modulating factor (1 - p_t) to - balance easy vs hard examples. - Returns: - Loss tensor - """ - prob = inputs.sigmoid() - ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none") - p_t = prob * targets + (1 - prob) * (1 - targets) - loss = ce_loss * ((1 - p_t) ** gamma) - - if alpha >= 0: - alpha_t = alpha * targets + (1 - alpha) * (1 - targets) - loss = alpha_t * loss - - return loss.mean(1).sum() / num_boxes - - -class PostProcessSegm(nn.Module): - def __init__(self, threshold=0.5): - super().__init__() - self.threshold = threshold - - @torch.no_grad() - def forward(self, results, outputs, orig_target_sizes, max_target_sizes): - assert len(orig_target_sizes) == len(max_target_sizes) - max_h, max_w = max_target_sizes.max(0)[0].tolist() - outputs_masks = outputs["pred_masks"].squeeze(2) - outputs_masks = F.interpolate(outputs_masks, size=(max_h, max_w), mode="bilinear", align_corners=False) - outputs_masks = (outputs_masks.sigmoid() > self.threshold).cpu() - - for i, (cur_mask, t, tt) in enumerate(zip(outputs_masks, max_target_sizes, orig_target_sizes)): - img_h, img_w = t[0], t[1] - results[i]["masks"] = cur_mask[:, :img_h, :img_w].unsqueeze(1) - results[i]["masks"] = F.interpolate( - results[i]["masks"].float(), size=tuple(tt.tolist()), mode="nearest" - ).byte() - - return results - - -class PostProcessPanoptic(nn.Module): - """This class converts the output of the model to the final panoptic result, in the format expected by the - coco panoptic API """ - - def __init__(self, is_thing_map, threshold=0.85): - """ - Parameters: - is_thing_map: This is a whose keys are the class ids, and the values a boolean indicating whether - the class is a thing (True) or a stuff (False) class - threshold: confidence threshold: segments with confidence lower than this will be deleted - """ - super().__init__() - self.threshold = threshold - self.is_thing_map = is_thing_map - - def forward(self, outputs, processed_sizes, target_sizes=None): - """ This function computes the panoptic prediction from the model's predictions. - Parameters: - outputs: This is a dict coming directly from the model. See the model doc for the content. 
- processed_sizes: This is a list of tuples (or torch tensors) of sizes of the images that were passed to the - model, ie the size after data augmentation but before batching. - target_sizes: This is a list of tuples (or torch tensors) corresponding to the requested final size - of each prediction. If left to None, it will default to the processed_sizes - """ - if target_sizes is None: - target_sizes = processed_sizes - assert len(processed_sizes) == len(target_sizes) - out_logits, raw_masks, raw_boxes = outputs["pred_logits"], outputs["pred_masks"], outputs["pred_boxes"] - assert len(out_logits) == len(raw_masks) == len(target_sizes) - preds = [] - - def to_tuple(tup): - if isinstance(tup, tuple): - return tup - return tuple(tup.cpu().tolist()) - - for cur_logits, cur_masks, cur_boxes, size, target_size in zip( - out_logits, raw_masks, raw_boxes, processed_sizes, target_sizes - ): - # we filter empty queries and detection below threshold - scores, labels = cur_logits.softmax(-1).max(-1) - keep = labels.ne(outputs["pred_logits"].shape[-1] - 1) & (scores > self.threshold) - cur_scores, cur_classes = cur_logits.softmax(-1).max(-1) - cur_scores = cur_scores[keep] - cur_classes = cur_classes[keep] - cur_masks = cur_masks[keep] - cur_masks = interpolate(cur_masks[:, None], to_tuple(size), mode="bilinear").squeeze(1) - cur_boxes = box_ops.box_cxcywh_to_xyxy(cur_boxes[keep]) - - h, w = cur_masks.shape[-2:] - assert len(cur_boxes) == len(cur_classes) - - # It may be that we have several predicted masks for the same stuff class. - # In the following, we track the list of masks ids for each stuff class (they are merged later on) - cur_masks = cur_masks.flatten(1) - stuff_equiv_classes = defaultdict(lambda: []) - for k, label in enumerate(cur_classes): - if not self.is_thing_map[label.item()]: - stuff_equiv_classes[label.item()].append(k) - - def get_ids_area(masks, scores, dedup=False): - # This helper function creates the final panoptic segmentation image - # It also returns the area of the masks that appears on the image - - m_id = masks.transpose(0, 1).softmax(-1) - - if m_id.shape[-1] == 0: - # We didn't detect any mask :( - m_id = torch.zeros((h, w), dtype=torch.long, device=m_id.device) - else: - m_id = m_id.argmax(-1).view(h, w) - - if dedup: - # Merge the masks corresponding to the same stuff class - for equiv in stuff_equiv_classes.values(): - if len(equiv) > 1: - for eq_id in equiv: - m_id.masked_fill_(m_id.eq(eq_id), equiv[0]) - - final_h, final_w = to_tuple(target_size) - - seg_img = Image.fromarray(id2rgb(m_id.view(h, w).cpu().numpy())) - seg_img = seg_img.resize(size=(final_w, final_h), resample=Image.NEAREST) - - np_seg_img = ( - torch.ByteTensor(torch.ByteStorage.from_buffer(seg_img.tobytes())).view(final_h, final_w, 3).numpy() - ) - m_id = torch.from_numpy(rgb2id(np_seg_img)) - - area = [] - for i in range(len(scores)): - area.append(m_id.eq(i).sum().item()) - return area, seg_img - - area, seg_img = get_ids_area(cur_masks, cur_scores, dedup=True) - if cur_classes.numel() > 0: - # We know filter empty masks as long as we find some - while True: - filtered_small = torch.as_tensor( - [area[i] <= 4 for i, c in enumerate(cur_classes)], dtype=torch.bool, device=keep.device - ) - if filtered_small.any().item(): - cur_scores = cur_scores[~filtered_small] - cur_classes = cur_classes[~filtered_small] - cur_masks = cur_masks[~filtered_small] - area, seg_img = get_ids_area(cur_masks, cur_scores) - else: - break - - else: - cur_classes = torch.ones(1, dtype=torch.long, 
device=cur_classes.device) - - segments_info = [] - for i, a in enumerate(area): - cat = cur_classes[i].item() - segments_info.append({"id": i, "isthing": self.is_thing_map[cat], "category_id": cat, "area": a}) - del cur_classes - - with io.BytesIO() as out: - seg_img.save(out, format="PNG") - predictions = {"png_string": out.getvalue(), "segments_info": segments_info} - preds.append(predictions) - return preds diff --git a/spaces/rorallitri/biomedical-language-models/logs/Ghoomketu Movie 1080p Download Utorrent.md b/spaces/rorallitri/biomedical-language-models/logs/Ghoomketu Movie 1080p Download Utorrent.md deleted file mode 100644 index ff3156f5d95ae825e0827a8a295dd82a02f0bd71..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Ghoomketu Movie 1080p Download Utorrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  Ghoomketu movie 1080p download utorrent


                  DOWNLOAD ✯✯✯ https://tinurll.com/2uzno3



                  -
                  - aaccfb2cb3
                  -
                  -
                  -

                  diff --git a/spaces/rorallitri/biomedical-language-models/logs/King Kong 1080p Dual Audio Torrent.md b/spaces/rorallitri/biomedical-language-models/logs/King Kong 1080p Dual Audio Torrent.md deleted file mode 100644 index f7b294f4733ae9a3b2851fbecbd3f54ce508edc0..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/King Kong 1080p Dual Audio Torrent.md +++ /dev/null @@ -1,12 +0,0 @@ -
                  -

                  King Kong: The Ultimate Edition - A Review of the 4K Ultra HD Blu-ray Release

                  -

                  King Kong is one of the most iconic and beloved movies of all time, and it has been remastered and re-released several times since its original debut in 2005. The latest version, the Ultimate Edition, is a 4K Ultra HD Blu-ray that features both the theatrical cut and the extended edition of the film, as well as a wealth of bonus materials and extras. But is it worth upgrading to this new format? Here are some reasons why you might want to consider adding this masterpiece to your collection.

                  -
                    -
                  • Stunning visuals: The 4K Ultra HD Blu-ray offers a significant improvement over the previous Blu-ray releases, with enhanced clarity, detail, color, and contrast. The film was shot on 35mm film, which gives it a natural grain and texture that is preserved in the 4K transfer. The HDR10 and Dolby Vision options also add more depth and realism to the image, especially in the dark scenes and the vibrant colors of Skull Island. The CGI effects of Kong and the other creatures still hold up remarkably well, and look even more impressive in 4K.
                  • Immersive audio: The 4K Ultra HD Blu-ray features a Dolby Atmos soundtrack that delivers a powerful and immersive sound experience. The film has a rich and dynamic score by James Newton Howard, as well as a plethora of sound effects that create a realistic and atmospheric soundscape. The dialogue is clear and balanced, and the surround channels are used effectively to create a sense of directionality and movement. The subwoofer also adds a lot of impact and rumble to the action scenes, especially when Kong roars or fights.
                  • -
                  • Comprehensive extras: The 4K Ultra HD Blu-ray comes with three discs: one for the theatrical cut, one for the extended edition, and one for the bonus materials. The theatrical cut runs for 187 minutes, while the extended edition adds 13 minutes of additional footage that expands on some character moments and action scenes. Both versions are worth watching, as they offer different perspectives on the story and the characters. The bonus disc contains over seven hours of documentaries, featurettes, deleted scenes, commentaries, and more that cover every aspect of the production, from pre-production to post-production. Some of the highlights include Peter Jackson's production diaries, which offer a candid and fascinating look at the daily challenges and achievements of making such an ambitious film; the post-production diaries, which show how the film was edited, scored, mixed, and finalized; and the making-of documentary "The Eighth Wonder of the World", which is a comprehensive and informative overview of the entire project.
                  • -
                  -

                  In conclusion, King Kong: The Ultimate Edition is a must-have for fans of the film and for anyone who appreciates epic filmmaking. It is a stunning presentation of a classic story that showcases Peter Jackson's vision and passion for cinema. It is also a tribute to the original King Kong from 1933, which inspired Jackson and many other filmmakers to pursue their dreams. If you are looking for a way to experience King Kong in its full glory, look no further than this 4K Ultra HD Blu-ray release.

                  -

                  King Kong 1080p Dual Audio Torrent


                  Download File: https://tinurll.com/2uzmLs



                  d5da3c52bf
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/ruslanmv/Clone-Your-Voice/vocoder/inference.py b/spaces/ruslanmv/Clone-Your-Voice/vocoder/inference.py deleted file mode 100644 index 7e546845da0b8cdb18b34fbd332b9aaa39cea55c..0000000000000000000000000000000000000000 --- a/spaces/ruslanmv/Clone-Your-Voice/vocoder/inference.py +++ /dev/null @@ -1,64 +0,0 @@ -from vocoder.models.fatchord_version import WaveRNN -from vocoder import hparams as hp -import torch - - -_model = None # type: WaveRNN - -def load_model(weights_fpath, verbose=True): - global _model, _device - - if verbose: - print("Building Wave-RNN") - _model = WaveRNN( - rnn_dims=hp.voc_rnn_dims, - fc_dims=hp.voc_fc_dims, - bits=hp.bits, - pad=hp.voc_pad, - upsample_factors=hp.voc_upsample_factors, - feat_dims=hp.num_mels, - compute_dims=hp.voc_compute_dims, - res_out_dims=hp.voc_res_out_dims, - res_blocks=hp.voc_res_blocks, - hop_length=hp.hop_length, - sample_rate=hp.sample_rate, - mode=hp.voc_mode - ) - - if torch.cuda.is_available(): - _model = _model.cuda() - _device = torch.device('cuda') - else: - _device = torch.device('cpu') - - if verbose: - print("Loading model weights at %s" % weights_fpath) - checkpoint = torch.load(weights_fpath, _device) - _model.load_state_dict(checkpoint['model_state']) - _model.eval() - - -def is_loaded(): - return _model is not None - - -def infer_waveform(mel, normalize=True, batched=True, target=8000, overlap=800, - progress_callback=None): - """ - Infers the waveform of a mel spectrogram output by the synthesizer (the format must match - that of the synthesizer!) - - :param normalize: - :param batched: - :param target: - :param overlap: - :return: - """ - if _model is None: - raise Exception("Please load Wave-RNN in memory before using it") - - if normalize: - mel = mel / hp.mel_max_abs_value - mel = torch.from_numpy(mel[None, ...]) - wav = _model.generate(mel, batched, target, overlap, hp.mu_law, progress_callback) - return wav diff --git a/spaces/salamat/first_app/app.py b/spaces/salamat/first_app/app.py deleted file mode 100644 index 0223c8d38a73a6019114954a4cf86527578e91ce..0000000000000000000000000000000000000000 --- a/spaces/salamat/first_app/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import streamlit as st -from transformers import pipeline -pipe=pipeline('sentiment-analysis') -text=st.text_area('enter text for sentiment analysis') -if text: - out=pipe(text) - st.json(out) \ No newline at end of file diff --git a/spaces/sayakpaul/evaluate-sd-schedulers/app.py b/spaces/sayakpaul/evaluate-sd-schedulers/app.py deleted file mode 100644 index 3f8dcd6dc3c828a82b4bc0ed3ab361f8e4a4b57e..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/evaluate-sd-schedulers/app.py +++ /dev/null @@ -1,225 +0,0 @@ -import importlib -from typing import List - -import gradio as gr -import numpy as np -import torch -from diffusers import StableDiffusionPipeline -from torchmetrics import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure - -from image_utils import make_grid, numpy_to_pil -from metrics_utils import compute_main_metrics, compute_psnr_or_ssim -from report_utils import add_psnr_ssim_to_report, prepare_report - -SEED = 0 -WEIGHT_DTYPE = torch.float16 - -TITLE = "Evaluate Schedulers with StableDiffusionPipeline 🧨" -ABSTRACT = """ -This Space allows you to quantitatively compare [different noise schedulers](https://huggingface.co/docs/diffusers/using-diffusers/schedulers) with a 
[`StableDiffusionPipeline`](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview). - -One of the applications of this Space could be to evaluate different schedulers for a certain Stable Diffusion checkpoint for a fixed number of inference steps. -""" -DESCRIPTION = """ -#### Hoes does it work? -* The evaluator first sets a seed and then generates the initial noise which is passed as the initial latent to start the image generation process. It is done to ensure fair comparison. -* This initial latent is used every time the pipeline is run (with different schedulers). -* To quantify the quality of the generated images we use: - * [Inception Score](https://en.wikipedia.org/wiki/Inception_score) - * [Clip Score](https://arxiv.org/abs/2104.08718) -#### Notes -* When selecting a model checkpoint, if you select "Other" you will have the option to provide a custom Stable Diffusion checkpoint. -* The default scheduler associated with the provided checkpoint is always used for reporting the scores. -* Increasing both the number of images per prompt and the number of inference steps could quickly build up the inference queue and thus -resulting in slowdowns. -""" - -psnr_fn = PeakSignalNoiseRatio() -ssim_fn = StructuralSimilarityIndexMeasure() - - -def initialize_pipeline(checkpoint: str): - sd_pipe = StableDiffusionPipeline.from_pretrained( - checkpoint, torch_dtype=WEIGHT_DTYPE - ) - sd_pipe = sd_pipe.to("cuda") - original_scheduler_config = sd_pipe.scheduler.config - return sd_pipe, original_scheduler_config - - -def get_scheduler(scheduler_name: str): - schedulers_lib = importlib.import_module("diffusers", package="schedulers") - scheduler_abs = getattr(schedulers_lib, scheduler_name) - - return scheduler_abs - - -def get_latents(num_images_per_prompt: int, seed=SEED): - generator = torch.manual_seed(seed) - latents = np.random.RandomState(seed).standard_normal( - (num_images_per_prompt, 4, 64, 64) - ) - latents = torch.from_numpy(latents).to(device="cuda", dtype=WEIGHT_DTYPE) - return latents - - -def run( - prompt: str, - num_images_per_prompt: int, - num_inference_steps: int, - checkpoint: str, - other_finedtuned_checkpoints: str = None, - schedulers_to_test: List[str] = None, - ssim: bool = False, - psnr: bool = False, - progress=gr.Progress(), -): - progress(0, desc="Starting...") - - if checkpoint == "Other" and other_finedtuned_checkpoints == "": - return "❌ No legit checkpoint provided ❌" - - elif checkpoint == "Other": - checkpoint = other_finedtuned_checkpoints - - all_images = {} - scheduler_images = {} - - # Set up the pipeline - sd_pipeline, original_scheduler_config = initialize_pipeline(checkpoint) - sd_pipeline.set_progress_bar_config(disable=True) - - # Prepare latents to start generation and the prompts. - latents = get_latents(num_images_per_prompt) - prompts = [prompt] * num_images_per_prompt - - original_scheduler_name = original_scheduler_config._class_name - schedulers_to_test.append(original_scheduler_name) - - # Start generating the images and computing their scores. 
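# Note on the loop below: each scheduler other than the checkpoint's default is
# looked up by name, re-instantiated from the original scheduler's config via
# `from_config` so it inherits compatible settings, and swapped into
# `sd_pipeline.scheduler` in place. The same fixed `latents` are passed to every
# run, so differences in the generated grids and in the Inception / CLIP scores
# can be attributed to the scheduler rather than to sampling noise.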
- for scheduler_name in progress.tqdm(schedulers_to_test): - if scheduler_name != original_scheduler_name: - scheduler_cls = get_scheduler(scheduler_name) - current_scheduler = scheduler_cls.from_config(original_scheduler_config) - sd_pipeline.scheduler = current_scheduler - - cur_scheduler_images = sd_pipeline( - prompts, - latents=latents, - num_inference_steps=num_inference_steps, - output_type="numpy", - ).images - all_images.update( - { - scheduler_name: { - "images": make_grid( - numpy_to_pil(cur_scheduler_images), 1, num_images_per_prompt - ), - "scores": compute_main_metrics(cur_scheduler_images, prompts), - } - } - ) - scheduler_images.update({scheduler_name: cur_scheduler_images}) - torch.cuda.empty_cache() - - # Prepare output report. - output_str = "" - for scheduler_name in all_images: - output_str += prepare_report(scheduler_name, all_images[scheduler_name]) - - # Append PSNR or SSIM if needed. - if len(schedulers_to_test) > 1: - ssim_scores = psnr_scores = None - if ssim: - ssim_scores = compute_psnr_or_ssim( - ssim_fn, scheduler_images, original_scheduler_name - ) - if psnr: - psnr_scores = compute_psnr_or_ssim( - psnr_fn, scheduler_images, original_scheduler_name - ) - - if len(schedulers_to_test) > 1: - ssim_psnr_str = add_psnr_ssim_to_report( - original_scheduler_name, ssim_scores, psnr_scores - ) - if ssim_psnr_str != "": - output_str += ssim_psnr_str - - return output_str - - -with gr.Blocks(title="Scheduler Evaluation") as demo: - gr.Markdown(f"## {TITLE}\n\n\n\n{ABSTRACT}") - - with gr.Row(): - with gr.Column(): - prompt = gr.Text( - max_lines=1, placeholder="a painting of a dog", label="prompt" - ) - num_images_per_prompt = gr.Slider( - 3, 10, value=3, step=1, label="num_images_per_prompt" - ) - num_inference_steps = gr.Slider( - 10, 100, value=50, step=1, label="num_inference_steps" - ) - model_ckpt = gr.Dropdown( - [ - "CompVis/stable-diffusion-v1-4", - "runwayml/stable-diffusion-v1-5", - "stabilityai/stable-diffusion-2-base", - "Other", - ], - value="CompVis/stable-diffusion-v1-4", - multiselect=False, - interactive=True, - label="model_ckpt", - ) - other_finedtuned_checkpoints = gr.Textbox( - visible=False, - interactive=True, - placeholder="valhalla/sd-pokemon-model", - label="custom_checkpoint", - ) - model_ckpt.change( - lambda x: gr.Dropdown.update(visible=x == "Other"), - model_ckpt, - other_finedtuned_checkpoints, - ) - schedulers_to_test = gr.Dropdown( - [ - "EulerDiscreteScheduler", - "PNDMScheduler", - "LMSDiscreteScheduler", - "DPMSolverMultistepScheduler", - "DDIMScheduler", - ], - value=["LMSDiscreteScheduler"], - multiselect=True, - label="schedulers_to_test", - ) - ssim = gr.Checkbox(label="Compute SSIM") - psnr = gr.Checkbox(label="Compute PSNR") - evaluation_button = gr.Button(value="Submit") - - with gr.Column(): - report = gr.Markdown(label="Evaluation Report").style() - - evaluation_button.click( - run, - inputs=[ - prompt, - num_images_per_prompt, - num_inference_steps, - model_ckpt, - other_finedtuned_checkpoints, - schedulers_to_test, - ssim, - psnr, - ], - outputs=report, - ) - - gr.Markdown(f"{DESCRIPTION}") - -demo.queue().launch(debug=True) diff --git a/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface.py b/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface.py deleted file mode 100644 index 02593556d88a90232bbe55a062875f4af4520621..0000000000000000000000000000000000000000 --- a/spaces/sczhou/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface.py +++ /dev/null @@ -1,370 +0,0 @@ 
-import cv2 -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from PIL import Image -from torchvision.models._utils import IntermediateLayerGetter as IntermediateLayerGetter - -from facelib.detection.align_trans import get_reference_facial_points, warp_and_crop_face -from facelib.detection.retinaface.retinaface_net import FPN, SSH, MobileNetV1, make_bbox_head, make_class_head, make_landmark_head -from facelib.detection.retinaface.retinaface_utils import (PriorBox, batched_decode, batched_decode_landm, decode, decode_landm, - py_cpu_nms) - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - - -def generate_config(network_name): - - cfg_mnet = { - 'name': 'mobilenet0.25', - 'min_sizes': [[16, 32], [64, 128], [256, 512]], - 'steps': [8, 16, 32], - 'variance': [0.1, 0.2], - 'clip': False, - 'loc_weight': 2.0, - 'gpu_train': True, - 'batch_size': 32, - 'ngpu': 1, - 'epoch': 250, - 'decay1': 190, - 'decay2': 220, - 'image_size': 640, - 'return_layers': { - 'stage1': 1, - 'stage2': 2, - 'stage3': 3 - }, - 'in_channel': 32, - 'out_channel': 64 - } - - cfg_re50 = { - 'name': 'Resnet50', - 'min_sizes': [[16, 32], [64, 128], [256, 512]], - 'steps': [8, 16, 32], - 'variance': [0.1, 0.2], - 'clip': False, - 'loc_weight': 2.0, - 'gpu_train': True, - 'batch_size': 24, - 'ngpu': 4, - 'epoch': 100, - 'decay1': 70, - 'decay2': 90, - 'image_size': 840, - 'return_layers': { - 'layer2': 1, - 'layer3': 2, - 'layer4': 3 - }, - 'in_channel': 256, - 'out_channel': 256 - } - - if network_name == 'mobile0.25': - return cfg_mnet - elif network_name == 'resnet50': - return cfg_re50 - else: - raise NotImplementedError(f'network_name={network_name}') - - -class RetinaFace(nn.Module): - - def __init__(self, network_name='resnet50', half=False, phase='test'): - super(RetinaFace, self).__init__() - self.half_inference = half - cfg = generate_config(network_name) - self.backbone = cfg['name'] - - self.model_name = f'retinaface_{network_name}' - self.cfg = cfg - self.phase = phase - self.target_size, self.max_size = 1600, 2150 - self.resize, self.scale, self.scale1 = 1., None, None - self.mean_tensor = torch.tensor([[[[104.]], [[117.]], [[123.]]]]).to(device) - self.reference = get_reference_facial_points(default_square=True) - # Build network. 
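# Orientation for the construction below: the chosen backbone (MobileNetV1 for
# 'mobilenet0.25', torchvision's ResNet-50 otherwise) is wrapped in an
# IntermediateLayerGetter so that three intermediate feature maps are exposed.
# Those maps feed a three-level FPN, each FPN level is refined by an SSH context
# module, and separate class / bbox / landmark heads (one per level) produce the
# raw predictions. The model is then moved to `device`, set to eval mode, and
# optionally cast to half precision for inference.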
- backbone = None - if cfg['name'] == 'mobilenet0.25': - backbone = MobileNetV1() - self.body = IntermediateLayerGetter(backbone, cfg['return_layers']) - elif cfg['name'] == 'Resnet50': - import torchvision.models as models - backbone = models.resnet50(pretrained=False) - self.body = IntermediateLayerGetter(backbone, cfg['return_layers']) - - in_channels_stage2 = cfg['in_channel'] - in_channels_list = [ - in_channels_stage2 * 2, - in_channels_stage2 * 4, - in_channels_stage2 * 8, - ] - - out_channels = cfg['out_channel'] - self.fpn = FPN(in_channels_list, out_channels) - self.ssh1 = SSH(out_channels, out_channels) - self.ssh2 = SSH(out_channels, out_channels) - self.ssh3 = SSH(out_channels, out_channels) - - self.ClassHead = make_class_head(fpn_num=3, inchannels=cfg['out_channel']) - self.BboxHead = make_bbox_head(fpn_num=3, inchannels=cfg['out_channel']) - self.LandmarkHead = make_landmark_head(fpn_num=3, inchannels=cfg['out_channel']) - - self.to(device) - self.eval() - if self.half_inference: - self.half() - - def forward(self, inputs): - out = self.body(inputs) - - if self.backbone == 'mobilenet0.25' or self.backbone == 'Resnet50': - out = list(out.values()) - # FPN - fpn = self.fpn(out) - - # SSH - feature1 = self.ssh1(fpn[0]) - feature2 = self.ssh2(fpn[1]) - feature3 = self.ssh3(fpn[2]) - features = [feature1, feature2, feature3] - - bbox_regressions = torch.cat([self.BboxHead[i](feature) for i, feature in enumerate(features)], dim=1) - classifications = torch.cat([self.ClassHead[i](feature) for i, feature in enumerate(features)], dim=1) - tmp = [self.LandmarkHead[i](feature) for i, feature in enumerate(features)] - ldm_regressions = (torch.cat(tmp, dim=1)) - - if self.phase == 'train': - output = (bbox_regressions, classifications, ldm_regressions) - else: - output = (bbox_regressions, F.softmax(classifications, dim=-1), ldm_regressions) - return output - - def __detect_faces(self, inputs): - # get scale - height, width = inputs.shape[2:] - self.scale = torch.tensor([width, height, width, height], dtype=torch.float32).to(device) - tmp = [width, height, width, height, width, height, width, height, width, height] - self.scale1 = torch.tensor(tmp, dtype=torch.float32).to(device) - - # forawrd - inputs = inputs.to(device) - if self.half_inference: - inputs = inputs.half() - loc, conf, landmarks = self(inputs) - - # get priorbox - priorbox = PriorBox(self.cfg, image_size=inputs.shape[2:]) - priors = priorbox.forward().to(device) - - return loc, conf, landmarks, priors - - # single image detection - def transform(self, image, use_origin_size): - # convert to opencv format - if isinstance(image, Image.Image): - image = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR) - image = image.astype(np.float32) - - # testing scale - im_size_min = np.min(image.shape[0:2]) - im_size_max = np.max(image.shape[0:2]) - resize = float(self.target_size) / float(im_size_min) - - # prevent bigger axis from being more than max_size - if np.round(resize * im_size_max) > self.max_size: - resize = float(self.max_size) / float(im_size_max) - resize = 1 if use_origin_size else resize - - # resize - if resize != 1: - image = cv2.resize(image, None, None, fx=resize, fy=resize, interpolation=cv2.INTER_LINEAR) - - # convert to torch.tensor format - # image -= (104, 117, 123) - image = image.transpose(2, 0, 1) - image = torch.from_numpy(image).unsqueeze(0) - - return image, resize - - def detect_faces( - self, - image, - conf_threshold=0.8, - nms_threshold=0.4, - use_origin_size=True, - ): - """ - Params: - imgs: BGR 
image - """ - image, self.resize = self.transform(image, use_origin_size) - image = image.to(device) - if self.half_inference: - image = image.half() - image = image - self.mean_tensor - - loc, conf, landmarks, priors = self.__detect_faces(image) - - boxes = decode(loc.data.squeeze(0), priors.data, self.cfg['variance']) - boxes = boxes * self.scale / self.resize - boxes = boxes.cpu().numpy() - - scores = conf.squeeze(0).data.cpu().numpy()[:, 1] - - landmarks = decode_landm(landmarks.squeeze(0), priors, self.cfg['variance']) - landmarks = landmarks * self.scale1 / self.resize - landmarks = landmarks.cpu().numpy() - - # ignore low scores - inds = np.where(scores > conf_threshold)[0] - boxes, landmarks, scores = boxes[inds], landmarks[inds], scores[inds] - - # sort - order = scores.argsort()[::-1] - boxes, landmarks, scores = boxes[order], landmarks[order], scores[order] - - # do NMS - bounding_boxes = np.hstack((boxes, scores[:, np.newaxis])).astype(np.float32, copy=False) - keep = py_cpu_nms(bounding_boxes, nms_threshold) - bounding_boxes, landmarks = bounding_boxes[keep, :], landmarks[keep] - # self.t['forward_pass'].toc() - # print(self.t['forward_pass'].average_time) - # import sys - # sys.stdout.flush() - return np.concatenate((bounding_boxes, landmarks), axis=1) - - def __align_multi(self, image, boxes, landmarks, limit=None): - - if len(boxes) < 1: - return [], [] - - if limit: - boxes = boxes[:limit] - landmarks = landmarks[:limit] - - faces = [] - for landmark in landmarks: - facial5points = [[landmark[2 * j], landmark[2 * j + 1]] for j in range(5)] - - warped_face = warp_and_crop_face(np.array(image), facial5points, self.reference, crop_size=(112, 112)) - faces.append(warped_face) - - return np.concatenate((boxes, landmarks), axis=1), faces - - def align_multi(self, img, conf_threshold=0.8, limit=None): - - rlt = self.detect_faces(img, conf_threshold=conf_threshold) - boxes, landmarks = rlt[:, 0:5], rlt[:, 5:] - - return self.__align_multi(img, boxes, landmarks, limit) - - # batched detection - def batched_transform(self, frames, use_origin_size): - """ - Arguments: - frames: a list of PIL.Image, or torch.Tensor(shape=[n, h, w, c], - type=np.float32, BGR format). - use_origin_size: whether to use origin size. 
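        Note:
            Unless use_origin_size is True, frames are rescaled so that the shorter
            side matches self.target_size (1600), with the scale clamped so that the
            longer side does not exceed self.max_size (2150); the frames are then
            converted to an NCHW torch.Tensor before the forward pass.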
- """ - from_PIL = True if isinstance(frames[0], Image.Image) else False - - # convert to opencv format - if from_PIL: - frames = [cv2.cvtColor(np.asarray(frame), cv2.COLOR_RGB2BGR) for frame in frames] - frames = np.asarray(frames, dtype=np.float32) - - # testing scale - im_size_min = np.min(frames[0].shape[0:2]) - im_size_max = np.max(frames[0].shape[0:2]) - resize = float(self.target_size) / float(im_size_min) - - # prevent bigger axis from being more than max_size - if np.round(resize * im_size_max) > self.max_size: - resize = float(self.max_size) / float(im_size_max) - resize = 1 if use_origin_size else resize - - # resize - if resize != 1: - if not from_PIL: - frames = F.interpolate(frames, scale_factor=resize) - else: - frames = [ - cv2.resize(frame, None, None, fx=resize, fy=resize, interpolation=cv2.INTER_LINEAR) - for frame in frames - ] - - # convert to torch.tensor format - if not from_PIL: - frames = frames.transpose(1, 2).transpose(1, 3).contiguous() - else: - frames = frames.transpose((0, 3, 1, 2)) - frames = torch.from_numpy(frames) - - return frames, resize - - def batched_detect_faces(self, frames, conf_threshold=0.8, nms_threshold=0.4, use_origin_size=True): - """ - Arguments: - frames: a list of PIL.Image, or np.array(shape=[n, h, w, c], - type=np.uint8, BGR format). - conf_threshold: confidence threshold. - nms_threshold: nms threshold. - use_origin_size: whether to use origin size. - Returns: - final_bounding_boxes: list of np.array ([n_boxes, 5], - type=np.float32). - final_landmarks: list of np.array ([n_boxes, 10], type=np.float32). - """ - # self.t['forward_pass'].tic() - frames, self.resize = self.batched_transform(frames, use_origin_size) - frames = frames.to(device) - frames = frames - self.mean_tensor - - b_loc, b_conf, b_landmarks, priors = self.__detect_faces(frames) - - final_bounding_boxes, final_landmarks = [], [] - - # decode - priors = priors.unsqueeze(0) - b_loc = batched_decode(b_loc, priors, self.cfg['variance']) * self.scale / self.resize - b_landmarks = batched_decode_landm(b_landmarks, priors, self.cfg['variance']) * self.scale1 / self.resize - b_conf = b_conf[:, :, 1] - - # index for selection - b_indice = b_conf > conf_threshold - - # concat - b_loc_and_conf = torch.cat((b_loc, b_conf.unsqueeze(-1)), dim=2).float() - - for pred, landm, inds in zip(b_loc_and_conf, b_landmarks, b_indice): - - # ignore low scores - pred, landm = pred[inds, :], landm[inds, :] - if pred.shape[0] == 0: - final_bounding_boxes.append(np.array([], dtype=np.float32)) - final_landmarks.append(np.array([], dtype=np.float32)) - continue - - # sort - # order = score.argsort(descending=True) - # box, landm, score = box[order], landm[order], score[order] - - # to CPU - bounding_boxes, landm = pred.cpu().numpy(), landm.cpu().numpy() - - # NMS - keep = py_cpu_nms(bounding_boxes, nms_threshold) - bounding_boxes, landmarks = bounding_boxes[keep, :], landm[keep] - - # append - final_bounding_boxes.append(bounding_boxes) - final_landmarks.append(landmarks) - # self.t['forward_pass'].toc(average=True) - # self.batch_time += self.t['forward_pass'].diff - # self.total_frame += len(frames) - # print(self.batch_time / self.total_frame) - - return final_bounding_boxes, final_landmarks diff --git a/spaces/sdhsdhk/bingo111/src/components/markdown.tsx b/spaces/sdhsdhk/bingo111/src/components/markdown.tsx deleted file mode 100644 index d4491467a1f14d1d72e535caac9c40636054e5df..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingo111/src/components/markdown.tsx +++ /dev/null @@ 
-1,9 +0,0 @@ -import { FC, memo } from 'react' -import ReactMarkdown, { Options } from 'react-markdown' - -export const MemoizedReactMarkdown: FC = memo( - ReactMarkdown, - (prevProps, nextProps) => - prevProps.children === nextProps.children && - prevProps.className === nextProps.className -) diff --git a/spaces/segments-tobias/conex/espnet2/optimizers/__init__.py b/spaces/segments-tobias/conex/espnet2/optimizers/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py b/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py deleted file mode 100644 index 489d501bef364020212306d81e9b85c8daa27491..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py +++ /dev/null @@ -1,413 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from: -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/functions/ms_deform_attn_func.py -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/modules/ms_deform_attn.py -# https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/multi_scale_deform_attn.py -# ------------------------------------------------------------------------------------------------ - -import math -import warnings -from typing import Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.init import constant_, xavier_uniform_ - -try: - from groundingdino import _C -except: - warnings.warn("Failed to load custom C++ ops. 
Running on CPU mode Only!") - - -# helpers -def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError("invalid input for _is_power_of_2: {} (type: {})".format(n, type(n))) - return (n & (n - 1) == 0) and n != 0 - - -class MultiScaleDeformableAttnFunction(Function): - @staticmethod - def forward( - ctx, - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - im2col_step, - ): - ctx.im2col_step = im2col_step - output = _C.ms_deform_attn_forward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ctx.im2col_step, - ) - ctx.save_for_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - ( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) = ctx.saved_tensors - grad_value, grad_sampling_loc, grad_attn_weight = _C.ms_deform_attn_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - grad_output, - ctx.im2col_step, - ) - - return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None - - -def multi_scale_deformable_attn_pytorch( - value: torch.Tensor, - value_spatial_shapes: torch.Tensor, - sampling_locations: torch.Tensor, - attention_weights: torch.Tensor, -) -> torch.Tensor: - - bs, _, num_heads, embed_dims = value.shape - _, num_queries, num_heads, num_levels, num_points, _ = sampling_locations.shape - value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1) - sampling_grids = 2 * sampling_locations - 1 - sampling_value_list = [] - for level, (H_, W_) in enumerate(value_spatial_shapes): - # bs, H_*W_, num_heads, embed_dims -> - # bs, H_*W_, num_heads*embed_dims -> - # bs, num_heads*embed_dims, H_*W_ -> - # bs*num_heads, embed_dims, H_, W_ - value_l_ = ( - value_list[level].flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_) - ) - # bs, num_queries, num_heads, num_points, 2 -> - # bs, num_heads, num_queries, num_points, 2 -> - # bs*num_heads, num_queries, num_points, 2 - sampling_grid_l_ = sampling_grids[:, :, :, level].transpose(1, 2).flatten(0, 1) - # bs*num_heads, embed_dims, num_queries, num_points - sampling_value_l_ = F.grid_sample( - value_l_, sampling_grid_l_, mode="bilinear", padding_mode="zeros", align_corners=False - ) - sampling_value_list.append(sampling_value_l_) - # (bs, num_queries, num_heads, num_levels, num_points) -> - # (bs, num_heads, num_queries, num_levels, num_points) -> - # (bs, num_heads, 1, num_queries, num_levels*num_points) - attention_weights = attention_weights.transpose(1, 2).reshape( - bs * num_heads, 1, num_queries, num_levels * num_points - ) - output = ( - (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights) - .sum(-1) - .view(bs, num_heads * embed_dims, num_queries) - ) - return output.transpose(1, 2).contiguous() - - -class MultiScaleDeformableAttention(nn.Module): - """Multi-Scale Deformable Attention Module used in Deformable-DETR - - `Deformable DETR: Deformable Transformers for End-to-End Object Detection. - `_. - - Args: - embed_dim (int): The embedding dimension of Attention. Default: 256. - num_heads (int): The number of attention heads. Default: 8. - num_levels (int): The number of feature map used in Attention. Default: 4. 
- num_points (int): The number of sampling points for each query - in each head. Default: 4. - img2col_steps (int): The step used in image_to_column. Defualt: 64. - dropout (float): Dropout layer used in output. Default: 0.1. - batch_first (bool): if ``True``, then the input and output tensor will be - provided as `(bs, n, embed_dim)`. Default: False. `(n, bs, embed_dim)` - """ - - def __init__( - self, - embed_dim: int = 256, - num_heads: int = 8, - num_levels: int = 4, - num_points: int = 4, - img2col_step: int = 64, - batch_first: bool = False, - ): - super().__init__() - if embed_dim % num_heads != 0: - raise ValueError( - "embed_dim must be divisible by num_heads, but got {} and {}".format( - embed_dim, num_heads - ) - ) - head_dim = embed_dim // num_heads - - self.batch_first = batch_first - - if not _is_power_of_2(head_dim): - warnings.warn( - """ - You'd better set d_model in MSDeformAttn to make sure that - each dim of the attention head a power of 2, which is more efficient. - """ - ) - - self.im2col_step = img2col_step - self.embed_dim = embed_dim - self.num_heads = num_heads - self.num_levels = num_levels - self.num_points = num_points - self.sampling_offsets = nn.Linear(embed_dim, num_heads * num_levels * num_points * 2) - self.attention_weights = nn.Linear(embed_dim, num_heads * num_levels * num_points) - self.value_proj = nn.Linear(embed_dim, embed_dim) - self.output_proj = nn.Linear(embed_dim, embed_dim) - - self.init_weights() - - def _reset_parameters(self): - return self.init_weights() - - def init_weights(self): - """ - Default initialization for Parameters of Module. - """ - constant_(self.sampling_offsets.weight.data, 0.0) - thetas = torch.arange(self.num_heads, dtype=torch.float32) * ( - 2.0 * math.pi / self.num_heads - ) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = ( - (grid_init / grid_init.abs().max(-1, keepdim=True)[0]) - .view(self.num_heads, 1, 1, 2) - .repeat(1, self.num_levels, self.num_points, 1) - ) - for i in range(self.num_points): - grid_init[:, :, i, :] *= i + 1 - with torch.no_grad(): - self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1)) - constant_(self.attention_weights.weight.data, 0.0) - constant_(self.attention_weights.bias.data, 0.0) - xavier_uniform_(self.value_proj.weight.data) - constant_(self.value_proj.bias.data, 0.0) - xavier_uniform_(self.output_proj.weight.data) - constant_(self.output_proj.bias.data, 0.0) - - def freeze_sampling_offsets(self): - print("Freeze sampling offsets") - self.sampling_offsets.weight.requires_grad = False - self.sampling_offsets.bias.requires_grad = False - - def freeze_attention_weights(self): - print("Freeze attention weights") - self.attention_weights.weight.requires_grad = False - self.attention_weights.bias.requires_grad = False - - def forward( - self, - query: torch.Tensor, - key: Optional[torch.Tensor] = None, - value: Optional[torch.Tensor] = None, - query_pos: Optional[torch.Tensor] = None, - key_padding_mask: Optional[torch.Tensor] = None, - reference_points: Optional[torch.Tensor] = None, - spatial_shapes: Optional[torch.Tensor] = None, - level_start_index: Optional[torch.Tensor] = None, - **kwargs - ) -> torch.Tensor: - - """Forward Function of MultiScaleDeformableAttention - - Args: - query (torch.Tensor): Query embeddings with shape - `(num_query, bs, embed_dim)` - key (torch.Tensor): Key embeddings with shape - `(num_key, bs, embed_dim)` - value (torch.Tensor): Value embeddings with shape - `(num_key, bs, embed_dim)` - query_pos (torch.Tensor): The position 
embedding for `query`. Default: None. - key_padding_mask (torch.Tensor): ByteTensor for `query`, with shape `(bs, num_key)`, - indicating which elements within `key` to be ignored in attention. - reference_points (torch.Tensor): The normalized reference points - with shape `(bs, num_query, num_levels, 2)`, - all elements is range in [0, 1], top-left (0, 0), - bottom-right (1, 1), including padding are. - or `(N, Length_{query}, num_levels, 4)`, add additional - two dimensions `(h, w)` to form reference boxes. - spatial_shapes (torch.Tensor): Spatial shape of features in different levels. - With shape `(num_levels, 2)`, last dimension represents `(h, w)`. - level_start_index (torch.Tensor): The start index of each level. A tensor with - shape `(num_levels, )` which can be represented as - `[0, h_0 * w_0, h_0 * w_0 + h_1 * w_1, ...]`. - - Returns: - torch.Tensor: forward results with shape `(num_query, bs, embed_dim)` - """ - - if value is None: - value = query - - if query_pos is not None: - query = query + query_pos - - if not self.batch_first: - # change to (bs, num_query ,embed_dims) - query = query.permute(1, 0, 2) - value = value.permute(1, 0, 2) - - bs, num_query, _ = query.shape - bs, num_value, _ = value.shape - - assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value - - value = self.value_proj(value) - if key_padding_mask is not None: - value = value.masked_fill(key_padding_mask[..., None], float(0)) - value = value.view(bs, num_value, self.num_heads, -1) - sampling_offsets = self.sampling_offsets(query).view( - bs, num_query, self.num_heads, self.num_levels, self.num_points, 2 - ) - attention_weights = self.attention_weights(query).view( - bs, num_query, self.num_heads, self.num_levels * self.num_points - ) - attention_weights = attention_weights.softmax(-1) - attention_weights = attention_weights.view( - bs, - num_query, - self.num_heads, - self.num_levels, - self.num_points, - ) - - # bs, num_query, num_heads, num_levels, num_points, 2 - if reference_points.shape[-1] == 2: - offset_normalizer = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], -1) - sampling_locations = ( - reference_points[:, :, None, :, None, :] - + sampling_offsets / offset_normalizer[None, None, None, :, None, :] - ) - elif reference_points.shape[-1] == 4: - sampling_locations = ( - reference_points[:, :, None, :, None, :2] - + sampling_offsets - / self.num_points - * reference_points[:, :, None, :, None, 2:] - * 0.5 - ) - else: - raise ValueError( - "Last dim of reference_points must be 2 or 4, but get {} instead.".format( - reference_points.shape[-1] - ) - ) - - if torch.cuda.is_available() and value.is_cuda: - halffloat = False - if value.dtype == torch.float16: - halffloat = True - value = value.float() - sampling_locations = sampling_locations.float() - attention_weights = attention_weights.float() - - output = MultiScaleDeformableAttnFunction.apply( - value, - spatial_shapes, - level_start_index, - sampling_locations, - attention_weights, - self.im2col_step, - ) - - if halffloat: - output = output.half() - else: - output = multi_scale_deformable_attn_pytorch( - value, spatial_shapes, sampling_locations, attention_weights - ) - - output = self.output_proj(output) - - if not self.batch_first: - output = output.permute(1, 0, 2) - - return output - - -def create_dummy_class(klass, dependency, message=""): - """ - When a dependency of a class is not available, create a dummy class which throws ImportError - when used. - - Args: - klass (str): name of the class. 
- dependency (str): name of the dependency. - message: extra message to print - Returns: - class: a class object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, klass) - if message: - err = err + " " + message - - class _DummyMetaClass(type): - # throw error on class attribute access - def __getattr__(_, __): # noqa: B902 - raise ImportError(err) - - class _Dummy(object, metaclass=_DummyMetaClass): - # throw error on constructor - def __init__(self, *args, **kwargs): - raise ImportError(err) - - return _Dummy - - -def create_dummy_func(func, dependency, message=""): - """ - When a dependency of a function is not available, create a dummy function which throws - ImportError when used. - - Args: - func (str): name of the function. - dependency (str or list[str]): name(s) of the dependency. - message: extra message to print - Returns: - function: a function object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, func) - if message: - err = err + " " + message - - if isinstance(dependency, (list, tuple)): - dependency = ",".join(dependency) - - def _dummy(*args, **kwargs): - raise ImportError(err) - - return _dummy diff --git a/spaces/segments/panoptic-segment-anything/segment_anything/segment_anything/build_sam.py b/spaces/segments/panoptic-segment-anything/segment_anything/segment_anything/build_sam.py deleted file mode 100644 index 07abfca24e96eced7f13bdefd3212ce1b77b8999..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything/segment_anything/segment_anything/build_sam.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
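# Overview: build_sam_vit_h / build_sam_vit_l / build_sam_vit_b construct the same
# Sam architecture and differ only in the ViT image-encoder size (embedding dim,
# depth, number of heads, and which blocks use global attention). All variants
# share the prompt-encoder / mask-decoder configuration assembled in _build_sam,
# and an optional checkpoint path is loaded as a plain torch state dict.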
- -import torch - -from functools import partial - -from .modeling import ImageEncoderViT, MaskDecoder, PromptEncoder, Sam, TwoWayTransformer - - -def build_sam_vit_h(checkpoint=None): - return _build_sam( - encoder_embed_dim=1280, - encoder_depth=32, - encoder_num_heads=16, - encoder_global_attn_indexes=[7, 15, 23, 31], - checkpoint=checkpoint, - ) - - -build_sam = build_sam_vit_h - - -def build_sam_vit_l(checkpoint=None): - return _build_sam( - encoder_embed_dim=1024, - encoder_depth=24, - encoder_num_heads=16, - encoder_global_attn_indexes=[5, 11, 17, 23], - checkpoint=checkpoint, - ) - - -def build_sam_vit_b(checkpoint=None): - return _build_sam( - encoder_embed_dim=768, - encoder_depth=12, - encoder_num_heads=12, - encoder_global_attn_indexes=[2, 5, 8, 11], - checkpoint=checkpoint, - ) - - -sam_model_registry = { - "default": build_sam, - "vit_h": build_sam, - "vit_l": build_sam_vit_l, - "vit_b": build_sam_vit_b, -} - - -def _build_sam( - encoder_embed_dim, - encoder_depth, - encoder_num_heads, - encoder_global_attn_indexes, - checkpoint=None, -): - prompt_embed_dim = 256 - image_size = 1024 - vit_patch_size = 16 - image_embedding_size = image_size // vit_patch_size - sam = Sam( - image_encoder=ImageEncoderViT( - depth=encoder_depth, - embed_dim=encoder_embed_dim, - img_size=image_size, - mlp_ratio=4, - norm_layer=partial(torch.nn.LayerNorm, eps=1e-6), - num_heads=encoder_num_heads, - patch_size=vit_patch_size, - qkv_bias=True, - use_rel_pos=True, - global_attn_indexes=encoder_global_attn_indexes, - window_size=14, - out_chans=prompt_embed_dim, - ), - prompt_encoder=PromptEncoder( - embed_dim=prompt_embed_dim, - image_embedding_size=(image_embedding_size, image_embedding_size), - input_image_size=(image_size, image_size), - mask_in_chans=16, - ), - mask_decoder=MaskDecoder( - num_multimask_outputs=3, - transformer=TwoWayTransformer( - depth=2, - embedding_dim=prompt_embed_dim, - mlp_dim=2048, - num_heads=8, - ), - transformer_dim=prompt_embed_dim, - iou_head_depth=3, - iou_head_hidden_dim=256, - ), - pixel_mean=[123.675, 116.28, 103.53], - pixel_std=[58.395, 57.12, 57.375], - ) - sam.eval() - if checkpoint is not None: - with open(checkpoint, "rb") as f: - state_dict = torch.load(f) - sam.load_state_dict(state_dict) - return sam diff --git a/spaces/sgxz/bingo/src/components/chat-message.tsx b/spaces/sgxz/bingo/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/sgxz/bingo/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
                  -
                  - {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

                  {children}

                  - }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
                  -
                  -
                  - {message.author === 'bot' && } - {message.author === 'bot' && } -
                  -
                  - ) : null -} diff --git a/spaces/shgao/EditAnything/ldm/modules/midas/midas/midas_net.py b/spaces/shgao/EditAnything/ldm/modules/midas/midas/midas_net.py deleted file mode 100644 index 8a954977800b0a0f48807e80fa63041910e33c1f..0000000000000000000000000000000000000000 --- a/spaces/shgao/EditAnything/ldm/modules/midas/midas/midas_net.py +++ /dev/null @@ -1,76 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, Interpolate, _make_encoder - - -class MidasNet(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=256, non_negative=True): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet, self).__init__() - - use_pretrained = False if path is None else True - - self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained) - - self.scratch.refinenet4 = FeatureFusionBlock(features) - self.scratch.refinenet3 = FeatureFusionBlock(features) - self.scratch.refinenet2 = FeatureFusionBlock(features) - self.scratch.refinenet1 = FeatureFusionBlock(features) - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - ) - - if path: - self.load(path) - - def forward(self, x): - """Forward pass. 
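        The four stages of the ResNeXt-101-WSL encoder are projected by the
        scratch layers, fused top-down by the four FeatureFusionBlocks, and the
        output head upsamples the result into a single-channel depth prediction.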
- - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) diff --git a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seecoder_decoder.py b/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seecoder_decoder.py deleted file mode 100644 index 49ba6f5fa550d183d2424dc39eeddc6a78d64f97..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seecoder_decoder.py +++ /dev/null @@ -1,15 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.init import xavier_uniform_, constant_, uniform_, normal_ - -from lib.model_zoo.common.get_model import get_model, register - -from .seecoder_utils import PositionEmbeddingSine, _get_clones, \ - _get_activation_fn, _is_power_of_2, c2_xavier_fill, Conv2d_Convenience - -########### -# modules # -########### - diff --git a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/swin.py b/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/swin.py deleted file mode 100644 index e6191009f528911b2b9cb518550ec9c48204bdb6..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/swin.py +++ /dev/null @@ -1,659 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from lib.model_zoo.common.get_model import register - - -############################## -# timm.models.layers helpers # -############################## - -def drop_path(x, drop_prob: float = 0., training: bool = False, scale_by_keep: bool = True): - if drop_prob == 0. or not training: - return x - keep_prob = 1 - drop_prob - shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = x.new_empty(shape).bernoulli_(keep_prob) - if keep_prob > 0.0 and scale_by_keep: - random_tensor.div_(keep_prob) - return x * random_tensor - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - """ - def __init__(self, drop_prob: float = 0., scale_by_keep: bool = True): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - self.scale_by_keep = scale_by_keep - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training, self.scale_by_keep) - - def extra_repr(self): - return f'drop_prob={round(self.drop_prob,3):0.3f}' - -def _ntuple(n): - def parse(x): - from itertools import repeat - import collections.abc - if isinstance(x, collections.abc.Iterable) and not isinstance(x, str): - return tuple(x) - return tuple(repeat(x, n)) - return parse - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple - -def _trunc_normal_(tensor, mean, std, a, b): - import warnings - import math - - def norm_cdf(x): - return (1. 
+ math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. " - "The distribution of values may be incorrect.", - stacklevel=2) - - l = norm_cdf((a - mean) / std) - u = norm_cdf((b - mean) / std) - tensor.uniform_(2 * l - 1, 2 * u - 1) - tensor.erfinv_() - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - tensor.clamp_(min=a, max=b) - return tensor - -def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.): - with torch.no_grad(): - return _trunc_normal_(tensor, mean, std, a, b) - -############# -# main swin # -############# - -class Mlp(nn.Module): - """ Multilayer perceptron.""" - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """ Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w], indexing='ij')) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """ Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. 
Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - -class PatchMerging(nn.Module): - """ Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
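        Note:
            Each non-overlapping 2x2 neighborhood is gathered (x0..x3), concatenated
            along channels to 4*C, normalized, and projected back to 2*C by a
            bias-free linear layer, so both spatial dimensions are halved.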
- """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop=0., - attn_drop=0., - drop_path=0., - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """ Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
- """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device, dtype=x.dtype) # 1 Hp Wp 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -@register('swin') -class SwinTransformer(nn.Module): - """ Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. 
- attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - use_checkpoint=False): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]] - - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(embed_dim * 2 ** i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint) - self.layers.append(layer) - - num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in out_indices: - layer = norm_layer(num_features[i_layer]) - layer_name = f'norm{i_layer}' - self.add_module(layer_name, layer) - - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the 
position embedding to the corresponding size - absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic') - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - outs = [] - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - - if i in self.out_indices: - norm_layer = getattr(self, f'norm{i}') - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - outs.append(out) - - outputs = { - 'res2' : outs[0], - 'res3' : outs[1], - 'res4' : outs[2], - 'res5' : outs[3],} - return outputs - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - return self diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/ops/__init__.py b/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/ops/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Epik APK Mod and Transform Your Photos with Stunning Tools and Options.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Epik APK Mod and Transform Your Photos with Stunning Tools and Options.md deleted file mode 100644 index 1d155e2d1ace963ba94114ab532dc69d798b2d3a..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Epik APK Mod and Transform Your Photos with Stunning Tools and Options.md +++ /dev/null @@ -1,102 +0,0 @@ - -

                  How to Download Epik APK Mod for Android

                  -

                  If you are looking for a way to edit your photos with amazing filters, stickers, and effects, then you should try Epik APK Mod. This is a modified version of the original Epik app that allows you to access all the premium features for free. In this article, we will show you what is Epik APK Mod, how to download and install it on your Android device, and what are the benefits of using it.

                  -

                  download epik apk mod


                  Download Zip ->>> https://ssurll.com/2uNQz8



                  -

                  What is Epik APK Mod?

                  -

                  Epik APK Mod is an application that lets you edit your photos with various tools and options. You can apply different filters, add stickers, adjust brightness, contrast, saturation, crop, rotate, and more. You can also create collages, memes, gifs, and videos with your photos. Epik APK Mod is a modified version of the original Epik app that was released by a third-party developer. This means that you can use all the features that are available in the original app without paying anything.

                  -

                  Features of Epik APK Mod

                  -

                  Epik APK Mod has many features that make it one of the best photo editing apps for Android. Here are some of them:

                  -

                  Unlimited Filters

                  -

                  You can choose from hundreds of filters that suit your mood and style. You can also adjust the intensity and blend mode of each filter to get the perfect result.

                  -

                  Premium Stickers

                  -

                  You can add fun and cute stickers to your photos from various categories such as animals, emojis, cartoons, celebrities, etc. You can also resize, rotate, and flip them as you like.

                  -

                  No Watermark

                  -

                  You can save and share your photos without any watermark or logo on them. This way, you can show off your creativity without any distraction.

                  -

                  download epik mod apk premium unlocked
                  -download epik mod apk latest version
                  -download epik mod apk free for android
                  -download epik mod apk full feature
                  -download epik mod apk no watermark
                  -download epik mod apk 2023
                  -download epik mod apk v4.0.2
                  -download epik mod apk v3.3.2
                  -download epik mod apk unlimited money
                  -download epik mod apk pro
                  -download epik photo editor mod apk
                  -download epik video editor mod apk
                  -download epik music editor mod apk
                  -download epik collage maker mod apk
                  -download epik sticker maker mod apk
                  -download epik filter maker mod apk
                  -download epik font maker mod apk
                  -download epik meme maker mod apk
                  -download epik logo maker mod apk
                  -download epik gif maker mod apk
                  -how to download epik mod apk
                  -where to download epik mod apk
                  -why download epik mod apk
                  -what is epik mod apk
                  -who created epik mod apk
                  -benefits of downloading epik mod apk
                  -features of downloading epik mod apk
                  -reviews of downloading epik mod apk
                  -ratings of downloading epik mod apk
                  -alternatives of downloading epik mod apk
                  -tips for downloading epik mod apk
                  -tricks for downloading epik mod apk
                  -hacks for downloading epik mod apk
                  -cheats for downloading epik mod apk
                  -guides for downloading epik mod apk
                  -tutorials for downloading epik mod apk
                  -steps for downloading epik mod apk
                  -methods for downloading epik mod apk
                  -best sites for downloading epik mod apk
                  -best apps for downloading epik mod apk
                  -best sources for downloading epik mod apk
                  -best ways for downloading epik mod apk
                  -best practices for downloading epik mod apk
                  -best strategies for downloading epik mod apk
                  -best tools for downloading epik mod apk
                  -best resources for downloading epik mod apk
                  -best platforms for downloading epik mod apk
                  -best devices for downloading epik mod apk
                  -best formats for downloading epik mod apk
                  -best languages for downloading epik mod apk

                  -

                  Easy to Use

                  -

                  Epik APK Mod has a simple and user-friendly interface that makes it easy for anyone to use. You can edit your photos in just a few taps and swipe. You can also preview your edits before saving them.

                  -

                  How to Download and Install Epik APK Mod?

                  -

                  If you want to download and install Epik APK Mod on your Android device, you need to follow these steps:

                  -

                  Step 1: Enable Unknown Sources

                  -

                  Since Epik APK Mod is not available on the Google Play Store, you need to enable unknown sources on your device. This will allow you to install apps from other sources than the official store. To do this, go to Settings > Security > Unknown Sources and toggle it on.

                  -

                  Step 2: Download Epik APK Mod File

                  -

                  Next, you need to download the Epik APK Mod file from a reliable source. You can use the link below to download it directly to your device.

                  -

                  Download Epik APK Mod

                  -

                  Step 3: Install Epik APK Mod File

                  -

                  Once you have downloaded the file, you need to locate it on your device and tap on it. This will start the installation process. You may see a warning message that asks you to confirm the installation. Just tap on Install and wait for a few seconds.

                  -

                  Step 4: Open Epik APK Mod App

                  -

                  After the installation is complete, you can open the Epik APK Mod app from your app drawer or home screen. You can now enjoy editing your photos with all the premium features for free.
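If you prefer to work from a computer, you can also sideload the downloaded file with adb instead of tapping it on the phone. This is an optional sketch, not part of the original steps: it assumes USB debugging is enabled on the device, and the APK file name below is only a placeholder.

```bash
# Optional: sideload the downloaded APK from a computer with adb.
# Assumes USB debugging is enabled; "epik-mod.apk" is a placeholder file name.
adb devices               # confirm the phone shows up as a connected device
adb install epik-mod.apk  # use `adb install -r epik-mod.apk` to replace an already-installed copy
```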

                  -

                  Benefits of Using Epik APK Mod

                  -

                  Using Epik APK Mod has many benefits that make it worth trying. Here are some of them:

                  -

                  Enhance Your Photos

                  -

                  You can make your photos look more beautiful and professional with Epik APK Mod. You can apply filters, stickers, effects, and adjustments that suit your taste and preference. You can also create collages, memes, gifs, and videos with your photos and add some fun and creativity to them.

                  -

                  Share Your Creativity

                  -

                  You can share your edited photos with your friends and family on social media platforms such as Facebook, Instagram, Twitter, WhatsApp, etc. You can also save them to your device or cloud storage for future use. You can show off your skills and talent with Epik APK Mod.

                  -

                  Save Your Money

                  -

                  You can save your money by using Epik APK Mod instead of buying the original Epik app or other photo editing apps that charge you for their features. You can access all the premium features of Epik APK Mod for free without any limitations or restrictions. You can also save your data by using Epik APK Mod offline without any internet connection.

                  -

                  Conclusion

                  -

                  Epik APK Mod is a great photo editing app that allows you to edit your photos with amazing filters, stickers, and effects for free. You can download and install it on your Android device easily and safely by following the steps above. You can also enjoy the benefits of using Epik APK Mod such as enhancing your photos, sharing your creativity, and saving your money. If you are looking for a way to spice up your photos, then you should try Epik APK Mod today.

                  -

                  FAQs

                  -

                  Here are some frequently asked questions about Epik APK Mod:

                  -

                  Q: Is Epik APK Mod safe to use?

                  -

                  A: Yes, Epik APK Mod is safe to use as long as you download it from a trusted source. However, you should be careful when installing apps from unknown sources as they may contain malware or viruses that can harm your device.

                  -

                  Q: Is Epik APK Mod legal to use?

                  -

                  A: No, Epik APK Mod is not legal to use as it violates the terms and conditions of the original Epik app. It is also considered as piracy as it provides paid features for free. Therefore, we do not recommend using Epik APK Mod or any other modded apps.

                  -

                  Q: Does Epik APK Mod require root access?

                  -

                  A: No, Epik APK Mod does not require root access to work on your device. You can install it without rooting your device or modifying any system settings.

                  -

                  Q: Does Epik APK Mod support all Android devices?

                  -

                  A: Yes, Epik APK Mod supports all Android devices that run on Android 4.1 or higher versions. However, some features may not work properly on some devices due to compatibility issues.

                  -

                  Q: How can I update Epik APK Mod?

                  -

                  A: You can update Epik APK Mod by downloading the latest version from the same source that you downloaded it from. You can also check for updates from within the app by going to Settings > About > Check for Updates. However, you should be aware that updating Epik APK Mod may cause some issues or errors on your device.

                  -

                  I hope this article has helped you to learn how to download Epik APK Mod for Android. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and have a great day!

                  197e85843d
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Evertale Free APK and Enter a World of Fantasy and Adventure.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Evertale Free APK and Enter a World of Fantasy and Adventure.md deleted file mode 100644 index fbe4f7f5c80673cd776644d6e4e398438945180c..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Evertale Free APK and Enter a World of Fantasy and Adventure.md +++ /dev/null @@ -1,74 +0,0 @@ -
# Evertale Free Apk: How to Download and Play This Amazing RPG

If you are a fan of fantasy RPGs, you might have heard of Evertale, a game that has been compared to Pokémon due to its fighting style and monster collecting mechanics. Evertale takes you to the world of Erden, which you have to save from an ancient curse called the Pandemonium. You will join a band of heroes and explore various regions, cities, and dungeons, while catching, training, and evolving over 180 monsters and warriors.

evertale free apk

**Download File >>> https://ssurll.com/2uNTfH**

Evertale has received positive reviews from players and critics alike, with over 5 million downloads from the Google Play Store alone. However, the game is not free to play, as it costs $0.99 to download from the official app stores. If you want to play Evertale without paying anything, you might be interested in downloading Evertale free apk, which is a modified version of the game that you can enjoy for free.

                  -

                  How to Download Evertale Free Apk

                  -

                  Downloading Evertale free apk is not difficult, but you have to be careful about where you get it from. There are many websites that claim to offer Evertale free apk, but some of them might be scams or contain viruses that can harm your device. To avoid any risks, you should follow these steps:

                  -

                  Step 1: Find a reliable source for Evertale free apk

                  -

                  The first step is to find a website that offers Evertale free apk that is safe and working. You can do some research online or ask for recommendations from other players who have tried it. One of the websites that we recommend is [Games.lol](^4^), which is a platform that provides free game downloads for PC and mobile devices. You can find Evertale free apk on their website by searching for it or browsing their role-playing games category.

                  -

                  Step 2: Download and install Evertale free apk on your device

                  -

                  The second step is to download and install Evertale free apk on your device. You will need to enable the installation of apps from unknown sources on your device settings before doing so. This will allow you to install apps that are not from the official app stores. After enabling this option, you can download Evertale free apk from [Games.lol](^4^) by clicking on the download button and following the instructions. The installation process should take only a few minutes.
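If the download site publishes a checksum for the file, comparing it against the file you actually received is a quick extra sanity check before installing. This is an optional sketch, not part of the original steps, and the file name below is only a placeholder.

```bash
# Optional: print the SHA-256 hash of the downloaded file and compare it
# with the value published by the download site (if one is provided).
sha256sum evertale-free.apk   # on systems without sha256sum, `shasum -a 256` does the same job
```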

                  -

                  Step 3: Enjoy the game features and advantages of Evertale free apk

                  -

                  The third step is to enjoy the game features and advantages of Evertale free apk. By downloading this version of the game, you will be able to access all the content and features of Evertale without paying anything. You will also be able to play the game offline without any internet connection. Moreover, you will be able to use cheats and hacks that can make your gameplay easier and more fun.

                  -

                  Conclusion

                  -

                  Evertale is a game that deserves your attention if you love fantasy RPGs with monster collecting and battling elements. It has a captivating story, stunning graphics, strategic combat, and a variety of monsters and characters to choose from. However, if you don't want to spend money on downloading the game from the official app stores, you can try downloading Evertale free apk from [Games.lol](^4^), which is a safe and reliable source for free game downloads. By doing so, you will be able to enjoy all the benefits of playing Evertale for free.

                  -

                  FAQs

                  -

                  Here are some common questions about Evertale free apk that you might have:

                  -

                  Q: Is Evertale free apk safe to use?

                  -

                  A: Evertale free apk is safe to use as long as you download it from a trusted source like [Games.lol]. However, you should always be careful when installing apps from unknown sources and scan them for viruses before opening them.

                  -

                  evertale free download android
                  -evertale apk mod unlimited money
                  -evertale offline apk
                  -evertale game apk
                  -evertale apk latest version
                  -evertale free soul stones
                  -evertale apk obb
                  -evertale apk hack
                  -evertale free characters
                  -evertale apk full version
                  -evertale free gems
                  -evertale apk data
                  -evertale apk revdl
                  -evertale free coupon code
                  -evertale apk pure
                  -evertale free online
                  -evertale apk mirror
                  -evertale apk rexdl
                  -evertale free summon
                  -evertale apk uptodown
                  -evertale free play
                  -evertale apk no root
                  -evertale apk android 1
                  -evertale free codes
                  -evertale apk mob.org
                  -evertale free ios
                  -evertale apk android oyun club
                  -evertale apk happymod
                  -evertale free to play guide
                  -evertale apk andropalace
                  -evertale free account
                  -evertale apk android republic
                  -evertale apk platinmods
                  -evertale free tier list
                  -evertale apk apkpure.com
                  -evertale free legendary monsters
                  -evertale apk an1.com
                  -evertale apk mod menu
                  -evertale free ssr characters
                  -evertale apk apkmody.io
                  -evertale free gold coins
                  -evertale apk moddroid.com
                  -evertale apk unlimited everything
                  -evertale free weapons and armor

                  -

                  Q: Is Evertale free apk compatible with my device?

                  -

                  A: Evertale free apk is compatible with most Android devices that have Android 4.4 or higher. However, some devices may experience performance issues or bugs due to the modifications made to the game. If you encounter any problems, you can try adjusting the game settings or contacting the developer for support.

                  -

                  Q: Can I play Evertale free apk online with other players?

                  -

                  A: Yes, you can play Evertale free apk online with other players who have the same version of the game. You can join PvP leagues, guilds, and events and compete with or cooperate with other players. However, you may not be able to play with players who have the official version of the game or a different version of Evertale free apk.

                  -

                  Q: Can I update Evertale free apk to the latest version?

                  -

                  A: Yes, you can update Evertale free apk to the latest version by downloading it from [Games.lol] again. However, you may lose your progress or data if you uninstall the previous version of the game. To avoid this, you can backup your data using a cloud service or a file manager app.

                  -

                  Q: Can I transfer my data from Evertale free apk to the official version of the game?

                  -

                  A: No, you cannot transfer your data from Evertale free apk to the official version of the game. The two versions of the game have different servers and databases, and they are not compatible with each other. If you want to play the official version of the game, you will have to start from scratch.

                  -

                  I hope this article has helped you learn more about Evertale free apk and how to download and play it. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and have fun playing Evertale!

                  197e85843d
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/pretrain_t5/pretrain_randeng_t5_char_700M.sh b/spaces/skf15963/summary/fengshen/examples/pretrain_t5/pretrain_randeng_t5_char_700M.sh deleted file mode 100644 index 5b3b2c6c87831ebce78d4f7e0ed133b7a8468ba2..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/pretrain_t5/pretrain_randeng_t5_char_700M.sh +++ /dev/null @@ -1,129 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=pretrain_randeng_t5_char_700M -#SBATCH --nodes=2 -#SBATCH --ntasks-per-node=8 -#SBATCH --gres=gpu:8 # number of gpus -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH -o /cognitive_comp/ganruyi/experiments/randeng_t5_char_700M/%x-%j.log -#SBATCH -e /cognitive_comp/ganruyi/experiments/randeng_t5_char_700M/%x-%j.err - -set -x -e - -echo "START TIME: $(date)" -MICRO_BATCH_SIZE=8 -ROOT_DIR=/cognitive_comp/ganruyi/experiments/randeng_t5_char_700M/ -if [ ! -d ${ROOT_DIR} ];then - mkdir ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -ZERO_STAGE=1 - -config_json="$ROOT_DIR/ds_config.randeng_t5_char_700M.$SLURM_JOBID.json" -export MASTER_PORT=$[RANDOM%10000+30000] -# export CUDA_VISIBLE_DEVICES='2,5' - -cat < $config_json -{ - "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE}, - "steps_per_print": 100, - "gradient_clipping": 1.0, - "zero_optimization": { - "stage": $ZERO_STAGE, - "contiguous_gradients": false, - "overlap_comm": true, - "reduce_scatter": true, - "reduce_bucket_size": 50000000, - "allgather_bucket_size": 500000000 - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 1e-4, - "weight_decay": 1e-2 - } - }, - "scheduler": { - "params": { - "warmup_max_lr": 1e-04, - "warmup_min_lr": 1e-05, - "total_num_steps": 400000, - "warmup_num_steps" : 10000 - }, - "type": "WarmupDecayLR" - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions -# strategy=ddp -strategy=deepspeed_stage_1 - -TRAINER_ARGS=" - --max_epochs 1 \ - --gpus 8 \ - --num_nodes 2 \ - --strategy ${strategy} \ - --default_root_dir $ROOT_DIR \ - --dirpath $ROOT_DIR/ckpt \ - --save_top_k 3 \ - --every_n_train_steps 100000 \ - --monitor train_loss \ - --mode min \ - --save_last \ - --val_check_interval 0.1 \ - --dataset_num_workers 4 \ - --dataloader_num_workers 4 \ - --replace_sampler_ddp False \ - --accumulate_grad_batches 2 \ -" -# --accumulate_grad_batches 8 \ -DATA_DIR=wudao_180g_bert_tokenized_512 - -DATA_ARGS=" - --train_batchsize $MICRO_BATCH_SIZE \ - --valid_batchsize $MICRO_BATCH_SIZE \ - --train_data_path ${DATA_DIR} \ - --train_split_size 0.999 \ - --max_seq_length 512 \ -" - -MODEL_ARGS=" - --pretrained_model_path /cognitive_comp/ganruyi/experiments/randeng_t5_char_700M/randeng_t5_char_700M \ - --tokenizer_type bert_tokenizer \ -" - -SCRIPTS_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/pretrain_t5/pretrain_t5.py - -export CMD=" \ - $SCRIPTS_PATH \ - $TRAINER_ARGS \ - $MODEL_ARGS \ - $DATA_ARGS \ - " - -echo $CMD -# /home/ganruyi/anaconda3/bin/python $CMD 
-SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH bash -c '/home/ganruyi/anaconda3/bin/python $CMD' - -# source activate base -# python $CMD -# srun --nodes=1 --gres=gpu:8 --ntasks-per-node=8 --cpus-per-task=30 --jobid=171866 -e %x-%j.err -o %x-%j.log python $CMD - diff --git a/spaces/skf15963/summary/fengshen/models/longformer/tokenization_longformer.py b/spaces/skf15963/summary/fengshen/models/longformer/tokenization_longformer.py deleted file mode 100644 index 16b3452ec7545e39b9ef1de276cf1fe8111a35fa..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/models/longformer/tokenization_longformer.py +++ /dev/null @@ -1,16 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from transformers import BertTokenizer as LongformerTokenizer diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/archs/arch_util.py b/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/archs/arch_util.py deleted file mode 100644 index bad45ab34e901c47fb539152fca714a3795b0de2..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/archs/arch_util.py +++ /dev/null @@ -1,318 +0,0 @@ -import collections.abc -import math -import torch -import torchvision -import warnings -from distutils.version import LooseVersion -from itertools import repeat -from torch import nn as nn -from torch.nn import functional as F -from torch.nn import init as init -from torch.nn.modules.batchnorm import _BatchNorm - -from basicsr.ops.dcn import ModulatedDeformConvPack, modulated_deform_conv -from basicsr.utils import get_root_logger - - -@torch.no_grad() -def default_init_weights(module_list, scale=1, bias_fill=0, **kwargs): - """Initialize network weights. - - Args: - module_list (list[nn.Module] | nn.Module): Modules to be initialized. - scale (float): Scale initialized weights, especially for residual - blocks. Default: 1. - bias_fill (float): The value to fill bias. Default: 0 - kwargs (dict): Other arguments for initialization function. - """ - if not isinstance(module_list, list): - module_list = [module_list] - for module in module_list: - for m in module.modules(): - if isinstance(m, nn.Conv2d): - init.kaiming_normal_(m.weight, **kwargs) - m.weight.data *= scale - if m.bias is not None: - m.bias.data.fill_(bias_fill) - elif isinstance(m, nn.Linear): - init.kaiming_normal_(m.weight, **kwargs) - m.weight.data *= scale - if m.bias is not None: - m.bias.data.fill_(bias_fill) - elif isinstance(m, _BatchNorm): - init.constant_(m.weight, 1) - if m.bias is not None: - m.bias.data.fill_(bias_fill) - - -def make_layer(basic_block, num_basic_block, **kwarg): - """Make layers by stacking the same blocks. - - Args: - basic_block (nn.module): nn.module class for basic block. - num_basic_block (int): number of blocks. - - Returns: - nn.Sequential: Stacked blocks in nn.Sequential. 
- """ - layers = [] - for _ in range(num_basic_block): - layers.append(basic_block(**kwarg)) - return nn.Sequential(*layers) - - -class ResidualBlockNoBN(nn.Module): - """Residual block without BN. - - It has a style of: - ---Conv-ReLU-Conv-+- - |________________| - - Args: - num_feat (int): Channel number of intermediate features. - Default: 64. - res_scale (float): Residual scale. Default: 1. - pytorch_init (bool): If set to True, use pytorch default init, - otherwise, use default_init_weights. Default: False. - """ - - def __init__(self, num_feat=64, res_scale=1, pytorch_init=False): - super(ResidualBlockNoBN, self).__init__() - self.res_scale = res_scale - self.conv1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True) - self.conv2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True) - self.relu = nn.ReLU(inplace=True) - - if not pytorch_init: - default_init_weights([self.conv1, self.conv2], 0.1) - - def forward(self, x): - identity = x - out = self.conv2(self.relu(self.conv1(x))) - return identity + out * self.res_scale - - -class Upsample(nn.Sequential): - """Upsample module. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - """ - - def __init__(self, scale, num_feat): - m = [] - if (scale & (scale - 1)) == 0: # scale = 2^n - for _ in range(int(math.log(scale, 2))): - m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(2)) - elif scale == 3: - m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(3)) - else: - raise ValueError(f'scale {scale} is not supported. Supported scales: 2^n and 3.') - super(Upsample, self).__init__(*m) - - -def flow_warp(x, flow, interp_mode='bilinear', padding_mode='zeros', align_corners=True): - """Warp an image or feature map with optical flow. - - Args: - x (Tensor): Tensor with size (n, c, h, w). - flow (Tensor): Tensor with size (n, h, w, 2), normal value. - interp_mode (str): 'nearest' or 'bilinear'. Default: 'bilinear'. - padding_mode (str): 'zeros' or 'border' or 'reflection'. - Default: 'zeros'. - align_corners (bool): Before pytorch 1.3, the default value is - align_corners=True. After pytorch 1.3, the default value is - align_corners=False. Here, we use the True as default. - - Returns: - Tensor: Warped image or feature map. - """ - assert x.size()[-2:] == flow.size()[1:3] - _, _, h, w = x.size() - # create mesh grid - grid_y, grid_x = torch.meshgrid(torch.arange(0, h).type_as(x), torch.arange(0, w).type_as(x)) - grid = torch.stack((grid_x, grid_y), 2).float() # W(x), H(y), 2 - grid.requires_grad = False - - vgrid = grid + flow - # scale grid to [-1,1] - vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(w - 1, 1) - 1.0 - vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(h - 1, 1) - 1.0 - vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3) - output = F.grid_sample(x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode, align_corners=align_corners) - - # TODO, what if align_corners=False - return output - - -def resize_flow(flow, size_type, sizes, interp_mode='bilinear', align_corners=False): - """Resize a flow according to ratio or shape. - - Args: - flow (Tensor): Precomputed flow. shape [N, 2, H, W]. - size_type (str): 'ratio' or 'shape'. - sizes (list[int | float]): the ratio for resizing or the final output - shape. - 1) The order of ratio should be [ratio_h, ratio_w]. For - downsampling, the ratio should be smaller than 1.0 (i.e., ratio - < 1.0). For upsampling, the ratio should be larger than 1.0 (i.e., - ratio > 1.0). 
- 2) The order of output_size should be [out_h, out_w]. - interp_mode (str): The mode of interpolation for resizing. - Default: 'bilinear'. - align_corners (bool): Whether align corners. Default: False. - - Returns: - Tensor: Resized flow. - """ - _, _, flow_h, flow_w = flow.size() - if size_type == 'ratio': - output_h, output_w = int(flow_h * sizes[0]), int(flow_w * sizes[1]) - elif size_type == 'shape': - output_h, output_w = sizes[0], sizes[1] - else: - raise ValueError(f'Size type should be ratio or shape, but got type {size_type}.') - - input_flow = flow.clone() - ratio_h = output_h / flow_h - ratio_w = output_w / flow_w - input_flow[:, 0, :, :] *= ratio_w - input_flow[:, 1, :, :] *= ratio_h - resized_flow = F.interpolate( - input=input_flow, size=(output_h, output_w), mode=interp_mode, align_corners=align_corners) - return resized_flow - - -# TODO: may write a cpp file -def pixel_unshuffle(x, scale): - """ Pixel unshuffle. - - Args: - x (Tensor): Input feature with shape (b, c, hh, hw). - scale (int): Downsample ratio. - - Returns: - Tensor: the pixel unshuffled feature. - """ - b, c, hh, hw = x.size() - out_channel = c * (scale**2) - assert hh % scale == 0 and hw % scale == 0 - h = hh // scale - w = hw // scale - x_view = x.view(b, c, h, scale, w, scale) - return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w) - - -class DCNv2Pack(ModulatedDeformConvPack): - """Modulated deformable conv for deformable alignment. - - Different from the official DCNv2Pack, which generates offsets and masks - from the preceding features, this DCNv2Pack takes another different - features to generate offsets and masks. - - Ref: - Delving Deep into Deformable Alignment in Video Super-Resolution. - """ - - def forward(self, x, feat): - out = self.conv_offset(feat) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - - offset_absmean = torch.mean(torch.abs(offset)) - if offset_absmean > 50: - logger = get_root_logger() - logger.warning(f'Offset abs mean is {offset_absmean}, larger than 50.') - - if LooseVersion(torchvision.__version__) >= LooseVersion('0.9.0'): - return torchvision.ops.deform_conv2d(x, offset, self.weight, self.bias, self.stride, self.padding, - self.dilation, mask) - else: - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, - self.dilation, self.groups, self.deformable_groups) - - -def _no_grad_trunc_normal_(tensor, mean, std, a, b): - # From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py - # Cut & paste from PyTorch official master until it's in a few official releases - RW - # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - low = norm_cdf((a - mean) / std) - up = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [low, up], then translate to - # [2l-1, 2u-1]. 
- tensor.uniform_(2 * low - 1, 2 * up - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.): - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. - - From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py - - The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. - - Args: - tensor: an n-dimensional `torch.Tensor` - mean: the mean of the normal distribution - std: the standard deviation of the normal distribution - a: the minimum cutoff value - b: the maximum cutoff value - - Examples: - >>> w = torch.empty(3, 5) - >>> nn.init.trunc_normal_(w) - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) - - -# From PyTorch -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple \ No newline at end of file diff --git a/spaces/skytnt/moe-tts/text/cantonese.py b/spaces/skytnt/moe-tts/text/cantonese.py deleted file mode 100644 index 32eae72ef7eb43d493da6d6f75dd46176d0e8808..0000000000000000000000000000000000000000 --- a/spaces/skytnt/moe-tts/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('chinese_dialect_lexicons/jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! 
', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/society-ethics/Average_diffusion_faces/README.md b/spaces/society-ethics/Average_diffusion_faces/README.md deleted file mode 100644 index 0b7648443b399db8fc2da945d0b4da0ae1b66d41..0000000000000000000000000000000000000000 --- a/spaces/society-ethics/Average_diffusion_faces/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Average Diffusion Faces -emoji: 🚀 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/preprocessing/denoiser/pretrained.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/preprocessing/denoiser/pretrained.py deleted file mode 100644 index 2fa846075b6872cdcc0baebca0b9acbb9ffcd287..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/preprocessing/denoiser/pretrained.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# author: adefossez - -import logging - -import torch.hub - -from .demucs import Demucs -from .utils import deserialize_model - -logger = logging.getLogger(__name__) -ROOT = "https://dl.fbaipublicfiles.com/adiyoss/denoiser/" -DNS_48_URL = ROOT + "dns48-11decc9d8e3f0998.th" -DNS_64_URL = ROOT + "dns64-a7761ff99a7d5bb6.th" -MASTER_64_URL = ROOT + "master64-8a5dfb4bb92753dd.th" - - -def _demucs(pretrained, url, **kwargs): - model = Demucs(**kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url(url, map_location='cpu') - model.load_state_dict(state_dict) - return model - - -def dns48(pretrained=True): - return _demucs(pretrained, DNS_48_URL, hidden=48) - - -def dns64(pretrained=True): - return _demucs(pretrained, DNS_64_URL, hidden=64) - - -def master64(pretrained=True): - return _demucs(pretrained, MASTER_64_URL, hidden=64) - - -def add_model_flags(parser): - group = parser.add_mutually_exclusive_group(required=False) - group.add_argument( - "-m", "--model_path", help="Path to local trained model." - ) - group.add_argument( - "--dns48", action="store_true", - help="Use pre-trained real time H=48 model trained on DNS." - ) - group.add_argument( - "--dns64", action="store_true", - help="Use pre-trained real time H=64 model trained on DNS." - ) - group.add_argument( - "--master64", action="store_true", - help="Use pre-trained real time H=64 model trained on DNS and Valentini." - ) - - -def get_model(args): - """ - Load local model package or torchhub pre-trained model. - """ - if args.model_path: - logger.info("Loading model from %s", args.model_path) - pkg = torch.load(args.model_path) - model = deserialize_model(pkg) - elif args.dns64: - logger.info("Loading pre-trained real time H=64 model trained on DNS.") - model = dns64() - elif args.master64: - logger.info( - "Loading pre-trained real time H=64 model trained on DNS and Valentini." 
- ) - model = master64() - else: - logger.info("Loading pre-trained real time H=48 model trained on DNS.") - model = dns48() - logger.debug(model) - return model diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/transform_eos_lang_pair_dataset.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/transform_eos_lang_pair_dataset.py deleted file mode 100644 index e21144a88e0038c2f35711333a40315613004256..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/transform_eos_lang_pair_dataset.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from typing import Optional - -import torch - -from . import FairseqDataset - - -class TransformEosLangPairDataset(FairseqDataset): - """A :class:`~fairseq.data.FairseqDataset` wrapper that transform bos on - collated samples of language pair dataset. - - Note that the transformation is applied in :func:`collater`. - - Args: - dataset (~fairseq.data.FairseqDataset): dataset that collates sample into - LanguagePairDataset schema - src_eos (int): original source end-of-sentence symbol index to be replaced - new_src_eos (int, optional): new end-of-sentence symbol index to replace source eos symbol - tgt_bos (int, optional): original target beginning-of-sentence symbol index to be replaced - new_tgt_bos (int, optional): new beginning-of-sentence symbol index to replace at the - beginning of 'prev_output_tokens' - """ - - def __init__( - self, - dataset: FairseqDataset, - src_eos: int, - new_src_eos: Optional[int] = None, - tgt_bos: Optional[int] = None, - new_tgt_bos: Optional[int] = None, - ): - self.dataset = dataset - self.src_eos = src_eos - self.new_src_eos = new_src_eos - self.tgt_bos = tgt_bos - self.new_tgt_bos = new_tgt_bos - - def __getitem__(self, index): - return self.dataset[index] - - def __len__(self): - return len(self.dataset) - - def collater(self, samples, **extra_args): - samples = self.dataset.collater(samples, **extra_args) - if len(samples) == 0: - return samples - - if 'net_input' not in samples: - return samples - - if self.new_src_eos is not None: - if self.dataset.left_pad_source: - assert ( - samples["net_input"]["src_tokens"][:, -1] != self.src_eos - ).sum() == 0 - samples["net_input"]["src_tokens"][:, -1] = self.new_src_eos - else: - eos_idx = samples["net_input"]["src_lengths"] - 1 - assert ( - samples["net_input"]["src_tokens"][ - torch.arange(eos_idx.size(0)), eos_idx - ] - != self.src_eos - ).sum() == 0 - eos_idx = eos_idx.resize_(len(samples["net_input"]["src_lengths"]), 1) - samples["net_input"]["src_tokens"].scatter_( - 1, eos_idx, self.new_src_eos - ) - - if ( - self.new_tgt_bos is not None - and "prev_output_tokens" in samples["net_input"] - ): - if self.dataset.left_pad_target: - # TODO: support different padding direction on target side - raise NotImplementedError( - "TransformEosLangPairDataset does not implement --left-pad-target True option" - ) - else: - assert ( - samples["net_input"]["prev_output_tokens"][:, 0] != self.tgt_bos - ).sum() == 0 - samples["net_input"]["prev_output_tokens"][:, 0] = self.new_tgt_bos - - return samples - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - return self.dataset.size(index) - - @property - def sizes(self): - # dataset.sizes can be a dynamically 
computed sizes: - return self.dataset.sizes - - def ordered_indices(self): - return self.dataset.ordered_indices() - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) diff --git a/spaces/stallbr/microsoft-BioGPT-Large-PubMedQA/README.md b/spaces/stallbr/microsoft-BioGPT-Large-PubMedQA/README.md deleted file mode 100644 index 7259784b84ffb2b247659aab06d2f0eb75e77afb..0000000000000000000000000000000000000000 --- a/spaces/stallbr/microsoft-BioGPT-Large-PubMedQA/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Microsoft BioGPT Large PubMedQA -emoji: 🏆 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.44.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/stomexserde/gpt4-ui/Examples/Covadis Pour Autocad 2013.md b/spaces/stomexserde/gpt4-ui/Examples/Covadis Pour Autocad 2013.md deleted file mode 100644 index 703a0620e9a2cf7ac6c2ab99bb7140b97e3b7b80..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Covadis Pour Autocad 2013.md +++ /dev/null @@ -1,32 +0,0 @@ -

                  Covadis Pour Autocad 2013: Un logiciel de topographie et d'infrastructure VRD

                  -

                  Covadis est un logiciel développé par GEOMEDIA qui fonctionne sous AutoCAD. Il s'agit d'un applicatif de topographie, de terrassement et d'infrastructure VRD (voirie et réseaux divers) dédié aux géomètres, aux bureaux d’études, aux entreprises de BTP (bâtiment et travaux publics) et aux collectivités.

                  -

                  Covadis permet de réaliser des projets de topographie, de nivellement, de cubature, de lotissement, de route, d'assainissement, d'eau potable, d'éclairage public, etc. Il offre des fonctionnalités de dessin, de calcul, de modélisation 3D, de simulation et de production de documents.

                  -

                  Covadis Pour Autocad 2013


                  Download Filehttps://urlgoal.com/2uI5MT



                  -

                  Covadis est compatible avec différentes versions d'AutoCAD, allant de la version 2004 à la version 2016. La version 13.0h de Covadis est compatible avec AutoCAD 2013, en 32 ou 64 bits. Il faut toutefois respecter les numéros de série et les clés d'activation correspondant à chaque version.

                  -

                  Pour installer Covadis sur AutoCAD 2013, il faut suivre les étapes suivantes:

                  -

                  -
                    -
                  1. Télécharger et installer AutoCAD 2013 en utilisant le numéro de série 379-45228081 pour le 32 bits ou 371-5718439 pour le 64 bits.
                  2. -
                  3. Télécharger et installer le pack français pour AutoCAD 2013.
                  4. -
                  5. Cracker AutoCAD 2013 en utilisant le keygen pour 32 ou 64 bits.
                  6. -
                  7. Télécharger et installer Covadis version 13.0h en choisissant Covadis 32 bits ou Covadis 64 bits selon le cas.
                  8. -
                  9. Cracker Covadis version 13.0h en utilisant le driver emul_32 ou emul_64 selon le cas.
                  10. -
                  11. Copier les fichiers Covsrvechelle19.arx dans les répertoires C/programme/geomedia SA/Covadis/programmes et C/programme/geomedia SA/Covadis/patch/programmes.
                  12. -
                  -

                  Une fois l'installation terminée, il faut lancer AutoCAD et charger l'application Covadis depuis le menu Chargement des applications. On peut alors accéder aux différentes commandes et menus de Covadis depuis la barre d'outils ou la barre d'état.


                  Covadis Pour Autocad 2013: Un logiciel de topographie et d'infrastructure VRD

                  -

                  ...

                  -

                  Exemples de projets réalisés avec Covadis et AutoCAD 2013

                  -

                  Covadis et AutoCAD 2013 sont des outils puissants qui permettent de réaliser des projets variés et complexes dans le domaine de la topographie et de l'infrastructure VRD. Voici quelques exemples de projets qui illustrent les possibilités offertes par ces logiciels:

                  -
                    -
                  • Un projet de lotissement comprenant la conception du plan masse, le calcul des surfaces, le tracé des voiries, le dimensionnement des réseaux d'assainissement et d'eau potable, la production des plans de vente et des documents administratifs.
                  • -
                  • Un projet de route comprenant la définition du tracé en plan et en profil en long, le calcul des déblais et remblais, le dimensionnement des ouvrages d'art, la modélisation 3D du terrain et de la chaussée, la production des plans d'exécution et des métrés.
                  • -
                  • Un projet de rond-point comprenant la conception géométrique du carrefour, le calcul des rayons de courbure, le dimensionnement des îlots et des trottoirs, la production des plans de signalisation et de marquage au sol.
                  • -
                  -

                  Ces exemples ne sont pas exhaustifs et montrent seulement une partie des fonctionnalités de Covadis et AutoCAD 2013. Ces logiciels permettent également de réaliser des projets d'aménagement urbain, de terrassement, d'éclairage public, de topographie générale, etc.

                  7196e7f11a
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download Cs 1.6 Lh 2013 Esk.md b/spaces/stomexserde/gpt4-ui/Examples/Download Cs 1.6 Lh 2013 Esk.md deleted file mode 100644 index 6e92cf0aa249486dbdf6c061005471f73b73e502..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Download Cs 1.6 Lh 2013 Esk.md +++ /dev/null @@ -1,54 +0,0 @@ - -

                  How to Download Cs 1.6 Lh 2013 Esk - The Ultimate Guide

                  - -

                  If you are a fan of Counter-Strike 1.6, you might have heard of Cs 1.6 Lh 2013 Esk, a popular version of the game that was created by eSportsKosova. This version has many features and improvements that make it stand out from other versions of Cs 1.6. In this article, we will show you how to download Cs 1.6 Lh 2013 Esk and enjoy its benefits.

                  - -

                  What is Cs 1.6 Lh 2013 Esk?

                  - -

                  Cs 1.6 Lh 2013 Esk is a modified version of Counter-Strike 1.6 that was released in 2013 by eSportsKosova, a website dedicated to promoting esports in Kosovo and the Balkans. Cs 1.6 Lh 2013 Esk has many features that make it different from other versions of Cs 1.6, such as:

                  -

                  Download Cs 1.6 Lh 2013 Esk


                  Download File > https://urlgoal.com/2uI7pk



                  - -
                    -
                  • Engine 1.1.2.6 build 4554;
                  • -
                  • Non-Steam Patch Version 42.1 (Orange box);
                  • -
                  • Protocol 48;
                  • -
                  • Compatibility with Windows XP, VISTA, Win7, Win8, Win8.1, Win10;
                  • -
                  • Playable on Internet and LAN;
                  • -
                  • Working server browser with the internet, favorite, and LAN tabs;
                  • -
                  • CSS weapons models for CS 1.6;
                  • -
                  • REVOLUTION Emulator 9.85;
                  • -
                  • Fenix LT MasterServer (play online in CS 1.6 servers);
                  • -
                  • Dproto 0.9.179;
                  • -
                  • zBots included;
                  • -
                  • LongHorn GUI v4 (Graphical User Interface);
                  • -
                  • Some HUD redesign with default radar;
                  • -
                  • GameMenu fonts and colors (High Quality);
                  • -
                  • Spectator banner Professional look;
                  • -
                  • Professional “H” commander menu (zBot’s commands);
                  • -
                  • Icon Counter-Strike 1.6 LH 2013;
                  • -
                  • New game startup song;
                  • -
                  • More spray logo;
                  • -
                  • The client can join P47 as well as P48 servers (LAN);
                  • -
                  • AMX Mod X 1.8.2;
                  • -
                  • BackUp some ddl files;
                  • -
                  • Fast and easy to install.
                  • -
                  - -

                  As you can see, Cs 1.6 Lh 2013 Esk has many advantages over other versions of Cs 1.6, such as better graphics, more options, more servers, and more stability.

                  - -

                  How to Download Cs 1.6 Lh 2013 Esk?

                  - -

                  If you want to download Cs 1.6 Lh 2013 Esk, you have several options to choose from. You can download it from the official website of eSportsKosova[^1^], or from other websites that offer it for free[^2^]. You can also download it from torrent sites[^3^], or from file-sharing platforms[^4^]. However, you should be careful when downloading files from unknown sources, as they might contain viruses or malware that could harm your computer.

                  - -

                  The easiest and safest way to download Cs 1.6 Lh 2013 Esk is to use the direct link from eSportsKosova[^1^]. This link will take you to a page where you can download the setup file for Cs 1.6 Lh 2013 Esk, which has a size of about 234 MB. You just need to click on the download button and wait for the file to be downloaded to your computer.

                  - -

                  How to Install Cs 1.6 Lh 2013 Esk?

                  - -

                  Once you have downloaded the setup file for Cs 1.6 Lh 2013 Esk, you need to install it on your computer. To do so, follow these steps:

                  - -
                    -
                  1. Double-click on the setup file to launch the installation wizard.
                  2. -
                  3. Select your preferred language and click

                    -

                    81aa517590
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Idecad Statik Full Crack Indir __EXCLUSIVE__.md b/spaces/stomexserde/gpt4-ui/Examples/Idecad Statik Full Crack Indir __EXCLUSIVE__.md deleted file mode 100644 index 9d5b3cd73d38aa1658477d5b318b883311d03417..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Idecad Statik Full Crack Indir __EXCLUSIVE__.md +++ /dev/null @@ -1,35 +0,0 @@ - -Here is a possible title and article for the keyword "Idecad Statik Full Crack Indir": - -

                    Idecad Statik Full Crack Indir: How to Download and Install the Latest Version of Idecad Statik

                    -

                    Idecad Statik is a powerful software for structural design and analysis of reinforced concrete buildings. It allows you to create 3D models, perform calculations, generate reports, and export drawings in various formats. If you are looking for a way to download and install the latest version of Idecad Statik with full crack, you have come to the right place.

                    -

                    In this article, we will show you how to get Idecad Statik Full Crack Indir from a reliable source, and how to install it on your computer without any problems. We will also give you some tips on how to use the software effectively and safely.

                    -

                    Idecad Statik Full Crack Indir


                    DOWNLOADhttps://urlgoal.com/2uI6Xc



                    -

                    Where to Download Idecad Statik Full Crack Indir

                    -

                    There are many websites that claim to offer Idecad Statik Full Crack Indir, but not all of them are trustworthy. Some of them may contain viruses, malware, or spyware that can harm your computer or steal your personal information. Others may provide outdated or incomplete versions of the software that may not work properly or cause errors.

                    -

                    To avoid these risks, we recommend you to download Idecad Statik Full Crack Indir from a reputable website that has positive reviews and feedback from other users. One such website is Full Programlar Indir, which offers the latest version of Idecad Statik (v5.14) in Turkish language. This website also provides detailed instructions on how to install and activate the software with full crack.

                    -

                    -

                    How to Install Idecad Statik Full Crack Indir

                    -

                    Once you have downloaded Idecad Statik Full Crack Indir from Full Programlar Indir, you can follow these steps to install it on your computer:

                    -
                      -
                    1. Extract the downloaded file using WinRAR or any other file compression tool.
                    2. -
                    3. Run the setup.exe file as administrator and follow the installation wizard.
                    4. -
                    5. When the installation is complete, do not run the program yet.
                    6. -
                    7. Copy the crack file (ideCADStatik.exe) from the crack folder and paste it into the installation directory (usually C:\Program Files\ideCAD\ideCAD Statik).
                    8. -
                    9. Replace the original file when prompted.
                    10. -
                    11. Run the program as administrator and enjoy using it with full features.
                    12. -
                    -

                    How to Use Idecad Statik Effectively and Safely

                    -

                    Idecad Statik is a professional software that requires some knowledge and experience in structural engineering and design. To use it effectively and safely, you should follow these tips:

                    -
                      -
                    • Read the user manual and watch the tutorial videos that are available on the official website of Idecad (https://portal.idecad.com.tr/portal/tr/).
                    • -
                    • Keep your software updated to the latest version by checking for updates regularly.
                    • -
                    • Use a reliable antivirus program and scan your computer regularly for any potential threats.
                    • -
                    • Do not share your license key or crack file with anyone else.
                    • -
                    • Do not use the software for illegal or unethical purposes.
                    • -
                    -

                    Conclusion

                    -

                    Idecad Statik is a great software for structural design and analysis of reinforced concrete buildings. It can help you create 3D models, perform calculations, generate reports, and export drawings in various formats. However, it is not easy to find a working version of Idecad Statik with full crack on the internet.

                    -

                    In this article, we have shown you how to download and install Idecad Statik Full Crack Indir from a reliable website, and how to use it effectively and safely. We hope this article has been helpful for you. If you have any questions or comments, please feel free to leave them below.

                    7196e7f11a
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Kambakkht Ishq Hindi Dubbed 720p.md b/spaces/stomexserde/gpt4-ui/Examples/Kambakkht Ishq Hindi Dubbed 720p.md deleted file mode 100644 index 28a1a884176724edb791ce1d8f0359e619502fa8..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Kambakkht Ishq Hindi Dubbed 720p.md +++ /dev/null @@ -1,14 +0,0 @@ - -

                    Kambakkht Ishq: A Bollywood Romantic Comedy with Hollywood Stars

                    -

                    Kambakkht Ishq is a 2009 Hindi movie that stars Akshay Kumar and Kareena Kapoor as Viraj and Simrita, two people who hate each other but are forced to get married due to a twist of fate. The movie is a remake of the 2002 Tamil film Pammal K. Sambandam and features cameo appearances by Hollywood actors like Sylvester Stallone, Denise Richards, Brandon Routh and Holly Valance.

                    -

                    The movie revolves around the hilarious situations that arise when Simrita, a surgeon, accidentally leaves her watch inside Viraj's body during an operation. She tries to retrieve it without his knowledge, but he soon finds out and decides to make her life miserable. The movie also explores the themes of love, marriage and gender roles in a humorous way.

                    -

                    Kambakkht Ishq Hindi Dubbed 720p


                    Download Filehttps://urlgoal.com/2uIaDs



                    -

                    Kambakkht Ishq was a commercial success at the box office, earning over ₹840 million worldwide. It was also praised for its action sequences, music and performances by the lead actors. The movie was dubbed in Telugu as Khatarnak Ishq and in Tamil as Pammal K. Sambandam.

                    -

                    If you are looking for a fun and entertaining movie to watch with your family or friends, you can download Kambakkht Ishq Hindi Dubbed 720p from various online sources[^3^] [^4^]. You can also watch it online on streaming platforms like Eros Now[^1^] or Voot[^2^]. Enjoy the movie and let us know your feedback in the comments section below.

                    - -

                    Kambakkht Ishq received mixed reviews from critics, who praised the chemistry between Kumar and Kapoor, but criticized the plot, humor and direction. The movie has a rating of 3.8 out of 10 on IMDb[^2^] and 4 out of 10 on Movieguide[^3^], which also noted the strong Hindu content in the movie. The Hollywood Reporter called the movie "entirely lacking in wit" and said that "one wishes the filmmakers or stars would wink to let us know they're in on the joke"[^1^]. Masala gave the movie three stars out of five and said that "the film borders on the silly but the packaging has worked"[^4^].

                    -

                    Kambakkht Ishq also features some popular songs composed by Anu Malik, RDB and Salim-Sulaiman. The title track "Kambakkht Ishq" is sung by KK and Sunidhi Chauhan and features a rap by RDB. The song "Bebo" is sung by Alisha Chinai and has a catchy hook line. The song "Om Mangalam" is a traditional Hindu wedding chant mixed with techno beats. The song "Lakh Lakh" is a Punjabi folk song sung by Neeraj Shridhar. The soundtrack of the movie was well received by the audience and sold over 2 million copies.

                    -

                    Kambakkht Ishq is a movie that offers a blend of Bollywood and Hollywood elements, with a dash of romance, comedy and action. It may not be a masterpiece, but it is a fun watch for those who enjoy masala movies with glamorous stars and exotic locations. If you are one of them, you can download Kambakkht Ishq Hindi Dubbed 720p from various online sources[^3^] [^4^]. You can also watch it online on streaming platforms like Eros Now[^1^] or Voot[^2^]. Enjoy the movie and let us know your feedback in the comments section below.

                    -

                    7196e7f11a
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/sub314xxl/MusicGen/audiocraft/data/audio_utils.py b/spaces/sub314xxl/MusicGen/audiocraft/data/audio_utils.py deleted file mode 100644 index 76d4bc2a33ce722d879db2af33cd1336bd6b1fb3..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen/audiocraft/data/audio_utils.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import sys -import typing as tp - -import julius -import torch -import torchaudio - - -def convert_audio_channels(wav: torch.Tensor, channels: int = 2) -> torch.Tensor: - """Convert audio to the given number of channels. - - Args: - wav (torch.Tensor): Audio wave of shape [B, C, T]. - channels (int): Expected number of channels as output. - Returns: - torch.Tensor: Downmixed or unchanged audio wave [B, C, T]. - """ - *shape, src_channels, length = wav.shape - if src_channels == channels: - pass - elif channels == 1: - # Case 1: - # The caller asked 1-channel audio, and the stream has multiple - # channels, downmix all channels. - wav = wav.mean(dim=-2, keepdim=True) - elif src_channels == 1: - # Case 2: - # The caller asked for multiple channels, but the input file has - # a single channel, replicate the audio over all channels. - wav = wav.expand(*shape, channels, length) - elif src_channels >= channels: - # Case 3: - # The caller asked for multiple channels, and the input file has - # more channels than requested. In that case return the first channels. - wav = wav[..., :channels, :] - else: - # Case 4: What is a reasonable choice here? - raise ValueError('The audio file has less channels than requested but is not mono.') - return wav - - -def convert_audio(wav: torch.Tensor, from_rate: float, - to_rate: float, to_channels: int) -> torch.Tensor: - """Convert audio to new sample rate and number of audio channels. - """ - wav = julius.resample_frac(wav, int(from_rate), int(to_rate)) - wav = convert_audio_channels(wav, to_channels) - return wav - - -def normalize_loudness(wav: torch.Tensor, sample_rate: int, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, energy_floor: float = 2e-3): - """Normalize an input signal to a user loudness in dB LKFS. - Audio loudness is defined according to the ITU-R BS.1770-4 recommendation. - - Args: - wav (torch.Tensor): Input multichannel audio data. - sample_rate (int): Sample rate. - loudness_headroom_db (float): Target loudness of the output in dB LUFS. - loudness_compressor (bool): Uses tanh for soft clipping. - energy_floor (float): anything below that RMS level will not be rescaled. - Returns: - output (torch.Tensor): Loudness normalized output data. 
- """ - energy = wav.pow(2).mean().sqrt().item() - if energy < energy_floor: - return wav - transform = torchaudio.transforms.Loudness(sample_rate) - input_loudness_db = transform(wav).item() - # calculate the gain needed to scale to the desired loudness level - delta_loudness = -loudness_headroom_db - input_loudness_db - gain = 10.0 ** (delta_loudness / 20.0) - output = gain * wav - if loudness_compressor: - output = torch.tanh(output) - assert output.isfinite().all(), (input_loudness_db, wav.pow(2).mean().sqrt()) - return output - - -def _clip_wav(wav: torch.Tensor, log_clipping: bool = False, stem_name: tp.Optional[str] = None) -> None: - """Utility function to clip the audio with logging if specified.""" - max_scale = wav.abs().max() - if log_clipping and max_scale > 1: - clamp_prob = (wav.abs() > 1).float().mean().item() - print(f"CLIPPING {stem_name or ''} happening with proba (a bit of clipping is okay):", - clamp_prob, "maximum scale: ", max_scale.item(), file=sys.stderr) - wav.clamp_(-1, 1) - - -def normalize_audio(wav: torch.Tensor, normalize: bool = True, - strategy: str = 'peak', peak_clip_headroom_db: float = 1, - rms_headroom_db: float = 18, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, log_clipping: bool = False, - sample_rate: tp.Optional[int] = None, - stem_name: tp.Optional[str] = None) -> torch.Tensor: - """Normalize the audio according to the prescribed strategy (see after). - - Args: - wav (torch.Tensor): Audio data. - normalize (bool): if `True` (default), normalizes according to the prescribed - strategy (see after). If `False`, the strategy is only used in case clipping - would happen. - strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak', - i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square - with extra headroom to avoid clipping. 'clip' just clips. - peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy. - rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger - than the `peak_clip` one to avoid further clipping. - loudness_headroom_db (float): Target loudness for loudness normalization. - loudness_compressor (bool): If True, uses tanh based soft clipping. - log_clipping (bool): If True, basic logging on stderr when clipping still - occurs despite strategy (only for 'rms'). - sample_rate (int): Sample rate for the audio data (required for loudness). - stem_name (Optional[str]): Stem name for clipping logging. - Returns: - torch.Tensor: Normalized audio. - """ - scale_peak = 10 ** (-peak_clip_headroom_db / 20) - scale_rms = 10 ** (-rms_headroom_db / 20) - if strategy == 'peak': - rescaling = (scale_peak / wav.abs().max()) - if normalize or rescaling < 1: - wav = wav * rescaling - elif strategy == 'clip': - wav = wav.clamp(-scale_peak, scale_peak) - elif strategy == 'rms': - mono = wav.mean(dim=0) - rescaling = scale_rms / mono.pow(2).mean().sqrt() - if normalize or rescaling < 1: - wav = wav * rescaling - _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name) - elif strategy == 'loudness': - assert sample_rate is not None, "Loudness normalization requires sample rate." 
- wav = normalize_loudness(wav, sample_rate, loudness_headroom_db, loudness_compressor) - _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name) - else: - assert wav.abs().max() < 1 - assert strategy == '' or strategy == 'none', f"Unexpected strategy: '{strategy}'" - return wav - - -def f32_pcm(wav: torch.Tensor) -> torch.Tensor: - """Convert audio to float 32 bits PCM format. - """ - if wav.dtype.is_floating_point: - return wav - else: - assert wav.dtype == torch.int16 - return wav.float() / 2**15 - - -def i16_pcm(wav: torch.Tensor) -> torch.Tensor: - """Convert audio to int 16 bits PCM format. - - ..Warning:: There exist many formula for doing this convertion. None are perfect - due to the asymetry of the int16 range. One either have possible clipping, DC offset, - or inconsistancies with f32_pcm. If the given wav doesn't have enough headroom, - it is possible that `i16_pcm(f32_pcm)) != Identity`. - """ - if wav.dtype.is_floating_point: - assert wav.abs().max() <= 1 - candidate = (wav * 2 ** 15).round() - if candidate.max() >= 2 ** 15: # clipping would occur - candidate = (wav * (2 ** 15 - 1)).round() - return candidate.short() - else: - assert wav.dtype == torch.int16 - return wav diff --git a/spaces/subhc/Guess-What-Moves/determinism.py b/spaces/subhc/Guess-What-Moves/determinism.py deleted file mode 100644 index 2056e9c6d0d76d7c50aef3bf116b7aaf10d0d26a..0000000000000000000000000000000000000000 --- a/spaces/subhc/Guess-What-Moves/determinism.py +++ /dev/null @@ -1,24 +0,0 @@ -import os -lvl = int(os.environ.get('TRY_DETERMISM_LVL', '0')) -if lvl > 0: - print(f'Attempting to enable deterministic cuDNN and cuBLAS operations to lvl {lvl}') -if lvl >= 2: - # turn on deterministic operations - os.environ['CUBLAS_WORKSPACE_CONFIG'] = ":4096:8" #Need to set before torch gets loaded - import torch - # Since using unstable torch version, it looks like 1.12.0.devXXXXXXX - if torch.version.__version__ >= '1.12.0': - torch.use_deterministic_algorithms(True, warn_only=(lvl < 3)) - elif lvl >= 3: - torch.use_deterministic_algorithms(True) # This will throw errors if implementations are missing - else: - print(f"Torch verions is only {torch.version.__version__}, which will cause errors on lvl {lvl}") -if lvl >= 1: - import torch - if torch.cuda.is_available(): - torch.backends.cudnn.benchmark = False - - -def i_do_nothing_but_dont_remove_me_otherwise_things_break(): - """This exists to prevent formatters from treating this file as dead code""" - pass diff --git a/spaces/sudip1310/BANAO_Tiny_Shakespeare/README.md b/spaces/sudip1310/BANAO_Tiny_Shakespeare/README.md deleted file mode 100644 index afb26a74966d21fba2acf1c1a860663621362772..0000000000000000000000000000000000000000 --- a/spaces/sudip1310/BANAO_Tiny_Shakespeare/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BANAO Tiny Shakespeare -emoji: 🌍 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Guitarhero3pspcsodownload !EXCLUSIVE!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Guitarhero3pspcsodownload !EXCLUSIVE!.md deleted file mode 100644 index 0719e5d4082b7d288f859b645bbbcaaebe20d336..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Guitarhero3pspcsodownload !EXCLUSIVE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

                    guitarhero3pspcsodownload


                    Download File ––– https://cinurl.com/2uEYzi



                    - - 899543212b
                    -
                    -
                    -

                    diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Hoyle 2013 Card Puzzle And Board Games Torrent Bolero Cantante Litt !!TOP!!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Hoyle 2013 Card Puzzle And Board Games Torrent Bolero Cantante Litt !!TOP!!.md deleted file mode 100644 index ad4942f9b3c19accb0ef41941756aec2616fad04..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Hoyle 2013 Card Puzzle And Board Games Torrent Bolero Cantante Litt !!TOP!!.md +++ /dev/null @@ -1,14 +0,0 @@ -

                    Hoyle 2013 Card Puzzle And Board Games Torrent bolero cantante litt


                    DOWNLOAD ★★★★★ https://cinurl.com/2uEZ5k



                    -
                    -TORRENT ·.) X26 · 2010-11-10 00:22:24. Ciao, il mio codice è: 1.12958. Ciao, il mio codice è: 1.12958. Codice: 1.12958. Hoyle 2013 Card Puzzle And Board Games Torrent Bolero Cantante Litt Download Bolero Cantante Litt Download.ogm.img. It uses a hex grid similar to Sudoku. But instead of the horizontal and vertical numbering, it has numbered squares as well. Hoyle 2013 Card Puzzle And Board Games Torrent Bolero Cantante Litt Download. - -Torrent bolero cantante litt download - -EXE Download 1.100.000.0.3. The main difference from Sudoku, is that instead of placing a number in each of the numbers cells, you use the letters of the alphabet to fill in the cells. Hoyle 2013 Card Puzzle And Board Games Torrent Bolero Cantante Litt Download. The game was written by Scott Hoyle and released under the GPL license. And when the board is filled in correctly, you get the prize. Ciao, il mio codice è: 1.12958. Hoyle 2013 Card Puzzle And Board Games Torrent Bolero Cantante Litt Download. It's also possible to save the solution for later. How to play. Hoyle 2013 Card Puzzle And Board Games Torrent Bolero Cantante Litt Download. The game is a puzzle game that was developed by a team at Need2Solve. The game came to my attention through a series of tweets, as a reminder of something called: I have been looking for a way to play Hoyle, and that should work on any smartphone, with no fuss at all, since it is fully compatible with any smartphone, and is available as a standalone App for free. Hoyle 2013 Card Puzzle And Board Games Torrent Bolero Cantante Litt Download. Hoyle, the title for Scott Hoyle's latest game project, has been available for a while now. Codice: 1.12958. Ciao, il mio codice è: 1.12958. How to play. Ciao, il mio codice è: 1.12958. - -Hoyle 2013 Card Puzzle And Board Games Torrent Bolero Cantante Litt Download - -Torrnent Bolero Cantante Litt Download. The 4fefd39f24
                    -
                    -
                    -

                    diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/voxelize.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/voxelize.py deleted file mode 100644 index ca3226a4fbcbfe58490fa2ea8e1c16b531214121..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/voxelize.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn -from torch.autograd import Function -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['dynamic_voxelize_forward', 'hard_voxelize_forward']) - - -class _Voxelization(Function): - - @staticmethod - def forward(ctx, - points, - voxel_size, - coors_range, - max_points=35, - max_voxels=20000): - """Convert kitti points(N, >=3) to voxels. - - Args: - points (torch.Tensor): [N, ndim]. Points[:, :3] contain xyz points - and points[:, 3:] contain other information like reflectivity. - voxel_size (tuple or float): The size of voxel with the shape of - [3]. - coors_range (tuple or float): The coordinate range of voxel with - the shape of [6]. - max_points (int, optional): maximum points contained in a voxel. if - max_points=-1, it means using dynamic_voxelize. Default: 35. - max_voxels (int, optional): maximum voxels this function create. - for second, 20000 is a good choice. Users should shuffle points - before call this function because max_voxels may drop points. - Default: 20000. - - Returns: - voxels_out (torch.Tensor): Output voxels with the shape of [M, - max_points, ndim]. Only contain points and returned when - max_points != -1. - coors_out (torch.Tensor): Output coordinates with the shape of - [M, 3]. - num_points_per_voxel_out (torch.Tensor): Num points per voxel with - the shape of [M]. Only returned when max_points != -1. - """ - if max_points == -1 or max_voxels == -1: - coors = points.new_zeros(size=(points.size(0), 3), dtype=torch.int) - ext_module.dynamic_voxelize_forward(points, coors, voxel_size, - coors_range, 3) - return coors - else: - voxels = points.new_zeros( - size=(max_voxels, max_points, points.size(1))) - coors = points.new_zeros(size=(max_voxels, 3), dtype=torch.int) - num_points_per_voxel = points.new_zeros( - size=(max_voxels, ), dtype=torch.int) - voxel_num = ext_module.hard_voxelize_forward( - points, voxels, coors, num_points_per_voxel, voxel_size, - coors_range, max_points, max_voxels, 3) - # select the valid voxels - voxels_out = voxels[:voxel_num] - coors_out = coors[:voxel_num] - num_points_per_voxel_out = num_points_per_voxel[:voxel_num] - return voxels_out, coors_out, num_points_per_voxel_out - - -voxelization = _Voxelization.apply - - -class Voxelization(nn.Module): - """Convert kitti points(N, >=3) to voxels. - - Please refer to `PVCNN `_ for more - details. - - Args: - voxel_size (tuple or float): The size of voxel with the shape of [3]. - point_cloud_range (tuple or float): The coordinate range of voxel with - the shape of [6]. - max_num_points (int): maximum points contained in a voxel. if - max_points=-1, it means using dynamic_voxelize. - max_voxels (int, optional): maximum voxels this function create. - for second, 20000 is a good choice. Users should shuffle points - before call this function because max_voxels may drop points. - Default: 20000. 
- """ - - def __init__(self, - voxel_size, - point_cloud_range, - max_num_points, - max_voxels=20000): - super().__init__() - - self.voxel_size = voxel_size - self.point_cloud_range = point_cloud_range - self.max_num_points = max_num_points - if isinstance(max_voxels, tuple): - self.max_voxels = max_voxels - else: - self.max_voxels = _pair(max_voxels) - - point_cloud_range = torch.tensor( - point_cloud_range, dtype=torch.float32) - voxel_size = torch.tensor(voxel_size, dtype=torch.float32) - grid_size = (point_cloud_range[3:] - - point_cloud_range[:3]) / voxel_size - grid_size = torch.round(grid_size).long() - input_feat_shape = grid_size[:2] - self.grid_size = grid_size - # the origin shape is as [x-len, y-len, z-len] - # [w, h, d] -> [d, h, w] - self.pcd_shape = [*input_feat_shape, 1][::-1] - - def forward(self, input): - if self.training: - max_voxels = self.max_voxels[0] - else: - max_voxels = self.max_voxels[1] - - return voxelization(input, self.voxel_size, self.point_cloud_range, - self.max_num_points, max_voxels) - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += 'voxel_size=' + str(self.voxel_size) - s += ', point_cloud_range=' + str(self.point_cloud_range) - s += ', max_num_points=' + str(self.max_num_points) - s += ', max_voxels=' + str(self.max_voxels) - s += ')' - return s diff --git a/spaces/t110-ai-admin/InspectLens/video_llama/datasets/datasets/dataloader_utils.py b/spaces/t110-ai-admin/InspectLens/video_llama/datasets/datasets/dataloader_utils.py deleted file mode 100644 index 3e2f574e24d2a32a18533a11492cfd481ff2cfbb..0000000000000000000000000000000000000000 --- a/spaces/t110-ai-admin/InspectLens/video_llama/datasets/datasets/dataloader_utils.py +++ /dev/null @@ -1,162 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import time -import random -import torch -from video_llama.datasets.data_utils import move_to_cuda -from torch.utils.data import DataLoader - - -class MultiIterLoader: - """ - A simple wrapper for iterating over multiple iterators. - - Args: - loaders (List[Loader]): List of Iterator loaders. - ratios (List[float]): List of ratios to sample from each loader. If None, all loaders are sampled uniformly. - """ - - def __init__(self, loaders, ratios=None): - # assert all loaders has __next__ method - for loader in loaders: - assert hasattr( - loader, "__next__" - ), "Loader {} has no __next__ method.".format(loader) - - if ratios is None: - ratios = [1.0] * len(loaders) - else: - assert len(ratios) == len(loaders) - ratios = [float(ratio) / sum(ratios) for ratio in ratios] - - self.loaders = loaders - self.ratios = ratios - - def __next__(self): - # random sample from each loader by ratio - loader_idx = random.choices(range(len(self.loaders)), self.ratios, k=1)[0] - return next(self.loaders[loader_idx]) - - -class PrefetchLoader(object): - """ - Modified from https://github.com/ChenRocks/UNITER. 
- - overlap compute and cuda data transfer - (copied and then modified from nvidia apex) - """ - - def __init__(self, loader): - self.loader = loader - self.stream = torch.cuda.Stream() - - def __iter__(self): - loader_it = iter(self.loader) - self.preload(loader_it) - batch = self.next(loader_it) - while batch is not None: - is_tuple = isinstance(batch, tuple) - if is_tuple: - task, batch = batch - - if is_tuple: - yield task, batch - else: - yield batch - batch = self.next(loader_it) - - def __len__(self): - return len(self.loader) - - def preload(self, it): - try: - self.batch = next(it) - except StopIteration: - self.batch = None - return - # if record_stream() doesn't work, another option is to make sure - # device inputs are created on the main stream. - # self.next_input_gpu = torch.empty_like(self.next_input, - # device='cuda') - # self.next_target_gpu = torch.empty_like(self.next_target, - # device='cuda') - # Need to make sure the memory allocated for next_* is not still in use - # by the main stream at the time we start copying to next_*: - # self.stream.wait_stream(torch.cuda.current_stream()) - with torch.cuda.stream(self.stream): - self.batch = move_to_cuda(self.batch) - # more code for the alternative if record_stream() doesn't work: - # copy_ will record the use of the pinned source tensor in this - # side stream. - # self.next_input_gpu.copy_(self.next_input, non_blocking=True) - # self.next_target_gpu.copy_(self.next_target, non_blocking=True) - # self.next_input = self.next_input_gpu - # self.next_target = self.next_target_gpu - - def next(self, it): - torch.cuda.current_stream().wait_stream(self.stream) - batch = self.batch - if batch is not None: - record_cuda_stream(batch) - self.preload(it) - return batch - - def __getattr__(self, name): - method = self.loader.__getattribute__(name) - return method - - -def record_cuda_stream(batch): - if isinstance(batch, torch.Tensor): - batch.record_stream(torch.cuda.current_stream()) - elif isinstance(batch, list) or isinstance(batch, tuple): - for t in batch: - record_cuda_stream(t) - elif isinstance(batch, dict): - for t in batch.values(): - record_cuda_stream(t) - else: - pass - - -class IterLoader: - """ - A wrapper to convert DataLoader as an infinite iterator. 
- - Modified from: - https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/iter_based_runner.py - """ - - def __init__(self, dataloader: DataLoader, use_distributed: bool = False): - self._dataloader = dataloader - self.iter_loader = iter(self._dataloader) - self._use_distributed = use_distributed - self._epoch = 0 - - @property - def epoch(self) -> int: - return self._epoch - - def __next__(self): - try: - data = next(self.iter_loader) - except StopIteration: - self._epoch += 1 - if hasattr(self._dataloader.sampler, "set_epoch") and self._use_distributed: - self._dataloader.sampler.set_epoch(self._epoch) - time.sleep(2) # Prevent possible deadlock during epoch transition - self.iter_loader = iter(self._dataloader) - data = next(self.iter_loader) - - return data - - def __iter__(self): - return self - - def __len__(self): - return len(self._dataloader) diff --git a/spaces/t13718236382/bingoGPT4/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/t13718236382/bingoGPT4/src/lib/hooks/use-copy-to-clipboard.tsx deleted file mode 100644 index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/src/lib/hooks/use-copy-to-clipboard.tsx +++ /dev/null @@ -1,33 +0,0 @@ -'use client' - -import * as React from 'react' - -export interface useCopyToClipboardProps { - timeout?: number -} - -export function useCopyToClipboard({ - timeout = 2000 -}: useCopyToClipboardProps) { - const [isCopied, setIsCopied] = React.useState(false) - - const copyToClipboard = (value: string) => { - if (typeof window === 'undefined' || !navigator.clipboard?.writeText) { - return - } - - if (!value) { - return - } - - navigator.clipboard.writeText(value).then(() => { - setIsCopied(true) - - setTimeout(() => { - setIsCopied(false) - }, timeout) - }) - } - - return { isCopied, copyToClipboard } -} diff --git a/spaces/tddschn/yaml-parser/README.md b/spaces/tddschn/yaml-parser/README.md deleted file mode 100644 index 0aaa574b6975a87b31b4f477c4427a75b72b4245..0000000000000000000000000000000000000000 --- a/spaces/tddschn/yaml-parser/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Yaml Parser -emoji: 👁 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/terfces0erbo/CollegeProjectV2/A Cor No Processo Criativo Pdf Download BETTER.md b/spaces/terfces0erbo/CollegeProjectV2/A Cor No Processo Criativo Pdf Download BETTER.md deleted file mode 100644 index c3b0182e3310b7ceac9478188939494d7a854049..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/A Cor No Processo Criativo Pdf Download BETTER.md +++ /dev/null @@ -1,11 +0,0 @@ -
                    -

                    download processo de como fazer, processo, processo, processo processo de processos de processo criativo de carreira, processo de como fazer. the matrix - hbo. if you're looking for other pdf files related to "processo criativo: do processo de modelar processo processo criativo processo.

                    -

                    choose any of the processos criativos processo criativo defined in the template. all of the. the line can be as vivid as pink while the layer needs to be as black as grey. the right composition, placement, and colour balance are needed. if you find it complicated.

                    -

                    a cor no processo criativo pdf download


                    DOWNLOADhttps://bytlly.com/2uGiKZ



                    -

                    - download this colour sequence design character background file for free right now!. como isso!! vamos lá! processo criativo de design grfico. creative drawing - design - inspiration - process | creative creative design. everything you need to make winning ideas better. view the this tool as a basic template or. processo criativo de vetorizao - fondos de pantalla de deadpool clipart image is available in pictures/clipart/cliparts.

                    -

                    meu projeto do curso: design e identidade visual: explore processos criativos. meu projeto do curso: design e identidade. your creative process to make your ideas better. the student's name piccadilly proposal design kit. the process of the building process.

                    -

                    upload processo criativo. the process of the building process. there is no wrong answer. processo criativo de vetorizao - fondos de pantalla de deadpool clipart image. the creative process and. level by level, you might at some point start to notice a pattern. you probably noticed this pattern before.

                    -

                    - download this transparency graphic design character file for free right now!. 4: windows 8 presentation desktop. the process of the building process. guidar a processo criativo di a cor do processo criativo fazer. 14; 1; 0. processo criativo de minuca adobe photoshop en ubuntu. de esta forma, puedo ver visualmente.

                    -

                    899543212b
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/CS 1.6 Clean Crack.md b/spaces/terfces0erbo/CollegeProjectV2/CS 1.6 Clean Crack.md deleted file mode 100644 index 10d3fcea83aaa9be51c8a507367ce6067390d43f..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/CS 1.6 Clean Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

                    CS 1.6 Clean Crack


                    Download File ✓✓✓ https://bytlly.com/2uGkWQ



                    -
                    -at 1920x1080 i get 20 fps on cs 1.6 on csgo 1920x1080 i get 100+ fps pls help how can i get more fps on cs 1.6. 4d29de3e1b
                    -
                    -
                    -

                    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Download BETTERtransformersthegamepcfullversion.md b/spaces/terfces0erbo/CollegeProjectV2/Download BETTERtransformersthegamepcfullversion.md deleted file mode 100644 index ecb1d5e69ac6b394110abdcff8aff4b136a75d16..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Download BETTERtransformersthegamepcfullversion.md +++ /dev/null @@ -1,13 +0,0 @@ -

                    downloadtransformersthegamepcfullversion


                    Download File ►►► https://bytlly.com/2uGkyI



                    -
                    -Free Download-PC Game Transformers: Full Game. Transformers: The Game, a game taken from the Transformers action movie, most people ... Download Transformers: The Game. -The Complete Game Collection of PC Game Transformers: The Game, a game taken from the Transformers action movie, most people ... -Download game Transformers: The Game - full versions of games from R.G.Mechanics Download games for PC Transformers: The Game, a game taken from the Transformers action movie, most people ... -Download Transformers: The Game via torrent for free on PC. -Full version of the game. -Game Transformers: The Game. -Transformers: The Game. -Full version of the game. 8a78ff9644
                    -
                    -
                    -

                    diff --git a/spaces/thanhtvt/uetasr/app.py b/spaces/thanhtvt/uetasr/app.py deleted file mode 100644 index 2889a4d836fd29dcb70b1fcaa8177b1dc5ba36bc..0000000000000000000000000000000000000000 --- a/spaces/thanhtvt/uetasr/app.py +++ /dev/null @@ -1,303 +0,0 @@ -import base64 -import gradio as gr -import librosa -import logging -import os -import soundfile as sf -import subprocess -import tempfile -import urllib.request - -from datetime import datetime -from time import time - -from examples import examples -from model import UETASRModel - - -def get_duration(filename: str) -> float: - return librosa.get_duration(path=filename) - - -def convert_to_wav(in_filename: str) -> str: - out_filename = os.path.splitext(in_filename)[0] + ".wav" - logging.info(f"Converting {in_filename} to {out_filename}") - y, sr = librosa.load(in_filename, sr=16000) - sf.write(out_filename, y, sr) - return out_filename - - -def build_html_output(s: str, style: str = "result_item_success"): - return f""" -
                    -
                    - {s} -
                    -
                    - """ - - -def process_url( - url: str, - decoding_method: str, - beam_size: int, - max_symbols_per_step: int, -): - logging.info(f"Processing URL: {url}") - with tempfile.NamedTemporaryFile() as f: - try: - urllib.request.urlretrieve(url, f.name) - return process(in_filename=f.name, - decoding_method=decoding_method, - beam_size=beam_size, - max_symbols_per_step=max_symbols_per_step) - except Exception as e: - logging.info(str(e)) - return "", build_html_output(str(e), "result_item_error") - - -def process_uploaded_file( - in_filename: str, - decoding_method: str, - beam_size: int, - max_symbols_per_step: int, -): - if in_filename is None or in_filename == "": - return "", build_html_output( - "Please first upload a file and then click " - 'the button "submit for recognition"', - "result_item_error", - ) - - logging.info(f"Processing uploaded file: {in_filename}") - try: - return process(in_filename=in_filename, - decoding_method=decoding_method, - beam_size=beam_size, - max_symbols_per_step=max_symbols_per_step) - except Exception as e: - logging.info(str(e)) - return "", build_html_output(str(e), "result_item_error") - - -def process_microphone( - in_filename: str, - decoding_method: str, - beam_size: int, - max_symbols_per_step: int, -): - if in_filename is None or in_filename == "": - return "", build_html_output( - "Please first upload a file and then click " - 'the button "submit for recognition"', - "result_item_error", - ) - - logging.info(f"Processing microphone: {in_filename}") - try: - return process(in_filename=in_filename, - decoding_method=decoding_method, - beam_size=beam_size, - max_symbols_per_step=max_symbols_per_step) - except Exception as e: - logging.info(str(e)) - return "", build_html_output(str(e), "result_item_error") - - -def process( - in_filename: str, - decoding_method: str, - beam_size: int, - max_symbols_per_step: int, -): - logging.info(f"in_filename: {in_filename}") - - filename = convert_to_wav(in_filename) - - now = datetime.now() - date_time = now.strftime("%d/%m/%Y, %H:%M:%S.%f") - logging.info(f"Started at {date_time}") - - repo_id = "thanhtvt/uetasr-conformer_30.3m" - - start = time() - - recognizer = UETASRModel(repo_id, - decoding_method, - beam_size, - max_symbols_per_step) - text = recognizer.predict(filename) - - date_time = now.strftime("%d/%m/%Y, %H:%M:%S.%f") - end = time() - - duration = get_duration(filename) - rtf = (end - start) / duration - - logging.info(f"Finished at {date_time} s. Elapsed: {end - start: .3f} s") - - info = f""" - Wave duration : {duration: .3f} s
                    - Processing time: {end - start: .3f} s
                    - RTF: {end - start: .3f}/{duration: .3f} = {rtf:.3f}
                    - """ - if rtf > 1: - info += ( - "
                    We are loading required resources for the first run. " - "Please run again to measure the real RTF.
                    " - ) - - logging.info(info) - - return text, build_html_output(info) - - -title = "Vietnamese Automatic Speech Recognition with UETASR" -description = """ -This space shows how to use UETASR for Vietnamese Automatic Speech Recognition. - -It is running on CPU provided by Hugging Face 🤗 - -See more information by visiting the [Github repository](https://github.com/thanhtvt/uetasr/) -""" - -# css style is copied from -# https://huggingface.co/spaces/alphacep/asr/blob/main/app.py#L113 -css = """ -.result {display:flex;flex-direction:column} -.result_item {padding:15px;margin-bottom:8px;border-radius:15px;width:100%} -.result_item_success {background-color:mediumaquamarine;color:white;align-self:start} -.result_item_error {background-color:#ff7070;color:white;align-self:start} -""" - -demo = gr.Blocks(css=css) - - -with demo: - gr.Markdown(title) - - decode_method_radio = gr.Radio( - label="Decoding method", - choices=["greedy_search", "beam_search"], - value="greedy_search", - interactive=True, - ) - - beam_size_slider = gr.Slider( - label="Beam size", - minimum=1, - maximum=20, - step=1, - value=1, - interactive=False, - ) - - def interact_beam_slider(decoding_method): - if decoding_method == "greedy_search": - return gr.update(value=1, interactive=False) - else: - return gr.update(interactive=True) - - decode_method_radio.change(interact_beam_slider, - decode_method_radio, - beam_size_slider) - - max_symbols_per_step_slider = gr.Slider( - label="Maximum symbols per step", - minimum=1, - maximum=20, - step=1, - value=5, - interactive=True, - visible=True, - ) - - with gr.Tabs(): - with gr.TabItem("Upload from disk"): - uploaded_file = gr.Audio( - source="upload", # Choose between "microphone", "upload" - type="filepath", - label="Upload from disk", - ) - upload_button = gr.Button("Submit for recognition") - uploaded_output = gr.Textbox(label="Recognized speech from uploaded file") - uploaded_html_info = gr.HTML(label="Info") - - gr.Examples( - examples=examples, - inputs=uploaded_file, - outputs=[uploaded_output, uploaded_html_info], - fn=process_uploaded_file, - ) - - with gr.TabItem("Record from microphone"): - microphone = gr.Audio( - source="microphone", - type="filepath", - label="Record from microphone", - ) - - record_button = gr.Button("Submit for recognition") - recorded_output = gr.Textbox(label="Recognized speech from recordings") - recorded_html_info = gr.HTML(label="Info") - - gr.Examples( - examples=examples, - inputs=microphone, - outputs=[uploaded_output, uploaded_html_info], - fn=process_microphone, - ) - - with gr.TabItem("From URL"): - url_textbox = gr.Textbox( - max_lines=1, - placeholder="URL to an audio file", - label="URL", - interactive=True, - ) - - url_button = gr.Button("Submit for recognition") - url_output = gr.Textbox(label="Recognized speech from URL") - url_html_info = gr.HTML(label="Info") - - upload_button.click( - process_uploaded_file, - inputs=[ - uploaded_file, - decode_method_radio, - beam_size_slider, - max_symbols_per_step_slider, - ], - outputs=[uploaded_output, uploaded_html_info], - ) - - record_button.click( - process_microphone, - inputs=[ - microphone, - decode_method_radio, - beam_size_slider, - max_symbols_per_step_slider, - ], - outputs=[recorded_output, recorded_html_info], - ) - - url_button.click( - process_url, - inputs=[ - url_textbox, - decode_method_radio, - beam_size_slider, - max_symbols_per_step_slider, - ], - outputs=[url_output, url_html_info], - ) - gr.Markdown(description) - - -if __name__ == "__main__": - 
formatter = "%(asctime)s %(levelname)s [%(filename)s:%(lineno)d] %(message)s" - - logging.basicConfig(format=formatter, level=logging.INFO) - - demo.launch() diff --git a/spaces/tharunG17/TharunChatGPT/app.py b/spaces/tharunG17/TharunChatGPT/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/tharunG17/TharunChatGPT/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/thePhenom21/AdaptLLM-medicine-LLM/app.py b/spaces/thePhenom21/AdaptLLM-medicine-LLM/app.py deleted file mode 100644 index 8c09cedd6d3182b52ae2f25d2e0cff61544eff9a..0000000000000000000000000000000000000000 --- a/spaces/thePhenom21/AdaptLLM-medicine-LLM/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/AdaptLLM/medicine-LLM").launch() \ No newline at end of file diff --git a/spaces/therealcyberlord/abstract-art-generation/app.py b/spaces/therealcyberlord/abstract-art-generation/app.py deleted file mode 100644 index cffb61c5e5639a780e2118d2d838499aa90351f5..0000000000000000000000000000000000000000 --- a/spaces/therealcyberlord/abstract-art-generation/app.py +++ /dev/null @@ -1,87 +0,0 @@ -import streamlit as st -import torch -import DCGAN -import SRGAN -from utils import color_histogram_mapping, denormalize_images -import torch.nn as nn -import random - -device = torch.device("cpu") - -if torch.cuda.is_available(): - device = torch.device("cuda") - -latent_size = 100 -checkpoint_path = "Checkpoints/150epochs.chkpt" - -st.title("Generating Abstract Art") - -st.sidebar.subheader("Configurations") -seed = st.sidebar.slider('Seed', -100000, 100000, 0) - -num_images = st.sidebar.slider('Number of Images', 1, 10, 4) - -use_srgan = st.sidebar.selectbox( - 'Apply image enhancement', - ('Yes', 'No') -) - -generate = st.sidebar.button("Generate") -st.write("Get started using the left side bar :sunglasses:") - -# caching the expensive model loading - -@st.cache(allow_output_mutation=True) -def load_dcgan(): - model = torch.jit.load('Checkpoints/dcgan.pt', map_location=device) - return model - -@st.cache(allow_output_mutation=True) -def load_esrgan(): - model_state_dict = torch.load("Checkpoints/esrgan.pt", map_location=device) - return model_state_dict - -# if the user wants to generate something new -if generate: - torch.manual_seed(seed) - random.seed(seed) - - sampled_noise = torch.randn(num_images, latent_size, 1, 1, device=device) - generator = load_dcgan() - generator.eval() - - with torch.no_grad(): - fakes = 
generator(sampled_noise).detach() - - # use srgan for super resolution - if use_srgan == "Yes": - # restore to the checkpoint - esrgan_generator = SRGAN.GeneratorRRDB(channels=3, filters=64, num_res_blocks=23).to(device) - esrgan_checkpoint = load_esrgan() - esrgan_generator.load_state_dict(esrgan_checkpoint) - - esrgan_generator.eval() - with torch.no_grad(): - enhanced_fakes = esrgan_generator(fakes).detach().cpu() - color_match = color_histogram_mapping(enhanced_fakes, fakes.cpu()) - - cols = st.columns(num_images) - for i in range(len(color_match)): - # denormalize and permute to correct color channel - cols[i].image(denormalize_images(color_match[i]).permute(1, 2, 0).numpy(), use_column_width=True) - st.image("pointing.jpg", use_column_width=True, caption="https://knowyourmeme.com/memes/two-soyjaks-pointing") - - # default setting -> vanilla dcgan generation - if use_srgan == "No": - fakes = fakes.cpu() - - cols = st.columns(num_images) - for i in range(len(fakes)): - cols[i].image(denormalize_images(fakes[i]).permute(1, 2, 0).numpy(), use_column_width=True) - st.image("pointing.jpg", use_column_width=True, caption="https://knowyourmeme.com/memes/two-soyjaks-pointing") - - - - - - diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/BitTorrent APK The Ultimate Guide to Peer-to-Peer File Sharing on Android.md b/spaces/tialenAdioni/chat-gpt-api/logs/BitTorrent APK The Ultimate Guide to Peer-to-Peer File Sharing on Android.md deleted file mode 100644 index a14700f3bc6c7079ee861b7abf504a146f3a89db..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/BitTorrent APK The Ultimate Guide to Peer-to-Peer File Sharing on Android.md +++ /dev/null @@ -1,49 +0,0 @@ -
                    -

                    How to Download and Install BitTorrent APK on Your Android Device

                    - -

                    BitTorrent is one of the most popular peer-to-peer file sharing protocols that allows users to download and upload large files such as movies, music, games, and software. BitTorrent works by splitting files into small pieces and distributing them among multiple users who can then download them from each other. This way, the download speed is increased and the network load is reduced.
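To make the piece-splitting idea concrete, here is a minimal Python sketch of how a torrent-style index can be built: the file is cut into fixed-size pieces and each piece gets its own SHA-1 hash, so a peer can verify every piece it receives independently. This is only an illustration of the concept, not code from any BitTorrent client; the 256 KiB piece size and the file name are assumptions made for the example.

```python
import hashlib

PIECE_SIZE = 256 * 1024  # assumed piece size for this sketch; real torrents choose it per file


def piece_hashes(path):
    """Split a file into fixed-size pieces and return one SHA-1 hex digest per piece."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            piece = f.read(PIECE_SIZE)
            if not piece:  # end of file
                break
            hashes.append(hashlib.sha1(piece).hexdigest())  # BitTorrent v1 verifies pieces with SHA-1
    return hashes


if __name__ == "__main__":
    for index, digest in enumerate(piece_hashes("example.iso")):  # hypothetical file name
        print(index, digest)
```

Because every piece can be checked on its own, a client is free to fetch different pieces from different peers at the same time, which is where the speed-up described above comes from.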

                    -

                    bittorrent apk


                    Downloadhttps://urlcod.com/2uKabY



                    - -

If you want to use BitTorrent on your Android device, you will need to download and install the BitTorrent APK file, which is the application package that contains the BitTorrent app for Android. You can download it from various sources online, but always choose a trusted and reputable site: some sites offer fake or malicious APK files that can harm your device or compromise your privacy.

                    - -

                    In this article, we will show you how to download and install the BitTorrent APK file on your Android device safely and easily. Follow these steps:

                    - -
                      -
1. Go to https://www.bittorrent.com/android/ on your Android device's browser. This is the official website of BitTorrent where you can download the latest version of the BitTorrent APK file.
2. Tap on the "Download BitTorrent" button and wait for the download to start. You may see a warning message that says "This type of file can harm your device. Do you want to keep BitTorrent.apk anyway?". Tap on "OK" to proceed.
3. Once the download is complete, open the BitTorrent APK file by tapping on the notification bar or by going to your device's file manager. You may see another warning message that says "For your security, your phone is not allowed to install unknown apps from this source.". Tap on "Settings" and enable the option "Allow from this source".
4. Go back to the BitTorrent APK file and tap on "Install". Wait for the installation to finish.
5. Once the installation is done, you can open the BitTorrent app by tapping on its icon on your device's home screen or app drawer.
                    - -

                    Congratulations! You have successfully downloaded and installed the BitTorrent APK file on your Android device. Now you can enjoy using BitTorrent to download and share files with other users around the world.

                    - -

                    How to Use BitTorrent on Your Android Device

                    - -

                    After installing the BitTorrent app on your Android device, you can start using it to download and share files with other users. Here are some basic steps to use BitTorrent on your Android device:

                    - -
                      -
1. Open the BitTorrent app and tap on the "+" icon at the bottom right corner. You can choose to add a torrent file from your device's storage, scan a QR code, or enter a magnet link.
2. Select the torrent file you want to download and tap on "Add". You can also customize the download settings such as the download location, the file selection, the bandwidth limit, and the download priority.
3. Tap on "Start" to begin the download. You can see the progress of the download on the main screen of the app. You can also pause, resume, or delete the download at any time.
4. Once the download is complete, you can tap on the file to open it with your device's default app. You can also share the file with other users by tapping on the "Share" icon.
                    - -

                    Note that downloading and sharing files with BitTorrent may consume a lot of data and battery on your device. You should always use a Wi-Fi connection and a charger when using BitTorrent. You should also respect the intellectual property rights of the content creators and only download and share files that are legal and authorized.

                    -

                    - -

                    How to Update BitTorrent APK on Your Android Device

                    - -

                    It is important to keep your BitTorrent app updated to enjoy the latest features and bug fixes. You can update your BitTorrent APK file on your Android device by following these steps:

                    - -
                      -
1. Go to https://www.bittorrent.com/android/ on your Android device's browser. This is the official website of BitTorrent where you can download the latest version of the BitTorrent APK file.
2. Tap on the "Download BitTorrent" button and wait for the download to start. You may see a warning message that says "This type of file can harm your device. Do you want to keep BitTorrent.apk anyway?". Tap on "OK" to proceed.
3. Once the download is complete, open the BitTorrent APK file by tapping on the notification bar or by going to your device's file manager. You may see another warning message that says "Do you want to install an update to this existing application? Your existing data will not be lost.". Tap on "Install" to confirm.
4. Wait for the installation to finish.
5. Once the installation is done, you can open the BitTorrent app by tapping on its icon on your device's home screen or app drawer.
                    - -

                    Congratulations! You have successfully updated your BitTorrent APK file on your Android device. Now you can enjoy using BitTorrent with its latest features and improvements.

                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Download Manager Cracked for Free in 2023.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Download Manager Cracked for Free in 2023.md deleted file mode 100644 index 43ef885081f4fad4c315720cf938974088002717..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Download Manager Cracked for Free in 2023.md +++ /dev/null @@ -1,18 +0,0 @@ -
                    -

                    How to Download Download Manager Cracked for Free

                    -

                    If you are looking for a way to download files faster and more reliably, you might be interested in Download Manager, a tool that can increase download speeds by up to 5 times, resume and schedule downloads, and integrate with various browsers. However, Download Manager is not a free software, and you need to purchase a license or serial key to use it. But what if you don't want to spend money on it? Is there a way to get Download Manager cracked for free?

                    -

                    download download manager cracked


                    Download →→→ https://urlcod.com/2uK7vE



                    -

                    The answer is yes, but you need to be careful. There are many websites that claim to offer Download Manager cracked versions, patches, keygens, or serial numbers, but most of them are fake, malicious, or illegal. Some of them may contain viruses, malware, spyware, or adware that can harm your computer or steal your personal information. Some of them may not work at all, or cause Download Manager to malfunction or crash. And some of them may violate the copyright laws and expose you to legal risks.

                    -

                    Therefore, before you download any Download Manager cracked version, you need to do some research and check the credibility and reputation of the website. You also need to scan the downloaded file with an antivirus program and make sure it is safe and clean. And you need to be aware of the consequences and risks of using a cracked software.

                    -

                    However, if you still want to try Download Manager cracked for free, here are some possible sources that you can check out:

                    -

                    -
                      -
                    • CrackingCity: This website offers IDM Crack with Internet Download Manager 6.41 Build 10 [Latest], which is a patch that can activate or reset Download Manager without a serial key. You need to disable your antivirus before using the patch, as it may detect it as a virus or trojan. You also need to enter the password 123 to extract the file.
                    • -
                    • YASIR252: This website offers Download IDM Full Crack 6.41 Build 11 Free, which is a full version of Download Manager with a crack included. You need to copy the crack file to the installation folder of Download Manager and run it as administrator.
                    • -
                    • Google Drive: This link contains Internet Download Manager (IDM) 6.30 Build 5 Full + Crack [TipuCrack], which is an older version of Download Manager with a crack file. You need to download both the setup file and the crack file from the link and follow the instructions.
                    • -
                    -

                    These are some of the possible sources that you can use to download Download Manager cracked for free. However, we do not recommend or endorse any of them, and we are not responsible for any damage or loss that may occur from using them. Use them at your own risk and discretion.

                    -

                    If you want to use Download Manager legally and safely, we suggest that you buy a license or serial key from the official website. You can also try the free trial version for 30 days and see if it meets your needs.

                    -

                    We hope this article has been helpful for you. If you have any questions or feedback, please leave a comment below.

                    -
                    -
                    \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cooking Fever APK The Most Popular Cooking Game on the Play Store.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cooking Fever APK The Most Popular Cooking Game on the Play Store.md deleted file mode 100644 index 596f24af925d0a817d129b28d71fd2d87a44ba12..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cooking Fever APK The Most Popular Cooking Game on the Play Store.md +++ /dev/null @@ -1,99 +0,0 @@ -
                    -

                    Cooking Fever: Restaurant Game APK - A Fun and Addictive Cooking Simulation Game

                    -

                    Do you love cooking and serving delicious food? Do you dream of running your own restaurant empire? If yes, then you should try Cooking Fever: Restaurant Game APK, a fun and addictive cooking simulation game that will test your culinary skills and time management abilities. In this game, you can cook hundreds of dishes, serve customers from different countries, and explore various restaurants and cuisines. You can also upgrade your kitchen and equipment, complete daily tasks and achievements, and play offline or online with friends. Whether you are a beginner or a master chef, you will find something to enjoy in Cooking Fever: Restaurant Game APK.

                    -

                    What is Cooking Fever: Restaurant Game APK?

                    -

                    Cooking Fever: Restaurant Game APK is a modified version of the original Cooking Fever game, which is available on Google Play Store. The APK version allows you to access all the features and content of the game without any restrictions or limitations. You can download and install the APK file on your Android device for free, and enjoy the game without any ads or in-app purchases. You can also play the game offline, without an internet connection, or online, with your Facebook friends or other players from around the world.

                    -

                    cooking fever restaurant game apk


                    Download Filehttps://bltlly.com/2uOlzN



                    -

                    Features of Cooking Fever: Restaurant Game APK

                    -

                    Hundreds of dishes to cook and serve

                    -

                    One of the main attractions of Cooking Fever: Restaurant Game APK is the variety of dishes that you can cook and serve in the game. You can choose from over 400 ingredients and 1500 recipes, ranging from burgers and pizzas to sushi and desserts. You can also customize your dishes with different toppings, sauces, and decorations. You will need to follow the recipes carefully, prepare the ingredients correctly, and cook them at the right temperature and time. You will also need to serve them quickly and accurately, according to the customers' orders and preferences.

                    -

                    Various restaurants and cuisines to explore

                    -

Another feature that makes Cooking Fever: Restaurant Game APK exciting is the diversity of restaurants and cuisines that you can explore in the game. You can start with a simple fast food joint, and then move on to more exotic and challenging locations, such as a Chinese restaurant, a seafood bistro, a pizza parlor, an ice cream shop, a bakery, a sushi bar, a breakfast cafe, a Mexican diner, a steakhouse, an Indian restaurant, a French patisserie, a Hawaiian luau, a Brazilian carnival, a Moroccan tagine, an Italian buffet, a Greek salad bar, an Alpine resort, a Thai food truck, a Vietnamese pho stand, a Korean barbecue grill, a Japanese ramen shop, an Australian outback pub, a Spanish tapas bar, a German beer garden, a Russian stroganoff house, an American diner, a British tea room, a Turkish kebab house, an Egyptian falafel stand, a Caribbean cocktail bar, a Nordic smorgasbord, a Chinese hot pot, a Moroccan couscous, a Hawaiian poke bowl, a Brazilian churrasco, an Italian gelato, a French crepe, a Greek gyro, a Thai pad thai, a Vietnamese banh mi, a Korean bibimbap, a Japanese sushi, an Australian pavlova, a Spanish paella, a German pretzel, a Russian blini, an American burger, a British fish and chips, a Turkish baklava, and more. Each restaurant has its own theme, design, menu, and customers. You will need to adapt to the different cuisines and cultures, and learn new cooking techniques and skills. You will also face different challenges and surprises, such as VIP customers, food critics, special events, and disasters.

                    -

                    Upgrade your kitchen and equipment

                    -

                    As you progress in Cooking Fever: Restaurant Game APK, you will need to upgrade your kitchen and equipment to keep up with the increasing demand and complexity of the dishes. You can use the gems and coins that you earn from serving customers to buy new stoves, ovens, fryers, grills, mixers, blenders, toasters, microwaves, coffee machines, ice cream makers, popcorn machines, pizza ovens, sushi rollers, waffle irons, chocolate fountains, and more. You can also buy new utensils, pots, pans, knives, cutting boards, plates, cups, glasses, trays, napkins, and more. Upgrading your kitchen and equipment will help you cook faster, serve more customers, reduce waiting time, increase customer satisfaction, and earn more tips.

                    -

                    Complete daily tasks and achievements

                    -

                    Besides cooking and serving customers, you can also complete daily tasks and achievements in Cooking Fever: Restaurant Game APK. These are optional challenges that will reward you with extra gems and coins, as well as trophies and stars. Some examples of daily tasks are serving a certain number of customers, earning a certain amount of money, using a certain number of boosters or power-ups, or cooking a certain dish. Some examples of achievements are completing a certain number of levels or restaurants, upgrading your kitchen or equipment to a certain level, collecting a certain number of gems or coins, or serving a certain type of customer. Completing daily tasks and achievements will help you level up faster, unlock more content, and improve your skills.

                    -

                    Play offline or online with friends

                    -

                    One of the best features of Cooking Fever: Restaurant Game APK is that you can play it offline or online with friends. If you play offline, you can enjoy the game without any internet connection, and save your progress on your device. If you play online, you can connect your game to your Facebook account, and invite your friends to join you in the cooking adventure. You can also visit your friends' restaurants, send and receive gifts, and compete with them in leaderboards and tournaments. Playing online with friends will make the game more fun, social, and challenging.

                    -

                    How to download and install Cooking Fever: Restaurant Game APK?

                    -

                    Download the APK file from a trusted source

                    -

                    The first step to download and install Cooking Fever: Restaurant Game APK is to find a trusted source that offers the APK file for free. You can search for the APK file on Google or other search engines, or use a reliable website that provides APK files for various games and apps. Some examples of such websites are APKPure, APKMirror, or APKMonk. You can also use the link below to download the APK file directly:

                    -

                    Cooking Fever: Restaurant Game APK Download

                    -

                    Enable unknown sources on your device

                    -

                    The next step to download and install Cooking Fever: Restaurant Game APK is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. To enable unknown sources on your device, follow these steps:

                    -

                    cooking fever restaurant game apk download
                    -cooking fever restaurant game apk mod
                    -cooking fever restaurant game apk latest version
                    -cooking fever restaurant game apk for android
                    -cooking fever restaurant game apk free
                    -cooking fever restaurant game apk offline
                    -cooking fever restaurant game apk hack
                    -cooking fever restaurant game apk unlimited money
                    -cooking fever restaurant game apk update
                    -cooking fever restaurant game apk old version
                    -cooking fever restaurant game apk for pc
                    -cooking fever restaurant game apk online
                    -cooking fever restaurant game apk no ads
                    -cooking fever restaurant game apk full version
                    -cooking fever restaurant game apk cheats
                    -cooking fever restaurant game apk premium
                    -cooking fever restaurant game apk 2023
                    -cooking fever restaurant game apk install
                    -cooking fever restaurant game apk review
                    -cooking fever restaurant game apk gameplay
                    -cooking fever restaurant game apk tips and tricks
                    -cooking fever restaurant game apk best restaurants
                    -cooking fever restaurant game apk new restaurants
                    -cooking fever restaurant game apk flipping pancake
                    -cooking fever restaurant game apk sushi bar
                    -cooking fever restaurant game apk bakery
                    -cooking fever restaurant game apk pizza shop
                    -cooking fever restaurant game apk ice cream bar
                    -cooking fever restaurant game apk chinese restaurant
                    -cooking fever restaurant game apk indian diner
                    -cooking fever restaurant game apk breakfast cafe
                    -cooking fever restaurant game apk seafood bistro
                    -cooking fever restaurant game apk corn dog van
                    -cooking fever restaurant game apk paradise cocktail bar
                    -cooking fever restaurant game apk mexican buffet
                    -cooking fever restaurant game apk house of crab
                    -cooking fever restaurant game apk salad bar
                    -cooking fever restaurant game apk aloha bistro
                    -cooking fever restaurant game apk sunset waffles
                    -cooking fever restaurant game apk thai food stall
                    -cooking fever restaurant game apk smoky grill bbq
                    -cooking fever restaurant game apk italian buffet
                    -cooking fever restaurant game apk cupcakes shop
                    -cooking fever restaurant game apk fast food court
                    -cooking fever restaurant game apk sports bar
                    -cooking fever restaurant game apk pizzeria

- Go to Settings > Security > Unknown Sources
- Tap on the switch to turn it on
- Confirm by tapping OK

                    Note: The steps may vary depending on your device model and Android version.

                    -

                    Install the APK file and launch the game

                    -

                    The final step to download and install Cooking Fever: Restaurant Game APK is to install the APK file and launch the game. To install the APK file, follow these steps:

- Locate the downloaded APK file on your device
- Tap on it to open it
- Tap on Install
- Wait for the installation process to finish
- Tap on Open

                    Congratulations! You have successfully downloaded and installed Cooking Fever: Restaurant Game APK on your device. You can now enjoy the game without any restrictions or limitations.

                    -

                    Tips and tricks for playing Cooking Fever: Restaurant Game APK

                    -

                    Manage your time and customers efficiently

                    -

                    The key to success in Cooking Fever: Restaurant Game APK is to manage your time and customers efficiently. You will need to cook and serve as many customers as possible within the given time limit, and keep them happy and satisfied. To do this, you should follow these tips:

- Prepare the dishes in advance and store them on the warmers
- Serve the customers in the order they arrive, and don't make them wait too long
- Use the right ingredients and dishes for each customer, and don't make any mistakes
- Pay attention to the customers' expressions and moods, and try to cheer them up with smiles or treats
- Collect the money and tips as soon as possible, and don't let them pile up on the counter

                    Use boosters and power-ups wisely

                    -

                    Another tip for playing Cooking Fever: Restaurant Game APK is to use boosters and power-ups wisely. These are special items that can help you improve your performance and score in the game. You can buy them with gems or coins, or get them for free by watching ads or completing tasks. Some examples of boosters and power-ups are:

- Instant Cooking: This booster allows you to cook any dish instantly, without any waiting time
- Food Warmer: This booster keeps your dishes warm and fresh, and prevents them from burning or spoiling
- Automatic Food Machine: This booster automatically prepares and serves the dishes for you, without any input from you
- Double Coins: This power-up doubles the amount of coins that you earn from each customer
- Double Experience: This power-up doubles the amount of experience that you earn from each level
- Customer Wait Time Increase: This power-up increases the amount of time that customers are willing to wait for their orders
- Customer Tip Increase: This power-up increases the amount of tips that customers give you

                    You should use these boosters and power-ups strategically, depending on the level of difficulty and your goals. You should also save them for the harder levels or restaurants, where you need more help.

                    -

                    Collect gems and coins to unlock more content

                    -

                    The final tip for playing Cooking Fever: Restaurant Game APK is to collect gems and coins to unlock more content. Gems and coins are the main currencies in the game, which you can use to buy new restaurants, kitchen upgrades, equipment, boosters, power-ups, and more. You can earn gems and coins by serving customers, completing levels, achieving goals, watching ads, or buying them with real money. You can also get free gems and coins by logging in daily, playing the casino, or visiting your friends' restaurants. You should collect as many gems and coins as possible, and spend them wisely, to unlock more content and enjoy the game.

                    -

                    Conclusion

                    -

                    Cooking Fever: Restaurant Game APK is a fun and addictive cooking simulation game that will test your culinary skills and time management abilities. You can cook hundreds of dishes, serve customers from different countries, and explore various restaurants and cuisines. You can also upgrade your kitchen and equipment, complete daily tasks and achievements, and play offline or online with friends. Whether you are a beginner or a master chef, you will find something to enjoy in Cooking Fever: Restaurant Game APK.

                    -

                    If you want to download and install Cooking Fever: Restaurant Game APK on your device, you can follow the steps above. You can also use the tips and tricks above to improve your performance and score in the game. Cooking Fever: Restaurant Game APK is a game that will keep you entertained for hours, and make you feel like a real chef.

                    -

                    So what are you waiting for? Download Cooking Fever: Restaurant Game APK now, and start cooking!

                    -

                    FAQs

                    -

                    Here are some frequently asked questions about Cooking Fever: Restaurant Game APK:

                    -
                      -
                    1. Is Cooking Fever: Restaurant Game APK safe to download and install?

                      Yes, Cooking Fever: Restaurant Game APK is safe to download and install, as long as you use a trusted source that offers the APK file for free. You should also scan the APK file with an antivirus program before installing it on your device.

                      -
2. What are the differences between Cooking Fever: Restaurant Game APK and the original Cooking Fever game?

                      The main differences between Cooking Fever: Restaurant Game APK and the original Cooking Fever game are that the APK version allows you to access all the features and content of the game without any restrictions or limitations. You can download and install the APK file on your Android device for free, and enjoy the game without any ads or in-app purchases. You can also play the game offline, without an internet connection, or online, with your Facebook friends or other players from around the world.

                      -
3. How can I update Cooking Fever: Restaurant Game APK?

                      To update Cooking Fever: Restaurant Game APK, you need to download and install the latest version of the APK file from a trusted source. You should also uninstall the previous version of the APK file before installing the new one. You can also check for updates within the game, by tapping on the settings icon, and then on the update button.

4. How can I get more gems and coins in Cooking Fever: Restaurant Game APK?

                      There are several ways to get more gems and coins in Cooking Fever: Restaurant Game APK. You can earn them by serving customers, completing levels, achieving goals, watching ads, or buying them with real money. You can also get free gems and coins by logging in daily, playing the casino, or visiting your friends' restaurants.

                      -
5. How can I play Cooking Fever: Restaurant Game APK with my friends?

                      To play Cooking Fever: Restaurant Game APK with your friends, you need to connect your game to your Facebook account. You can then invite your friends to join you in the cooking adventure, or accept their invitations. You can also visit your friends' restaurants, send and receive gifts, and compete with them in leaderboards and tournaments.

                      -

                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Kpg 49d Software Serial Number 4.2 22.md b/spaces/tioseFevbu/cartoon-converter/scripts/Kpg 49d Software Serial Number 4.2 22.md deleted file mode 100644 index c2dee4f01ac84fa0aa93bdac0678b48f7fee6e55..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Kpg 49d Software Serial Number 4.2 22.md +++ /dev/null @@ -1,69 +0,0 @@ -
                    -

                    Kpg 49d Software Serial Number 4.2 22: What You Need to Know

                    -

                    If you are looking for a reliable and easy-to-use programming software for your Kenwood radios and transceivers, you might want to check out Kpg 49d software. This software allows you to program various models of Kenwood radios and transceivers, such as TK-880, TK-980, TK-981, TK-7180, TK-8180, etc. However, before you can use this software, you need to register it with a valid serial number. In this article, we will tell you everything you need to know about Kpg 49d software serial number 4.2 22, including how to get it, how to install and use it, and how to make the most out of it.

                    -

                    Kpg 49d Software Serial Number 4.2 22


                    DOWNLOADhttps://urlcod.com/2uHyMA



                    -

                    Introduction

                    -

                    Kpg 49d software is a radio service software (RSS) developed by Kenwood Corporation for programming various models of Kenwood radios and transceivers. It allows you to read and write data from and to your Kenwood radio or transceiver, such as frequencies, channels, tones, power levels, scan lists, etc. You can also backup and restore your data using Kpg 49d software, as well as update and upgrade your firmware.

                    -

                    However, in order to use Kpg 49d software, you need to register it with a valid serial number. The serial number is a combination of letters and numbers that acts as a license key for activating the software. Without the serial number, you cannot install or run the software on your PC.

                    -

                    So, how can you get a serial number for Kpg 49d software? One way is to purchase the original CD-ROM from Kenwood or an authorized dealer. The CD-ROM contains the installation files and the serial number for the software. However, this option might be expensive or unavailable in some regions.

                    -

                    Another way is to download the software from online sources, such as HamFiles or other websites that offer radio programming software. HamFiles is a popular website that provides free downloads of various radio programming software for amateur radio enthusiasts. You can find Kpg 49d software version 4.0 or version 4.02 on HamFiles, along with the serial number for each version. However, you need to register an account on HamFiles before you can download any files from it.

                    -

                    -

                    Alternatively, you can also search for other websites that offer Kpg 49 d software serial number 4.2 22 on the internet, but be careful of the source and the quality of the files. Some websites might contain malware or viruses that can harm your PC or your radio. Some files might also be corrupted or incomplete, which can cause errors or failures in the programming process. Therefore, always scan the files before downloading them and verify their authenticity and integrity.

                    -

                    Features and Benefits of Kpg 49d Software

                    -

                    Kpg 49d software is a powerful and versatile programming software for Kenwood radios and transceivers. It has many features and benefits that make it a preferred choice for many radio users and programmers. Here are some of the main features and benefits of Kpg 49d software:

                    -
                      -
                    • It supports a wide range of Kenwood radios and transceivers, such as TK-880, TK-980, TK-981, TK-7180, TK-8180, etc. You can program different models of radios and transceivers with the same software, which saves you time and money.
                    • -
                    • It has a user-friendly and intuitive interface that makes it easy to use and operate. You can access all the functions and settings of the software from the main menu or the toolbar. You can also customize the layout and appearance of the software according to your preferences.
                    • -
                    • It allows you to read and write data from and to your Kenwood radio or transceiver, such as frequencies, channels, tones, power levels, scan lists, etc. You can edit and modify the data using the built-in editor or the spreadsheet mode. You can also copy and paste data between different radios or transceivers.
                    • -
                    • It allows you to backup and restore your data using Kpg 49d software. You can save your data as a file on your PC or a diskette for future use. You can also load your data from a file or a diskette to your Kenwood radio or transceiver. This way, you can protect your data from loss or damage and restore it whenever you need it.
                    • -
                    • It allows you to update and upgrade your firmware using Kpg 49d software. You can download the latest firmware version from Kenwood's website or other sources and install it on your Kenwood radio or transceiver using Kpg 49d software. This way, you can improve the performance and functionality of your radio or transceiver and fix any bugs or issues.
                    • -
                    -

                    These are just some of the features and benefits of Kpg 49d software. There are many more functions and capabilities that you can explore and discover by using this software.

                    -

                    How to Install and Use Kpg 49d Software

                    -

                    Now that you know what Kpg 49d software is and what it can do for you, you might be wondering how to install and use it on your PC. Here are the steps that you need to follow:

                    -
                      -
                    1. First, you need to get a serial number for Kpg 49d software. As we mentioned before, you can get it from HamFiles or other online sources, or from the original CD-ROM if you have it.
                    2. -
                    3. Next, you need to download the installation files for Kpg 49d software from HamFiles or other online sources, or from the original CD-ROM if you have it. Make sure that you download the correct version of the software that matches your serial number.
                    4. -
                    5. Then, you need to run the installation file on your PC and follow the instructions on the screen. You will be asked to enter the serial number during the installation process. Make sure that you enter it correctly and without any spaces or dashes.
                    6. -
                    7. After the installation is complete, you need to connect your Kenwood radio or transceiver to your PC using a programming cable. The programming cable is a special cable that connects your radio or transceiver to your PC's serial port or USB port. You can buy a programming cable from Kenwood or an authorized dealer, or make one yourself if you have the skills and materials.
                    8. -
                    9. Next, you need to launch Kpg 49d software on your PC by clicking on its icon on your desktop or in your start menu. You will see the main window of the software with various menus and buttons.
                    10. -
                    11. Then, you need to configure Kpg 49d software according to your radio or transceiver model and type. You can do this by clicking on "File" > "New" > "Model" > "Type" in the main menu. You will see a list of supported models and types of radios and transceivers. Select the one that matches yours and click "OK".
                    12. -
                    13. Next, you need to set up the communication parameters between Kpg 49d software and your radio or transceiver. You can do this by clicking on "Setup" > "Communication" in the main menu. You will see a window with various options for setting the port, speed, parity, data bits, stop bits, and flow control. Select the ones that match your programming cable and your radio or transceiver and click "OK".
                    14. -
                    15. Next, you need to read the data from your radio or transceiver using Kpg 49d software. You can do this by clicking on "Program" > "Read" in the main menu. You will see a progress bar showing the reading process. Wait until it is finished and you will see the data displayed on the screen.
                    16. -
                    17. Then, you can edit and modify the data using Kpg 49d software. You can use the built-in editor or the spreadsheet mode to change the values of the data, such as frequencies, channels, tones, power levels, scan lists, etc. You can also copy and paste data between different radios or transceivers.
                    18. -
                    19. Next, you need to write the data to your radio or transceiver using Kpg 49d software. You can do this by clicking on "Program" > "Write" in the main menu. You will see a progress bar showing the writing process. Wait until it is finished and you will hear a beep sound from your radio or transceiver indicating that the programming is successful.
                    20. -
                    21. Finally, you can test and verify the programming by turning on your radio or transceiver and checking its functions and settings. You can also use Kpg 49d software to monitor and control your radio or transceiver remotely from your PC.
                    22. -
                    -

                    These are the basic steps for installing and using Kpg 49d software. You can also explore other features and functions of the software by reading the user manual or accessing the online help.

                    -

                    Tips and Tricks for Using Kpg 49d Software

                    -

                    Kpg 49d software is a powerful and versatile programming software for Kenwood radios and transceivers. However, it also has some limitations and challenges that you need to be aware of and overcome. Here are some tips and tricks for using Kpg 49d software effectively and efficiently:

                    -
                      -
                    • Always backup your data before programming your radio or transceiver using Kpg 49d software. You can save your data as a file on your PC or a diskette using Kpg 49d software. This way, you can restore your data in case of any errors or failures during the programming process.
                    • -
                    • Always update and upgrade your Kpg 49d software to the latest version available. You can download the latest version from Kenwood's website or other sources and install it on your PC using Kpg 49d software. This way, you can improve the performance and functionality of your software and fix any bugs or issues.
                    • -
                    • Always troubleshoot common problems and errors with Kpg 49d software before giving up or seeking help from others. You can check the user manual or the online help for possible solutions and explanations for common problems and errors, such as communication errors, read/write errors, invalid serial number errors, etc.
                    • -
                    • Always access online resources and support for Kpg 49d software when you need more information or assistance. You can visit Kenwood's website or other websites that offer radio programming software for more tips and tricks, tutorials, videos, forums, blogs, etc. You can also contact Kenwood's customer service or technical support for more help.
                    • -
                    -

                    These are some of the tips and tricks for using Kpg 49d software. There are many more that you can learn and discover by using this software regularly and frequently.

                    -

                    Conclusion

                    -

                    Kpg 49d software is a reliable and easy-to-use programming software for Kenwood radios and transceivers. It allows you to program various models of Kenwood radios and transceivers, such as TK-880, TK-980, TK-981, TK-7180, TK-8180, etc. It has many features and benefits that make it a preferred choice for many radio users and programmers.

                    -

                    However, in order to use Kpg 49d software, you need to register it with a valid serial number. The serial number is a combination of letters and numbers that acts as a license key for activating the software. Without the serial number, you cannot install or run the software on your PC.

                    -

                    You can get a serial number for Kpg 49d software from HamFiles or other online sources, or from the original CD-ROM if you have it. However, you need to be careful of the source and the quality of the files that you download from the internet. Some websites might contain malware or viruses that can harm your PC or your radio. Some files might also be corrupted or incomplete, which can cause errors or failures in the programming process. Therefore, always scan the files before downloading them and verify their authenticity and integrity.

                    -

                    In this article, we have told you everything you need to know about Kpg 49d software serial number 4.2 22, including how to get it, how to install and use it, and how to make the most out of it. We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below.

                    -

                    If you are interested in trying out Kpg 49d software for yourself, you can download it from HamFiles or other online sources, or from the original CD-ROM if you have it. You can also visit Kenwood's website or other websites that offer radio programming software for more information and support. Remember to backup your data before programming your radio or transceiver using Kpg 49d software, and to update and upgrade your software to the latest version available.

                    -

                    Thank you for reading this article and happy programming!

                    -

                    FAQs

                    -

                    Here are some of the frequently asked questions about Kpg 49d software:

                    -
                      -
                    1. What is the difference between Kpg 49d software version 4.0 and version 4.02?
                    2. -

                      The main difference between Kpg 49d software version 4.0 and version 4.02 is that version 4.02 supports more models of Kenwood radios and transceivers, such as TK-7180E3, TK-8180E3, TK-7189E3, TK-8189E3, etc. Version 4.02 also has some bug fixes and improvements over version 4.0.

                      -
                    3. Can I use Kpg 49d software with other brands of radios and transceivers?
                    4. -

                      No, you cannot use Kpg 49d software with other brands of radios and transceivers. Kpg 49d software is designed specifically for Kenwood radios and transceivers, and it will not work with other brands of radios and transceivers. You need to use the appropriate programming software for your radio or transceiver brand.

                      -
                    5. Can I use Kpg 49d software with Windows 10?
                    6. -

                      Yes, you can use Kpg 49d software with Windows 10. However, you might need to run the software in compatibility mode or as an administrator to avoid any errors or issues. You can also check the user manual or the online help for more instructions on how to install and run Kpg 49d software on Windows 10.

                      -
                    7. Can I use Kpg 49d software without a programming cable?
                    8. -

                      No, you cannot use Kpg 49d software without a programming cable. The programming cable is a necessary component for connecting your Kenwood radio or transceiver to your PC and transferring data between them. Without the programming cable, you cannot communicate with your radio or transceiver using Kpg 49d software.

                      -
                    9. Can I use Kpg 49d software on a Mac or Linux computer?
                    10. -

                      No, you cannot use Kpg 49d software on a Mac or Linux computer. Kpg 49d software is only compatible with Windows operating systems, such as Windows XP, Windows Vista, Windows 7, Windows 8, and Windows 10. You need to use a Windows PC to run Kpg 49d software.

                      -

                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Libro De Calculo 4000 152.md b/spaces/tioseFevbu/cartoon-converter/scripts/Libro De Calculo 4000 152.md deleted file mode 100644 index 2a586ecc6c22f7f66473e067821f76bff2fbc8be..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Libro De Calculo 4000 152.md +++ /dev/null @@ -1,57 +0,0 @@ -
                    -``` -

                    Libro De Calculo 4000 152: A Comprehensive Guide to Calculus

                    -

                    If you are looking for a book that covers all the topics of calculus, from functions and limits to integrals and applications, you might want to check out Libro De Calculo 4000 152. This book, written by Victor Manuel Gonzalez Cabrera and published by Editorial Progreso, is a popular choice among students and teachers of mathematics in Latin America.

                    -

                    Libro De Calculo 4000 152


                    Download Ziphttps://urlcod.com/2uHxEV



                    -

                    In this article, we will review some of the features and benefits of Libro De Calculo 4000 152, as well as some of the drawbacks and challenges that you might face while using it. We will also provide some tips and resources on how to get the most out of this book and improve your calculus skills.

                    -

                    What is Libro De Calculo 4000 152?

                    -

                    Libro De Calculo 4000 152 is a calculus textbook that contains 246 pages of theory, examples, exercises, and problems. It covers the following topics:

                    -
                      -
• Funciones Algebraicas
• Limites de Funciones Algebraicas
• Derivada de Funciones Algebraicas
• Derivacion Implicita
• Tangentes, Normales y Angulos de Corte
• Maximos, Minimos, Inflexion en Funciones Algebraicas
• Problemas de Maximos y Minimos
• Funciones Trigonometricas
• Funciones Trigonometricas Inversas
• Funciones Logaritmicas y Exponenciales
• Problemas con Funciones Trascendentes
• Derivada con Respecto al Tiempo
• Diferenciacion
• Integracion de Funciones Algebraicas
• Integracion de Funciones Trascendentes
• Integracion por Partes
• Integracion por Sustitucion Trigonometrica
• Integracion por Sustitucion Algebraica
• Integracion por Descomposicion en Fracciones Simples
• Integrales Definidas
• Areas de Superficies Planas
• Volumenes de Revolucion
• Longitud de Arco
• Areas de Superficies de Revolucion
• Centroides de Figuras Planas
• Integracion Aproximada
                    -

                    The book is written in Spanish and uses the metric system for measurements. It also includes a glossary of terms and symbols, as well as answers to selected exercises and problems.
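To give a sense of the level these chapters work at, take Integracion por Partes from the list above: that chapter is organized around the standard rule below. The worked line is my own illustration and is not copied from the book.

$$\int u\,dv = uv - \int v\,du, \qquad \text{for example } \int x e^{x}\,dx = x e^{x} - \int e^{x}\,dx = (x-1)e^{x} + C.$$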

                    -

                    What are the benefits of Libro De Calculo 4000 152?

                    -

                    Some of the benefits of Libro De Calculo 4000 152 are:

                    -
                      -
• It covers a wide range of topics in calculus, from basic concepts to advanced applications.
• It provides clear explanations and examples for each topic, as well as numerous exercises and problems for practice and assessment.
• It follows a logical and progressive order of topics, starting from functions and limits and ending with integration techniques and applications.
• It uses real-world scenarios and applications to illustrate the relevance and usefulness of calculus.
• It is affordable and accessible, as it can be found online or in physical stores.
                    - -

                    What are the drawbacks of Libro De Calculo 4000 152?

                    - -

                    Some of the drawbacks of Libro De Calculo 4000 152 are:

                    -

                    - -
                      - -
• It is written in Spanish, which might be a challenge for non-native speakers or learners of the language.
                      -
                      -
                      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Mama Film German [BEST] Download.md b/spaces/tioseFevbu/cartoon-converter/scripts/Mama Film German [BEST] Download.md deleted file mode 100644 index e0b316f52b70abcefe7c4c2f9f5791f9cfdc3804..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Mama Film German [BEST] Download.md +++ /dev/null @@ -1,9 +0,0 @@ - -

                      Mama: A Terrifying Tale of Maternal Love

                      -

                      Mama is a 2013 supernatural horror film directed by Andy Muschietti, based on his short film of the same name. The film stars Jessica Chastain and Nikolaj Coster-Waldau as a couple who adopt two young sisters who were found alone in the woods after five years. However, they soon discover that the girls are haunted by a mysterious entity they call "Mama", who may not be willing to let them go.

                      -

The film was a commercial success and earned praise for its atmosphere, performances, and scares, although reviews from critics were mixed overall. If you are looking for a thrilling and chilling movie to watch, you can download Mama in German from various online platforms.

                      -

                      mama film german download


                      DOWNLOAD ✑ ✑ ✑ https://urlcod.com/2uHyPC



Mama was released in theaters on January 18, 2013, by Universal Pictures. The film received mixed reviews from critics, who praised the performances of Chastain and Coster-Waldau, the atmosphere, and the scares, but criticized the plot and writing. The film has a 64% approval rating on Rotten Tomatoes, based on 178 reviews, with an average rating of 5.9/10. The website's critical consensus reads, "If you're into old-school scares over cheap gore, you'll be able to get over Mama's confusing script and contrived plot devices." On Metacritic, the film has a score of 57 out of 100, based on 35 critics, indicating "mixed or average reviews". Audiences polled by CinemaScore gave the film an average grade of "B-" on an A+ to F scale.

                      -

The film was also a box office success, grossing $148.1 million worldwide against a $15 million budget, making it one of the highest-grossing horror films of 2013, behind The Conjuring. The film was nominated for several awards, including Best Horror Film at the Saturn Awards and Best Actress for Chastain at the Fangoria Chainsaw Awards.

Mama explores themes of motherhood, family, and loss. It contrasts the different maternal figures the girls encounter: Mama, a vengeful and possessive spirit; Annabel, who is reluctant and inexperienced; and Jean, who is caring but distant. It also shows how the girls cope with their trauma and attachment issues, as Victoria adapts to her new life while Lily remains loyal to Mama. The nature of Mama's love is itself called into question, since she is both protective of and harmful to the girls, and her actions are driven by her own tragic past as a mentally ill woman who killed her baby and herself in the 19th century. Finally, the film explores sacrifice: Lucas risks his life to find his nieces, Annabel grows to love and care for them, and Victoria chooses to stay with Annabel instead of Mama.

The film uses sound and music to build tension and suspense throughout the story. The score, composed by Fernando Velázquez, who also worked with director Andy Muschietti on his short film Mamá, mixes orchestral and electronic elements and gives Mama a haunting theme played on a theremin. It also incorporates sounds from the film's setting, such as wind, water, and creaking wood, while sound effects like whispers, growls, thumps, and moans suggest Mama's presence and movements. Songs by Jack White, From Autumn to Ashes, Gotye, and Tom Holkenborg contrast with the film's dark tone and add to the character development of Annabel.


                      \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/cli/__init__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/cli/__init__.py deleted file mode 100644 index e589bb917e23823e25f9fff7e0849c4d6d4a62bc..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/cli/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -"""Subpackage containing all of pip's command line interface related code -""" - -# This file intentionally does not import submodules diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/pyparsing/helpers.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/pyparsing/helpers.py deleted file mode 100644 index be8a3657884806a8e7bf5e8e338b3fc86eeffa5b..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/pyparsing/helpers.py +++ /dev/null @@ -1,1083 +0,0 @@ -# helpers.py -import html.entities -import re - -from . import __diag__ -from .core import * -from .util import _bslash, _flatten, _escape_regex_range_chars - - -# -# global helpers -# -def delimited_list( - expr: Union[str, ParserElement], - delim: Union[str, ParserElement] = ",", - combine: bool = False, - min: OptionalType[int] = None, - max: OptionalType[int] = None, - *, - allow_trailing_delim: bool = False, -) -> ParserElement: - """Helper to define a delimited list of expressions - the delimiter - defaults to ','. By default, the list elements and delimiters can - have intervening whitespace, and comments, but this can be - overridden by passing ``combine=True`` in the constructor. If - ``combine`` is set to ``True``, the matching tokens are - returned as a single token string, with the delimiters included; - otherwise, the matching tokens are returned as a list of tokens, - with the delimiters suppressed. - - If ``allow_trailing_delim`` is set to True, then the list may end with - a delimiter. - - Example:: - - delimited_list(Word(alphas)).parse_string("aa,bb,cc") # -> ['aa', 'bb', 'cc'] - delimited_list(Word(hexnums), delim=':', combine=True).parse_string("AA:BB:CC:DD:EE") # -> ['AA:BB:CC:DD:EE'] - """ - if isinstance(expr, str_type): - expr = ParserElement._literalStringClass(expr) - - dlName = "{expr} [{delim} {expr}]...{end}".format( - expr=str(expr.copy().streamline()), - delim=str(delim), - end=" [{}]".format(str(delim)) if allow_trailing_delim else "", - ) - - if not combine: - delim = Suppress(delim) - - if min is not None: - if min < 1: - raise ValueError("min must be greater than 0") - min -= 1 - if max is not None: - if min is not None and max <= min: - raise ValueError("max must be greater than, or equal to min") - max -= 1 - delimited_list_expr = expr + (delim + expr)[min, max] - - if allow_trailing_delim: - delimited_list_expr += Opt(delim) - - if combine: - return Combine(delimited_list_expr).set_name(dlName) - else: - return delimited_list_expr.set_name(dlName) - - -def counted_array( - expr: ParserElement, - int_expr: OptionalType[ParserElement] = None, - *, - intExpr: OptionalType[ParserElement] = None, -) -> ParserElement: - """Helper to define a counted list of expressions. - - This helper defines a pattern of the form:: - - integer expr expr expr... - - where the leading integer tells how many expr expressions follow. 
- The matched tokens returns the array of expr tokens as a list - the - leading count token is suppressed. - - If ``int_expr`` is specified, it should be a pyparsing expression - that produces an integer value. - - Example:: - - counted_array(Word(alphas)).parse_string('2 ab cd ef') # -> ['ab', 'cd'] - - # in this parser, the leading integer value is given in binary, - # '10' indicating that 2 values are in the array - binary_constant = Word('01').set_parse_action(lambda t: int(t[0], 2)) - counted_array(Word(alphas), int_expr=binary_constant).parse_string('10 ab cd ef') # -> ['ab', 'cd'] - - # if other fields must be parsed after the count but before the - # list items, give the fields results names and they will - # be preserved in the returned ParseResults: - count_with_metadata = integer + Word(alphas)("type") - typed_array = counted_array(Word(alphanums), int_expr=count_with_metadata)("items") - result = typed_array.parse_string("3 bool True True False") - print(result.dump()) - - # prints - # ['True', 'True', 'False'] - # - items: ['True', 'True', 'False'] - # - type: 'bool' - """ - intExpr = intExpr or int_expr - array_expr = Forward() - - def count_field_parse_action(s, l, t): - nonlocal array_expr - n = t[0] - array_expr <<= (expr * n) if n else Empty() - # clear list contents, but keep any named results - del t[:] - - if intExpr is None: - intExpr = Word(nums).set_parse_action(lambda t: int(t[0])) - else: - intExpr = intExpr.copy() - intExpr.set_name("arrayLen") - intExpr.add_parse_action(count_field_parse_action, call_during_try=True) - return (intExpr + array_expr).set_name("(len) " + str(expr) + "...") - - -def match_previous_literal(expr: ParserElement) -> ParserElement: - """Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks for - a 'repeat' of a previous expression. For example:: - - first = Word(nums) - second = match_previous_literal(first) - match_expr = first + ":" + second - - will match ``"1:1"``, but not ``"1:2"``. Because this - matches a previous literal, will also match the leading - ``"1:1"`` in ``"1:10"``. If this is not desired, use - :class:`match_previous_expr`. Do *not* use with packrat parsing - enabled. - """ - rep = Forward() - - def copy_token_to_repeater(s, l, t): - if t: - if len(t) == 1: - rep << t[0] - else: - # flatten t tokens - tflat = _flatten(t.as_list()) - rep << And(Literal(tt) for tt in tflat) - else: - rep << Empty() - - expr.add_parse_action(copy_token_to_repeater, callDuringTry=True) - rep.set_name("(prev) " + str(expr)) - return rep - - -def match_previous_expr(expr: ParserElement) -> ParserElement: - """Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks for - a 'repeat' of a previous expression. For example:: - - first = Word(nums) - second = match_previous_expr(first) - match_expr = first + ":" + second - - will match ``"1:1"``, but not ``"1:2"``. Because this - matches by expressions, will *not* match the leading ``"1:1"`` - in ``"1:10"``; the expressions are evaluated first, and then - compared, so ``"1"`` is compared with ``"10"``. Do *not* use - with packrat parsing enabled. 
- """ - rep = Forward() - e2 = expr.copy() - rep <<= e2 - - def copy_token_to_repeater(s, l, t): - matchTokens = _flatten(t.as_list()) - - def must_match_these_tokens(s, l, t): - theseTokens = _flatten(t.as_list()) - if theseTokens != matchTokens: - raise ParseException( - s, l, "Expected {}, found{}".format(matchTokens, theseTokens) - ) - - rep.set_parse_action(must_match_these_tokens, callDuringTry=True) - - expr.add_parse_action(copy_token_to_repeater, callDuringTry=True) - rep.set_name("(prev) " + str(expr)) - return rep - - -def one_of( - strs: Union[IterableType[str], str], - caseless: bool = False, - use_regex: bool = True, - as_keyword: bool = False, - *, - useRegex: bool = True, - asKeyword: bool = False, -) -> ParserElement: - """Helper to quickly define a set of alternative :class:`Literal` s, - and makes sure to do longest-first testing when there is a conflict, - regardless of the input order, but returns - a :class:`MatchFirst` for best performance. - - Parameters: - - - ``strs`` - a string of space-delimited literals, or a collection of - string literals - - ``caseless`` - treat all literals as caseless - (default= ``False``) - - ``use_regex`` - as an optimization, will - generate a :class:`Regex` object; otherwise, will generate - a :class:`MatchFirst` object (if ``caseless=True`` or ``asKeyword=True``, or if - creating a :class:`Regex` raises an exception) - (default= ``True``) - - ``as_keyword`` - enforce :class:`Keyword`-style matching on the - generated expressions - (default= ``False``) - - ``asKeyword`` and ``useRegex`` are retained for pre-PEP8 compatibility, - but will be removed in a future release - - Example:: - - comp_oper = one_of("< = > <= >= !=") - var = Word(alphas) - number = Word(nums) - term = var | number - comparison_expr = term + comp_oper + term - print(comparison_expr.search_string("B = 12 AA=23 B<=AA AA>12")) - - prints:: - - [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']] - """ - asKeyword = asKeyword or as_keyword - useRegex = useRegex and use_regex - - if ( - isinstance(caseless, str_type) - and __diag__.warn_on_multiple_string_args_to_oneof - ): - warnings.warn( - "More than one string argument passed to one_of, pass" - " choices as a list or space-delimited string", - stacklevel=2, - ) - - if caseless: - isequal = lambda a, b: a.upper() == b.upper() - masks = lambda a, b: b.upper().startswith(a.upper()) - parseElementClass = CaselessKeyword if asKeyword else CaselessLiteral - else: - isequal = lambda a, b: a == b - masks = lambda a, b: b.startswith(a) - parseElementClass = Keyword if asKeyword else Literal - - symbols: List[str] = [] - if isinstance(strs, str_type): - symbols = strs.split() - elif isinstance(strs, Iterable): - symbols = list(strs) - else: - raise TypeError("Invalid argument to one_of, expected string or iterable") - if not symbols: - return NoMatch() - - # reorder given symbols to take care to avoid masking longer choices with shorter ones - # (but only if the given symbols are not just single characters) - if any(len(sym) > 1 for sym in symbols): - i = 0 - while i < len(symbols) - 1: - cur = symbols[i] - for j, other in enumerate(symbols[i + 1 :]): - if isequal(other, cur): - del symbols[i + j + 1] - break - elif masks(cur, other): - del symbols[i + j + 1] - symbols.insert(i, other) - break - else: - i += 1 - - if useRegex: - re_flags: int = re.IGNORECASE if caseless else 0 - - try: - if all(len(sym) == 1 for sym in symbols): - # symbols are just single characters, create range regex pattern - patt 
= "[{}]".format( - "".join(_escape_regex_range_chars(sym) for sym in symbols) - ) - else: - patt = "|".join(re.escape(sym) for sym in symbols) - - # wrap with \b word break markers if defining as keywords - if asKeyword: - patt = r"\b(?:{})\b".format(patt) - - ret = Regex(patt, flags=re_flags).set_name(" | ".join(symbols)) - - if caseless: - # add parse action to return symbols as specified, not in random - # casing as found in input string - symbol_map = {sym.lower(): sym for sym in symbols} - ret.add_parse_action(lambda s, l, t: symbol_map[t[0].lower()]) - - return ret - - except re.error: - warnings.warn( - "Exception creating Regex for one_of, building MatchFirst", stacklevel=2 - ) - - # last resort, just use MatchFirst - return MatchFirst(parseElementClass(sym) for sym in symbols).set_name( - " | ".join(symbols) - ) - - -def dict_of(key: ParserElement, value: ParserElement) -> ParserElement: - """Helper to easily and clearly define a dictionary by specifying - the respective patterns for the key and value. Takes care of - defining the :class:`Dict`, :class:`ZeroOrMore`, and - :class:`Group` tokens in the proper order. The key pattern - can include delimiting markers or punctuation, as long as they are - suppressed, thereby leaving the significant key text. The value - pattern can include named results, so that the :class:`Dict` results - can include named token fields. - - Example:: - - text = "shape: SQUARE posn: upper left color: light blue texture: burlap" - attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)) - print(OneOrMore(attr_expr).parse_string(text).dump()) - - attr_label = label - attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join) - - # similar to Dict, but simpler call format - result = dict_of(attr_label, attr_value).parse_string(text) - print(result.dump()) - print(result['shape']) - print(result.shape) # object attribute access works too - print(result.as_dict()) - - prints:: - - [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - - color: 'light blue' - - posn: 'upper left' - - shape: 'SQUARE' - - texture: 'burlap' - SQUARE - SQUARE - {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'} - """ - return Dict(OneOrMore(Group(key + value))) - - -def original_text_for( - expr: ParserElement, as_string: bool = True, *, asString: bool = True -) -> ParserElement: - """Helper to return the original, untokenized text for a given - expression. Useful to restore the parsed fields of an HTML start - tag into the raw tag text itself, or to revert separate tokens with - intervening whitespace back to the original matching input text. By - default, returns astring containing the original parsed text. - - If the optional ``as_string`` argument is passed as - ``False``, then the return value is - a :class:`ParseResults` containing any results names that - were originally matched, and a single token containing the original - matched text from the input string. So if the expression passed to - :class:`original_text_for` contains expressions with defined - results names, you must set ``as_string`` to ``False`` if you - want to preserve those results name values. - - The ``asString`` pre-PEP8 argument is retained for compatibility, - but will be removed in a future release. 
- - Example:: - - src = "this is test bold text normal text " - for tag in ("b", "i"): - opener, closer = make_html_tags(tag) - patt = original_text_for(opener + SkipTo(closer) + closer) - print(patt.search_string(src)[0]) - - prints:: - - [' bold text '] - ['text'] - """ - asString = asString and as_string - - locMarker = Empty().set_parse_action(lambda s, loc, t: loc) - endlocMarker = locMarker.copy() - endlocMarker.callPreparse = False - matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") - if asString: - extractText = lambda s, l, t: s[t._original_start : t._original_end] - else: - - def extractText(s, l, t): - t[:] = [s[t.pop("_original_start") : t.pop("_original_end")]] - - matchExpr.set_parse_action(extractText) - matchExpr.ignoreExprs = expr.ignoreExprs - matchExpr.suppress_warning(Diagnostics.warn_ungrouped_named_tokens_in_collection) - return matchExpr - - -def ungroup(expr: ParserElement) -> ParserElement: - """Helper to undo pyparsing's default grouping of And expressions, - even if all but one are non-empty. - """ - return TokenConverter(expr).add_parse_action(lambda t: t[0]) - - -def locatedExpr(expr: ParserElement) -> ParserElement: - """ - (DEPRECATED - future code should use the Located class) - Helper to decorate a returned token with its starting and ending - locations in the input string. - - This helper adds the following results names: - - - ``locn_start`` - location where matched expression begins - - ``locn_end`` - location where matched expression ends - - ``value`` - the actual parsed results - - Be careful if the input text contains ```` characters, you - may want to call :class:`ParserElement.parseWithTabs` - - Example:: - - wd = Word(alphas) - for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"): - print(match) - - prints:: - - [[0, 'ljsdf', 5]] - [[8, 'lksdjjf', 15]] - [[18, 'lkkjj', 23]] - """ - locator = Empty().set_parse_action(lambda ss, ll, tt: ll) - return Group( - locator("locn_start") - + expr("value") - + locator.copy().leaveWhitespace()("locn_end") - ) - - -def nested_expr( - opener: Union[str, ParserElement] = "(", - closer: Union[str, ParserElement] = ")", - content: OptionalType[ParserElement] = None, - ignore_expr: ParserElement = quoted_string(), - *, - ignoreExpr: ParserElement = quoted_string(), -) -> ParserElement: - """Helper method for defining nested lists enclosed in opening and - closing delimiters (``"("`` and ``")"`` are the default). - - Parameters: - - ``opener`` - opening character for a nested list - (default= ``"("``); can also be a pyparsing expression - - ``closer`` - closing character for a nested list - (default= ``")"``); can also be a pyparsing expression - - ``content`` - expression for items within the nested lists - (default= ``None``) - - ``ignore_expr`` - expression for ignoring opening and closing delimiters - (default= :class:`quoted_string`) - - ``ignoreExpr`` - this pre-PEP8 argument is retained for compatibility - but will be removed in a future release - - If an expression is not provided for the content argument, the - nested expression will capture all whitespace-delimited content - between delimiters as a list of separate values. - - Use the ``ignore_expr`` argument to define expressions that may - contain opening or closing characters that should not be treated as - opening or closing characters for nesting, such as quoted_string or - a comment expression. Specify multiple expressions using an - :class:`Or` or :class:`MatchFirst`. 
The default is - :class:`quoted_string`, but if no expressions are to be ignored, then - pass ``None`` for this argument. - - Example:: - - data_type = one_of("void int short long char float double") - decl_data_type = Combine(data_type + Opt(Word('*'))) - ident = Word(alphas+'_', alphanums+'_') - number = pyparsing_common.number - arg = Group(decl_data_type + ident) - LPAR, RPAR = map(Suppress, "()") - - code_body = nested_expr('{', '}', ignore_expr=(quoted_string | c_style_comment)) - - c_function = (decl_data_type("type") - + ident("name") - + LPAR + Opt(delimited_list(arg), [])("args") + RPAR - + code_body("body")) - c_function.ignore(c_style_comment) - - source_code = ''' - int is_odd(int x) { - return (x%2); - } - - int dec_to_hex(char hchar) { - if (hchar >= '0' && hchar <= '9') { - return (ord(hchar)-ord('0')); - } else { - return (10+ord(hchar)-ord('A')); - } - } - ''' - for func in c_function.search_string(source_code): - print("%(name)s (%(type)s) args: %(args)s" % func) - - - prints:: - - is_odd (int) args: [['int', 'x']] - dec_to_hex (int) args: [['char', 'hchar']] - """ - if ignoreExpr != ignore_expr: - ignoreExpr = ignore_expr if ignoreExpr == quoted_string() else ignoreExpr - if opener == closer: - raise ValueError("opening and closing strings cannot be the same") - if content is None: - if isinstance(opener, str_type) and isinstance(closer, str_type): - if len(opener) == 1 and len(closer) == 1: - if ignoreExpr is not None: - content = Combine( - OneOrMore( - ~ignoreExpr - + CharsNotIn( - opener + closer + ParserElement.DEFAULT_WHITE_CHARS, - exact=1, - ) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - content = empty.copy() + CharsNotIn( - opener + closer + ParserElement.DEFAULT_WHITE_CHARS - ).set_parse_action(lambda t: t[0].strip()) - else: - if ignoreExpr is not None: - content = Combine( - OneOrMore( - ~ignoreExpr - + ~Literal(opener) - + ~Literal(closer) - + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - content = Combine( - OneOrMore( - ~Literal(opener) - + ~Literal(closer) - + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - raise ValueError( - "opening and closing arguments must be strings if no content expression is given" - ) - ret = Forward() - if ignoreExpr is not None: - ret <<= Group( - Suppress(opener) + ZeroOrMore(ignoreExpr | ret | content) + Suppress(closer) - ) - else: - ret <<= Group(Suppress(opener) + ZeroOrMore(ret | content) + Suppress(closer)) - ret.set_name("nested %s%s expression" % (opener, closer)) - return ret - - -def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")): - """Internal helper to construct opening and closing tag expressions, given a tag name""" - if isinstance(tagStr, str_type): - resname = tagStr - tagStr = Keyword(tagStr, caseless=not xml) - else: - resname = tagStr.name - - tagAttrName = Word(alphas, alphanums + "_-:") - if xml: - tagAttrValue = dbl_quoted_string.copy().set_parse_action(remove_quotes) - openTag = ( - suppress_LT - + tagStr("tag") - + Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue))) - + Opt("/", default=[False])("empty").set_parse_action( - lambda s, l, t: t[0] == "/" - ) - + suppress_GT - ) - else: - tagAttrValue = quoted_string.copy().set_parse_action(remove_quotes) | Word( - printables, exclude_chars=">" - ) - openTag = ( - suppress_LT - + tagStr("tag") - + Dict( - ZeroOrMore( - Group( - tagAttrName.set_parse_action(lambda 
t: t[0].lower()) - + Opt(Suppress("=") + tagAttrValue) - ) - ) - ) - + Opt("/", default=[False])("empty").set_parse_action( - lambda s, l, t: t[0] == "/" - ) - + suppress_GT - ) - closeTag = Combine(Literal("", adjacent=False) - - openTag.set_name("<%s>" % resname) - # add start results name in parse action now that ungrouped names are not reported at two levels - openTag.add_parse_action( - lambda t: t.__setitem__( - "start" + "".join(resname.replace(":", " ").title().split()), t.copy() - ) - ) - closeTag = closeTag( - "end" + "".join(resname.replace(":", " ").title().split()) - ).set_name("" % resname) - openTag.tag = resname - closeTag.tag = resname - openTag.tag_body = SkipTo(closeTag()) - return openTag, closeTag - - -def make_html_tags( - tag_str: Union[str, ParserElement] -) -> Tuple[ParserElement, ParserElement]: - """Helper to construct opening and closing tag expressions for HTML, - given a tag name. Matches tags in either upper or lower case, - attributes with namespaces and with quoted or unquoted values. - - Example:: - - text = 'More info at the pyparsing wiki page' - # make_html_tags returns pyparsing expressions for the opening and - # closing tags as a 2-tuple - a, a_end = make_html_tags("A") - link_expr = a + SkipTo(a_end)("link_text") + a_end - - for link in link_expr.search_string(text): - # attributes in the tag (like "href" shown here) are - # also accessible as named results - print(link.link_text, '->', link.href) - - prints:: - - pyparsing -> https://github.com/pyparsing/pyparsing/wiki - """ - return _makeTags(tag_str, False) - - -def make_xml_tags( - tag_str: Union[str, ParserElement] -) -> Tuple[ParserElement, ParserElement]: - """Helper to construct opening and closing tag expressions for XML, - given a tag name. Matches tags only in the given upper/lower case. - - Example: similar to :class:`make_html_tags` - """ - return _makeTags(tag_str, True) - - -any_open_tag, any_close_tag = make_html_tags( - Word(alphas, alphanums + "_:").set_name("any tag") -) - -_htmlEntityMap = {k.rstrip(";"): v for k, v in html.entities.html5.items()} -common_html_entity = Regex("&(?P" + "|".join(_htmlEntityMap) + ");").set_name( - "common HTML entity" -) - - -def replace_html_entity(t): - """Helper parser action to replace common HTML entities with their special characters""" - return _htmlEntityMap.get(t.entity) - - -class OpAssoc(Enum): - LEFT = 1 - RIGHT = 2 - - -InfixNotationOperatorArgType = Union[ - ParserElement, str, Tuple[Union[ParserElement, str], Union[ParserElement, str]] -] -InfixNotationOperatorSpec = Union[ - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - OptionalType[ParseAction], - ], - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - ], -] - - -def infix_notation( - base_expr: ParserElement, - op_list: List[InfixNotationOperatorSpec], - lpar: Union[str, ParserElement] = Suppress("("), - rpar: Union[str, ParserElement] = Suppress(")"), -) -> ParserElement: - """Helper method for constructing grammars of expressions made up of - operators working in a precedence hierarchy. Operators may be unary - or binary, left- or right-associative. Parse actions can also be - attached to operator expressions. The generated parser will also - recognize the use of parentheses to override operator precedences - (see example below). - - Note: if you define a deep operator list, you may see performance - issues when using infix_notation. See - :class:`ParserElement.enable_packrat` for a mechanism to potentially - improve your parser performance. 
- - Parameters: - - ``base_expr`` - expression representing the most basic operand to - be used in the expression - - ``op_list`` - list of tuples, one for each operator precedence level - in the expression grammar; each tuple is of the form ``(op_expr, - num_operands, right_left_assoc, (optional)parse_action)``, where: - - - ``op_expr`` is the pyparsing expression for the operator; may also - be a string, which will be converted to a Literal; if ``num_operands`` - is 3, ``op_expr`` is a tuple of two expressions, for the two - operators separating the 3 terms - - ``num_operands`` is the number of terms for this operator (must be 1, - 2, or 3) - - ``right_left_assoc`` is the indicator whether the operator is right - or left associative, using the pyparsing-defined constants - ``OpAssoc.RIGHT`` and ``OpAssoc.LEFT``. - - ``parse_action`` is the parse action to be associated with - expressions matching this operator expression (the parse action - tuple member may be omitted); if the parse action is passed - a tuple or list of functions, this is equivalent to calling - ``set_parse_action(*fn)`` - (:class:`ParserElement.set_parse_action`) - - ``lpar`` - expression for matching left-parentheses; if passed as a - str, then will be parsed as Suppress(lpar). If lpar is passed as - an expression (such as ``Literal('(')``), then it will be kept in - the parsed results, and grouped with them. (default= ``Suppress('(')``) - - ``rpar`` - expression for matching right-parentheses; if passed as a - str, then will be parsed as Suppress(rpar). If rpar is passed as - an expression (such as ``Literal(')')``), then it will be kept in - the parsed results, and grouped with them. (default= ``Suppress(')')``) - - Example:: - - # simple example of four-function arithmetic with ints and - # variable names - integer = pyparsing_common.signed_integer - varname = pyparsing_common.identifier - - arith_expr = infix_notation(integer | varname, - [ - ('-', 1, OpAssoc.RIGHT), - (one_of('* /'), 2, OpAssoc.LEFT), - (one_of('+ -'), 2, OpAssoc.LEFT), - ]) - - arith_expr.run_tests(''' - 5+3*6 - (5+3)*6 - -2--11 - ''', full_dump=False) - - prints:: - - 5+3*6 - [[5, '+', [3, '*', 6]]] - - (5+3)*6 - [[[5, '+', 3], '*', 6]] - - -2--11 - [[['-', 2], '-', ['-', 11]]] - """ - # captive version of FollowedBy that does not do parse actions or capture results names - class _FB(FollowedBy): - def parseImpl(self, instring, loc, doActions=True): - self.expr.try_parse(instring, loc) - return loc, [] - - _FB.__name__ = "FollowedBy>" - - ret = Forward() - if isinstance(lpar, str): - lpar = Suppress(lpar) - if isinstance(rpar, str): - rpar = Suppress(rpar) - - # if lpar and rpar are not suppressed, wrap in group - if not (isinstance(rpar, Suppress) and isinstance(rpar, Suppress)): - lastExpr = base_expr | Group(lpar + ret + rpar) - else: - lastExpr = base_expr | (lpar + ret + rpar) - - for i, operDef in enumerate(op_list): - opExpr, arity, rightLeftAssoc, pa = (operDef + (None,))[:4] - if isinstance(opExpr, str_type): - opExpr = ParserElement._literalStringClass(opExpr) - if arity == 3: - if not isinstance(opExpr, (tuple, list)) or len(opExpr) != 2: - raise ValueError( - "if numterms=3, opExpr must be a tuple or list of two expressions" - ) - opExpr1, opExpr2 = opExpr - term_name = "{}{} term".format(opExpr1, opExpr2) - else: - term_name = "{} term".format(opExpr) - - if not 1 <= arity <= 3: - raise ValueError("operator must be unary (1), binary (2), or ternary (3)") - - if rightLeftAssoc not in (OpAssoc.LEFT, OpAssoc.RIGHT): - raise 
ValueError("operator must indicate right or left associativity") - - thisExpr = Forward().set_name(term_name) - if rightLeftAssoc is OpAssoc.LEFT: - if arity == 1: - matchExpr = _FB(lastExpr + opExpr) + Group(lastExpr + opExpr[1, ...]) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + lastExpr) + Group( - lastExpr + (opExpr + lastExpr)[1, ...] - ) - else: - matchExpr = _FB(lastExpr + lastExpr) + Group(lastExpr[2, ...]) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr - ) + Group(lastExpr + OneOrMore(opExpr1 + lastExpr + opExpr2 + lastExpr)) - elif rightLeftAssoc is OpAssoc.RIGHT: - if arity == 1: - # try to avoid LR with this extra test - if not isinstance(opExpr, Opt): - opExpr = Opt(opExpr) - matchExpr = _FB(opExpr.expr + thisExpr) + Group(opExpr + thisExpr) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + thisExpr) + Group( - lastExpr + (opExpr + thisExpr)[1, ...] - ) - else: - matchExpr = _FB(lastExpr + thisExpr) + Group( - lastExpr + thisExpr[1, ...] - ) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr - ) + Group(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) - if pa: - if isinstance(pa, (tuple, list)): - matchExpr.set_parse_action(*pa) - else: - matchExpr.set_parse_action(pa) - thisExpr <<= (matchExpr | lastExpr).setName(term_name) - lastExpr = thisExpr - ret <<= lastExpr - return ret - - -def indentedBlock(blockStatementExpr, indentStack, indent=True, backup_stacks=[]): - """ - (DEPRECATED - use IndentedBlock class instead) - Helper method for defining space-delimited indentation blocks, - such as those used to define block statements in Python source code. - - Parameters: - - - ``blockStatementExpr`` - expression defining syntax of statement that - is repeated within the indented block - - ``indentStack`` - list created by caller to manage indentation stack - (multiple ``statementWithIndentedBlock`` expressions within a single - grammar should share a common ``indentStack``) - - ``indent`` - boolean indicating whether block must be indented beyond - the current level; set to ``False`` for block of left-most statements - (default= ``True``) - - A valid block must contain at least one ``blockStatement``. - - (Note that indentedBlock uses internal parse actions which make it - incompatible with packrat parsing.) 
- - Example:: - - data = ''' - def A(z): - A1 - B = 100 - G = A2 - A2 - A3 - B - def BB(a,b,c): - BB1 - def BBA(): - bba1 - bba2 - bba3 - C - D - def spam(x,y): - def eggs(z): - pass - ''' - - - indentStack = [1] - stmt = Forward() - - identifier = Word(alphas, alphanums) - funcDecl = ("def" + identifier + Group("(" + Opt(delimitedList(identifier)) + ")") + ":") - func_body = indentedBlock(stmt, indentStack) - funcDef = Group(funcDecl + func_body) - - rvalue = Forward() - funcCall = Group(identifier + "(" + Opt(delimitedList(rvalue)) + ")") - rvalue << (funcCall | identifier | Word(nums)) - assignment = Group(identifier + "=" + rvalue) - stmt << (funcDef | assignment | identifier) - - module_body = OneOrMore(stmt) - - parseTree = module_body.parseString(data) - parseTree.pprint() - - prints:: - - [['def', - 'A', - ['(', 'z', ')'], - ':', - [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]], - 'B', - ['def', - 'BB', - ['(', 'a', 'b', 'c', ')'], - ':', - [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]], - 'C', - 'D', - ['def', - 'spam', - ['(', 'x', 'y', ')'], - ':', - [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]] - """ - backup_stacks.append(indentStack[:]) - - def reset_stack(): - indentStack[:] = backup_stacks[-1] - - def checkPeerIndent(s, l, t): - if l >= len(s): - return - curCol = col(l, s) - if curCol != indentStack[-1]: - if curCol > indentStack[-1]: - raise ParseException(s, l, "illegal nesting") - raise ParseException(s, l, "not a peer entry") - - def checkSubIndent(s, l, t): - curCol = col(l, s) - if curCol > indentStack[-1]: - indentStack.append(curCol) - else: - raise ParseException(s, l, "not a subentry") - - def checkUnindent(s, l, t): - if l >= len(s): - return - curCol = col(l, s) - if not (indentStack and curCol in indentStack): - raise ParseException(s, l, "not an unindent") - if curCol < indentStack[-1]: - indentStack.pop() - - NL = OneOrMore(LineEnd().set_whitespace_chars("\t ").suppress()) - INDENT = (Empty() + Empty().set_parse_action(checkSubIndent)).set_name("INDENT") - PEER = Empty().set_parse_action(checkPeerIndent).set_name("") - UNDENT = Empty().set_parse_action(checkUnindent).set_name("UNINDENT") - if indent: - smExpr = Group( - Opt(NL) - + INDENT - + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL)) - + UNDENT - ) - else: - smExpr = Group( - Opt(NL) - + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL)) - + Opt(UNDENT) - ) - - # add a parse action to remove backup_stack from list of backups - smExpr.add_parse_action( - lambda: backup_stacks.pop(-1) and None if backup_stacks else None - ) - smExpr.set_fail_action(lambda a, b, c, d: reset_stack()) - blockStatementExpr.ignore(_bslash + LineEnd()) - return smExpr.set_name("indented block") - - -# it's easy to get these comment structures wrong - they're very common, so may as well make them available -c_style_comment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/").set_name( - "C style comment" -) -"Comment of the form ``/* ... */``" - -html_comment = Regex(r"").set_name("HTML comment") -"Comment of the form ````" - -rest_of_line = Regex(r".*").leave_whitespace().set_name("rest of line") -dbl_slash_comment = Regex(r"//(?:\\\n|[^\n])*").set_name("// comment") -"Comment of the form ``// ... 
(to end of line)``" - -cpp_style_comment = Combine( - Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/" | dbl_slash_comment -).set_name("C++ style comment") -"Comment of either form :class:`c_style_comment` or :class:`dbl_slash_comment`" - -java_style_comment = cpp_style_comment -"Same as :class:`cpp_style_comment`" - -python_style_comment = Regex(r"#.*").set_name("Python style comment") -"Comment of the form ``# ... (to end of line)``" - - -# build list of built-in expressions, for future reference if a global default value -# gets updated -_builtin_exprs = [v for v in vars().values() if isinstance(v, ParserElement)] - - -# pre-PEP8 compatible names -delimitedList = delimited_list -countedArray = counted_array -matchPreviousLiteral = match_previous_literal -matchPreviousExpr = match_previous_expr -oneOf = one_of -dictOf = dict_of -originalTextFor = original_text_for -nestedExpr = nested_expr -makeHTMLTags = make_html_tags -makeXMLTags = make_xml_tags -anyOpenTag, anyCloseTag = any_open_tag, any_close_tag -commonHTMLEntity = common_html_entity -replaceHTMLEntity = replace_html_entity -opAssoc = OpAssoc -infixNotation = infix_notation -cStyleComment = c_style_comment -htmlComment = html_comment -restOfLine = rest_of_line -dblSlashComment = dbl_slash_comment -cppStyleComment = cpp_style_comment -javaStyleComment = java_style_comment -pythonStyleComment = python_style_comment diff --git a/spaces/tomofi/MMOCR/mmocr/core/evaluation/kie_metric.py b/spaces/tomofi/MMOCR/mmocr/core/evaluation/kie_metric.py deleted file mode 100644 index 2ba695b5bb778ca792d4aabb7b3f9ed62041e2ee..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/core/evaluation/kie_metric.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - - -def compute_f1_score(preds, gts, ignores=[]): - """Compute the F1-score of prediction. - - Args: - preds (Tensor): The predicted probability NxC map - with N and C being the sample number and class - number respectively. - gts (Tensor): The ground truth vector of size N. - ignores (list): The index set of classes that are ignored when - reporting results. - Note: all samples are participated in computing. - - Returns: - The numpy list of f1-scores of valid classes. - """ - C = preds.size(1) - classes = torch.LongTensor(sorted(set(range(C)) - set(ignores))) - hist = torch.bincount( - gts * C + preds.argmax(1), minlength=C**2).view(C, C).float() - diag = torch.diag(hist) - recalls = diag / hist.sum(1).clamp(min=1) - precisions = diag / hist.sum(0).clamp(min=1) - f1 = 2 * recalls * precisions / (recalls + precisions).clamp(min=1e-8) - return f1[classes].cpu().numpy() diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/utils/README.md b/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/utils/README.md deleted file mode 100644 index 9765b24a730b77556104187ac3ef5439ab0859fd..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/utils/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# Utility functions - -This folder contain utility functions that are not used in the -core library, but are useful for building models or training -code using the config system. 
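The mmocr kie_metric.py removed a few lines above defines compute_f1_score, which arg-maxes an N×C score map, builds a C×C confusion matrix, and returns per-class F1 for every class not listed in ignores. A minimal usage sketch follows; the tensor values are invented for illustration, and the import path is assumed from the deleted file's location rather than taken from the source.

```python
import torch
# Path assumed from the deleted file above (mmocr/core/evaluation/kie_metric.py).
from mmocr.core.evaluation.kie_metric import compute_f1_score

# Invented example: 4 samples scored over 3 classes.
preds = torch.tensor([[0.9, 0.05, 0.05],
                      [0.1, 0.80, 0.10],
                      [0.2, 0.20, 0.60],
                      [0.7, 0.20, 0.10]])   # (N=4, C=3) prediction scores
gts = torch.tensor([0, 1, 2, 1])            # ground-truth class index per sample

# Report F1 only for classes 1 and 2, treating class 0 as ignored (e.g. a background class).
f1_per_class = compute_f1_score(preds, gts, ignores=[0])
print(f1_per_class)  # numpy array with one F1 value per non-ignored class
```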
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/anchor/utils.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/anchor/utils.py deleted file mode 100644 index ab9b53f37f7be1f52fe63c5e53df64ac1303b9e0..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/anchor/utils.py +++ /dev/null @@ -1,71 +0,0 @@ -import torch - - -def images_to_levels(target, num_levels): - """Convert targets by image to targets by feature level. - - [target_img0, target_img1] -> [target_level0, target_level1, ...] - """ - target = torch.stack(target, 0) - level_targets = [] - start = 0 - for n in num_levels: - end = start + n - # level_targets.append(target[:, start:end].squeeze(0)) - level_targets.append(target[:, start:end]) - start = end - return level_targets - - -def anchor_inside_flags(flat_anchors, - valid_flags, - img_shape, - allowed_border=0): - """Check whether the anchors are inside the border. - - Args: - flat_anchors (torch.Tensor): Flatten anchors, shape (n, 4). - valid_flags (torch.Tensor): An existing valid flags of anchors. - img_shape (tuple(int)): Shape of current image. - allowed_border (int, optional): The border to allow the valid anchor. - Defaults to 0. - - Returns: - torch.Tensor: Flags indicating whether the anchors are inside a \ - valid range. - """ - img_h, img_w = img_shape[:2] - if allowed_border >= 0: - inside_flags = valid_flags & \ - (flat_anchors[:, 0] >= -allowed_border) & \ - (flat_anchors[:, 1] >= -allowed_border) & \ - (flat_anchors[:, 2] < img_w + allowed_border) & \ - (flat_anchors[:, 3] < img_h + allowed_border) - else: - inside_flags = valid_flags - return inside_flags - - -def calc_region(bbox, ratio, featmap_size=None): - """Calculate a proportional bbox region. - - The bbox center are fixed and the new h' and w' is h * ratio and w * ratio. - - Args: - bbox (Tensor): Bboxes to calculate regions, shape (n, 4). - ratio (float): Ratio of the output region. - featmap_size (tuple): Feature map size used for clipping the boundary. 
- - Returns: - tuple: x1, y1, x2, y2 - """ - x1 = torch.round((1 - ratio) * bbox[0] + ratio * bbox[2]).long() - y1 = torch.round((1 - ratio) * bbox[1] + ratio * bbox[3]).long() - x2 = torch.round(ratio * bbox[0] + (1 - ratio) * bbox[2]).long() - y2 = torch.round(ratio * bbox[1] + (1 - ratio) * bbox[3]).long() - if featmap_size is not None: - x1 = x1.clamp(min=0, max=featmap_size[1]) - y1 = y1.clamp(min=0, max=featmap_size[0]) - x2 = x2.clamp(min=0, max=featmap_size[1]) - y2 = y2.clamp(min=0, max=featmap_size[0]) - return (x1, y1, x2, y2) diff --git a/spaces/tornadoslims/instruct-pix2pix/prompt_app.py b/spaces/tornadoslims/instruct-pix2pix/prompt_app.py deleted file mode 100644 index 4c796d71b34b4a39b7c1ddfc930ca37c9b8c4aca..0000000000000000000000000000000000000000 --- a/spaces/tornadoslims/instruct-pix2pix/prompt_app.py +++ /dev/null @@ -1,55 +0,0 @@ -from __future__ import annotations - -from argparse import ArgumentParser - -import datasets -import gradio as gr -import numpy as np -import openai - -from dataset_creation.generate_txt_dataset import generate - - -def main(openai_model: str): - dataset = datasets.load_dataset("ChristophSchuhmann/improved_aesthetics_6.5plus", split="train") - captions = dataset[np.random.permutation(len(dataset))]["TEXT"] - index = 0 - - def click_random(): - nonlocal index - output = captions[index] - index = (index + 1) % len(captions) - return output - - def click_generate(input: str): - if input == "": - raise gr.Error("Input caption is missing!") - edit_output = generate(openai_model, input) - if edit_output is None: - return "Failed :(", "Failed :(" - return edit_output - - with gr.Blocks(css="footer {visibility: hidden}") as demo: - txt_input = gr.Textbox(lines=3, label="Input Caption", interactive=True, placeholder="Type image caption here...") # fmt: skip - txt_edit = gr.Textbox(lines=1, label="GPT-3 Instruction", interactive=False) - txt_output = gr.Textbox(lines=3, label="GPT3 Edited Caption", interactive=False) - - with gr.Row(): - clear_btn = gr.Button("Clear") - random_btn = gr.Button("Random Input") - generate_btn = gr.Button("Generate Instruction + Edited Caption") - - clear_btn.click(fn=lambda: ("", "", ""), inputs=[], outputs=[txt_input, txt_edit, txt_output]) - random_btn.click(fn=click_random, inputs=[], outputs=[txt_input]) - generate_btn.click(fn=click_generate, inputs=[txt_input], outputs=[txt_edit, txt_output]) - - demo.launch(share=True) - - -if __name__ == "__main__": - parser = ArgumentParser() - parser.add_argument("--openai-api-key", required=True, type=str) - parser.add_argument("--openai-model", required=True, type=str) - args = parser.parse_args() - openai.api_key = args.openai_api_key - main(args.openai_model) diff --git a/spaces/tracinginsights/F1-analysis/pages/Positions_Change.py b/spaces/tracinginsights/F1-analysis/pages/Positions_Change.py deleted file mode 100644 index 93177d32933e2fc6e1dfda00ad9a1303992e3cc1..0000000000000000000000000000000000000000 --- a/spaces/tracinginsights/F1-analysis/pages/Positions_Change.py +++ /dev/null @@ -1,13 +0,0 @@ -import streamlit as st -from repo_directory import Postions_Change - -from repo_directory import button - - -GRANDPRIX = st.text_input("Grand Prix Name, eg.Monaco") - -START_URL = st.text_input(label="Starting Grid URL from Formula1.com", value="https://www.formula1.com/en/results.html/2022/races/1136/mexico/starting-grid.html") - -FINISH_URL = st.text_input(label="Race Result URL from Formula1.com", 
value="https://www.formula1.com/en/results.html/2022/races/1136/mexico/race-result.html") - -Postions_Change.plot(START_URL, FINISH_URL, GRANDPRIX) \ No newline at end of file diff --git a/spaces/trttung1610/musicgen/audiocraft/losses/stftloss.py b/spaces/trttung1610/musicgen/audiocraft/losses/stftloss.py deleted file mode 100644 index 5ad4b7d3324ee5b0e6064b6f71cf8caf0fdc3be7..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/losses/stftloss.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# Adapted from MIT code under the original license -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) -import typing as tp - -import torch -from torch import nn -from torch.nn import functional as F - - -# TODO: Replace with torchaudio.STFT? -def _stft(x: torch.Tensor, fft_size: int, hop_length: int, win_length: int, - window: tp.Optional[torch.Tensor], normalized: bool) -> torch.Tensor: - """Perform STFT and convert to magnitude spectrogram. - - Args: - x: Input signal tensor (B, C, T). - fft_size (int): FFT size. - hop_length (int): Hop size. - win_length (int): Window length. - window (torch.Tensor or None): Window function type. - normalized (bool): Whether to normalize the STFT or not. - - Returns: - torch.Tensor: Magnitude spectrogram (B, C, #frames, fft_size // 2 + 1). - """ - B, C, T = x.shape - x_stft = torch.stft( - x.view(-1, T), fft_size, hop_length, win_length, window, - normalized=normalized, return_complex=True, - ) - x_stft = x_stft.view(B, C, *x_stft.shape[1:]) - real = x_stft.real - imag = x_stft.imag - - # NOTE(kan-bayashi): clamp is needed to avoid nan or inf - return torch.sqrt(torch.clamp(real ** 2 + imag ** 2, min=1e-7)).transpose(2, 1) - - -class SpectralConvergenceLoss(nn.Module): - """Spectral convergence loss. - """ - def __init__(self, epsilon: float = torch.finfo(torch.float32).eps): - super().__init__() - self.epsilon = epsilon - - def forward(self, x_mag: torch.Tensor, y_mag: torch.Tensor): - """Calculate forward propagation. - - Args: - x_mag: Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag: Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). - Returns: - torch.Tensor: Spectral convergence loss value. - """ - return torch.norm(y_mag - x_mag, p="fro") / (torch.norm(y_mag, p="fro") + self.epsilon) - - -class LogSTFTMagnitudeLoss(nn.Module): - """Log STFT magnitude loss. - - Args: - epsilon (float): Epsilon value for numerical stability. - """ - def __init__(self, epsilon: float = torch.finfo(torch.float32).eps): - super().__init__() - self.epsilon = epsilon - - def forward(self, x_mag: torch.Tensor, y_mag: torch.Tensor): - """Calculate forward propagation. - - Args: - x_mag (torch.Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag (torch.Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). - Returns: - torch.Tensor: Log STFT magnitude loss value. - """ - return F.l1_loss(torch.log(self.epsilon + y_mag), torch.log(self.epsilon + x_mag)) - - -class STFTLosses(nn.Module): - """STFT losses. - - Args: - n_fft (int): Size of FFT. - hop_length (int): Hop length. - win_length (int): Window length. - window (str): Window function type. - normalized (bool): Whether to use normalized STFT or not. 
- epsilon (float): Epsilon for numerical stability. - """ - def __init__(self, n_fft: int = 1024, hop_length: int = 120, win_length: int = 600, - window: str = "hann_window", normalized: bool = False, - epsilon: float = torch.finfo(torch.float32).eps): - super().__init__() - self.n_fft = n_fft - self.hop_length = hop_length - self.win_length = win_length - self.normalized = normalized - self.register_buffer("window", getattr(torch, window)(win_length)) - self.spectral_convergenge_loss = SpectralConvergenceLoss(epsilon) - self.log_stft_magnitude_loss = LogSTFTMagnitudeLoss(epsilon) - - def forward(self, x: torch.Tensor, y: torch.Tensor) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Calculate forward propagation. - - Args: - x (torch.Tensor): Predicted signal (B, T). - y (torch.Tensor): Groundtruth signal (B, T). - Returns: - torch.Tensor: Spectral convergence loss value. - torch.Tensor: Log STFT magnitude loss value. - """ - x_mag = _stft(x, self.n_fft, self.hop_length, - self.win_length, self.window, self.normalized) # type: ignore - y_mag = _stft(y, self.n_fft, self.hop_length, - self.win_length, self.window, self.normalized) # type: ignore - sc_loss = self.spectral_convergenge_loss(x_mag, y_mag) - mag_loss = self.log_stft_magnitude_loss(x_mag, y_mag) - - return sc_loss, mag_loss - - -class STFTLoss(nn.Module): - """Single Resolution STFT loss. - - Args: - n_fft (int): Nb of FFT. - hop_length (int): Hop length. - win_length (int): Window length. - window (str): Window function type. - normalized (bool): Whether to use normalized STFT or not. - epsilon (float): Epsilon for numerical stability. - factor_sc (float): Coefficient for the spectral loss. - factor_mag (float): Coefficient for the magnitude loss. - """ - def __init__(self, n_fft: int = 1024, hop_length: int = 120, win_length: int = 600, - window: str = "hann_window", normalized: bool = False, - factor_sc: float = 0.1, factor_mag: float = 0.1, - epsilon: float = torch.finfo(torch.float32).eps): - super().__init__() - self.loss = STFTLosses(n_fft, hop_length, win_length, window, normalized, epsilon) - self.factor_sc = factor_sc - self.factor_mag = factor_mag - - def forward(self, x: torch.Tensor, y: torch.Tensor) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Calculate forward propagation. - - Args: - x (torch.Tensor): Predicted signal (B, T). - y (torch.Tensor): Groundtruth signal (B, T). - Returns: - torch.Tensor: Single resolution STFT loss. - """ - sc_loss, mag_loss = self.loss(x, y) - return self.factor_sc * sc_loss + self.factor_mag * mag_loss - - -class MRSTFTLoss(nn.Module): - """Multi resolution STFT loss. - - Args: - n_ffts (Sequence[int]): Sequence of FFT sizes. - hop_lengths (Sequence[int]): Sequence of hop sizes. - win_lengths (Sequence[int]): Sequence of window lengths. - window (str): Window function type. - factor_sc (float): Coefficient for the spectral loss. - factor_mag (float): Coefficient for the magnitude loss. - normalized (bool): Whether to use normalized STFT or not. - epsilon (float): Epsilon for numerical stability. 
- """ - def __init__(self, n_ffts: tp.Sequence[int] = [1024, 2048, 512], hop_lengths: tp.Sequence[int] = [120, 240, 50], - win_lengths: tp.Sequence[int] = [600, 1200, 240], window: str = "hann_window", - factor_sc: float = 0.1, factor_mag: float = 0.1, - normalized: bool = False, epsilon: float = torch.finfo(torch.float32).eps): - super().__init__() - assert len(n_ffts) == len(hop_lengths) == len(win_lengths) - self.stft_losses = torch.nn.ModuleList() - for fs, ss, wl in zip(n_ffts, hop_lengths, win_lengths): - self.stft_losses += [STFTLosses(fs, ss, wl, window, normalized, epsilon)] - self.factor_sc = factor_sc - self.factor_mag = factor_mag - - def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor: - """Calculate forward propagation. - - Args: - x (torch.Tensor): Predicted signal (B, T). - y (torch.Tensor): Groundtruth signal (B, T). - Returns: - torch.Tensor: Multi resolution STFT loss. - """ - sc_loss = torch.Tensor([0.0]) - mag_loss = torch.Tensor([0.0]) - for f in self.stft_losses: - sc_l, mag_l = f(x, y) - sc_loss += sc_l - mag_loss += mag_l - sc_loss /= len(self.stft_losses) - mag_loss /= len(self.stft_losses) - - return self.factor_sc * sc_loss + self.factor_mag * mag_loss diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Detective Byomkesh Bakshy! 1080p movie torrent Experience the suspense and intrigue of Kolkata in the 1940s.md b/spaces/usbethFlerru/sovits-modelsV2/example/Detective Byomkesh Bakshy! 1080p movie torrent Experience the suspense and intrigue of Kolkata in the 1940s.md deleted file mode 100644 index c4b60060673c29c5e494b0f06ab5f054b714df21..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Detective Byomkesh Bakshy! 1080p movie torrent Experience the suspense and intrigue of Kolkata in the 1940s.md +++ /dev/null @@ -1,6 +0,0 @@ -

                      Detective Byomkesh Bakshy! 1080p movie torrent


                      Download Zip ……… https://urlcod.com/2uyWZl




                      diff --git a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/animation_key_frames.py b/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/animation_key_frames.py deleted file mode 100644 index 4448b846c038641208cf3e90f02171e32953b2f9..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/animation_key_frames.py +++ /dev/null @@ -1,106 +0,0 @@ -import re -import numpy as np -import numexpr -import pandas as pd -from .prompt import check_is_number - -class DeformAnimKeys(): - def __init__(self, anim_args): - self.angle_series = get_inbetweens(parse_key_frames(anim_args.angle), anim_args.max_frames) - self.zoom_series = get_inbetweens(parse_key_frames(anim_args.zoom), anim_args.max_frames) - self.translation_x_series = get_inbetweens(parse_key_frames(anim_args.translation_x), anim_args.max_frames) - self.translation_y_series = get_inbetweens(parse_key_frames(anim_args.translation_y), anim_args.max_frames) - self.translation_z_series = get_inbetweens(parse_key_frames(anim_args.translation_z), anim_args.max_frames) - self.rotation_3d_x_series = get_inbetweens(parse_key_frames(anim_args.rotation_3d_x), anim_args.max_frames) - self.rotation_3d_y_series = get_inbetweens(parse_key_frames(anim_args.rotation_3d_y), anim_args.max_frames) - self.rotation_3d_z_series = get_inbetweens(parse_key_frames(anim_args.rotation_3d_z), anim_args.max_frames) - self.perspective_flip_theta_series = get_inbetweens(parse_key_frames(anim_args.perspective_flip_theta), anim_args.max_frames) - self.perspective_flip_phi_series = get_inbetweens(parse_key_frames(anim_args.perspective_flip_phi), anim_args.max_frames) - self.perspective_flip_gamma_series = get_inbetweens(parse_key_frames(anim_args.perspective_flip_gamma), anim_args.max_frames) - self.perspective_flip_fv_series = get_inbetweens(parse_key_frames(anim_args.perspective_flip_fv), anim_args.max_frames) - self.noise_schedule_series = get_inbetweens(parse_key_frames(anim_args.noise_schedule), anim_args.max_frames) - self.strength_schedule_series = get_inbetweens(parse_key_frames(anim_args.strength_schedule), anim_args.max_frames) - self.contrast_schedule_series = get_inbetweens(parse_key_frames(anim_args.contrast_schedule), anim_args.max_frames) - self.cfg_scale_schedule_series = get_inbetweens(parse_key_frames(anim_args.cfg_scale_schedule), anim_args.max_frames) - self.pix2pix_img_cfg_scale_series = get_inbetweens(parse_key_frames(anim_args.pix2pix_img_cfg_scale_schedule), anim_args.max_frames) - self.subseed_schedule_series = get_inbetweens(parse_key_frames(anim_args.subseed_schedule), anim_args.max_frames) - self.subseed_strength_schedule_series = get_inbetweens(parse_key_frames(anim_args.subseed_strength_schedule), anim_args.max_frames) - self.checkpoint_schedule_series = get_inbetweens(parse_key_frames(anim_args.checkpoint_schedule), anim_args.max_frames, is_single_string = True) - self.steps_schedule_series = get_inbetweens(parse_key_frames(anim_args.steps_schedule), anim_args.max_frames) - self.seed_schedule_series = get_inbetweens(parse_key_frames(anim_args.seed_schedule), anim_args.max_frames) - self.sampler_schedule_series = get_inbetweens(parse_key_frames(anim_args.sampler_schedule), anim_args.max_frames, is_single_string = True) - self.clipskip_schedule_series = get_inbetweens(parse_key_frames(anim_args.clipskip_schedule), anim_args.max_frames) - self.mask_schedule_series = 
get_inbetweens(parse_key_frames(anim_args.mask_schedule), anim_args.max_frames, is_single_string = True) - self.noise_mask_schedule_series = get_inbetweens(parse_key_frames(anim_args.noise_mask_schedule), anim_args.max_frames, is_single_string = True) - self.kernel_schedule_series = get_inbetweens(parse_key_frames(anim_args.kernel_schedule), anim_args.max_frames) - self.sigma_schedule_series = get_inbetweens(parse_key_frames(anim_args.sigma_schedule), anim_args.max_frames) - self.amount_schedule_series = get_inbetweens(parse_key_frames(anim_args.amount_schedule), anim_args.max_frames) - self.threshold_schedule_series = get_inbetweens(parse_key_frames(anim_args.threshold_schedule), anim_args.max_frames) - self.fov_series = get_inbetweens(parse_key_frames(anim_args.fov_schedule), anim_args.max_frames) - self.near_series = get_inbetweens(parse_key_frames(anim_args.near_schedule), anim_args.max_frames) - self.far_series = get_inbetweens(parse_key_frames(anim_args.far_schedule), anim_args.max_frames) - self.hybrid_comp_alpha_schedule_series = get_inbetweens(parse_key_frames(anim_args.hybrid_comp_alpha_schedule), anim_args.max_frames) - self.hybrid_comp_mask_blend_alpha_schedule_series = get_inbetweens(parse_key_frames(anim_args.hybrid_comp_mask_blend_alpha_schedule), anim_args.max_frames) - self.hybrid_comp_mask_contrast_schedule_series = get_inbetweens(parse_key_frames(anim_args.hybrid_comp_mask_contrast_schedule), anim_args.max_frames) - self.hybrid_comp_mask_auto_contrast_cutoff_high_schedule_series = get_inbetweens(parse_key_frames(anim_args.hybrid_comp_mask_auto_contrast_cutoff_high_schedule), anim_args.max_frames) - self.hybrid_comp_mask_auto_contrast_cutoff_low_schedule_series = get_inbetweens(parse_key_frames(anim_args.hybrid_comp_mask_auto_contrast_cutoff_low_schedule), anim_args.max_frames) - -class LooperAnimKeys(): - def __init__(self, loop_args, anim_args): - self.use_looper = loop_args.use_looper - self.imagesToKeyframe = loop_args.init_images - self.image_strength_schedule_series = get_inbetweens(parse_key_frames(loop_args.image_strength_schedule), anim_args.max_frames) - self.blendFactorMax_series = get_inbetweens(parse_key_frames(loop_args.blendFactorMax), anim_args.max_frames) - self.blendFactorSlope_series = get_inbetweens(parse_key_frames(loop_args.blendFactorSlope), anim_args.max_frames) - self.tweening_frames_schedule_series = get_inbetweens(parse_key_frames(loop_args.tweening_frames_schedule), anim_args.max_frames) - self.color_correction_factor_series = get_inbetweens(parse_key_frames(loop_args.color_correction_factor), anim_args.max_frames) - -def get_inbetweens(key_frames, max_frames, integer=False, interp_method='Linear', is_single_string = False): - key_frame_series = pd.Series([np.nan for a in range(max_frames)]) - for i in range(0, max_frames): - if i in key_frames: - value = key_frames[i] - value_is_number = check_is_number(value) - # if it's only a number, leave the rest for the default interpolation - if value_is_number: - t = i - key_frame_series[i] = value - if not value_is_number: - t = i - if is_single_string: - if value.find("'") > -1: - value = value.replace("'","") - if value.find('"') > -1: - value = value.replace('"',"") - key_frame_series[i] = numexpr.evaluate(value) if not is_single_string else value # workaround for values formatted like 0:("I am test") //used for sampler schedules - key_frame_series = key_frame_series.astype(float) if not is_single_string else key_frame_series # as string - - if interp_method == 'Cubic' and len(key_frames.items()) 
<= 3: - interp_method = 'Quadratic' - if interp_method == 'Quadratic' and len(key_frames.items()) <= 2: - interp_method = 'Linear' - - key_frame_series[0] = key_frame_series[key_frame_series.first_valid_index()] - key_frame_series[max_frames-1] = key_frame_series[key_frame_series.last_valid_index()] - key_frame_series = key_frame_series.interpolate(method=interp_method.lower(), limit_direction='both') - if integer: - return key_frame_series.astype(int) - return key_frame_series - -def parse_key_frames(string, prompt_parser=None): - # because math functions (i.e. sin(t)) can utilize brackets - # it extracts the value in form of some stuff - # which has previously been enclosed with brackets and - # with a comma or end of line existing after the closing one - pattern = r'((?P[0-9]+):[\s]*\((?P[\S\s]*?)\)([,][\s]?|[\s]?$))' - frames = dict() - for match_object in re.finditer(pattern, string): - frame = int(match_object.groupdict()['frame']) - param = match_object.groupdict()['param'] - if prompt_parser: - frames[frame] = prompt_parser(param) - else: - frames[frame] = param - if frames == {} and len(string) != 0: - raise RuntimeError('Key Frame string not correctly formatted') - return frames \ No newline at end of file diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/utils/callbacks/mlflow.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/utils/callbacks/mlflow.md deleted file mode 100644 index 9d69d0fd7a200ab1fb8f50121b6d1a3f05e26324..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/utils/callbacks/mlflow.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -description: Track model performance and metrics with MLflow in YOLOv5. Use callbacks like on_pretrain_routine_end or on_train_end to log information. -keywords: Ultralytics, YOLO, Utils, MLflow, callbacks, on_pretrain_routine_end, on_train_end, Tracking, Model Management, training ---- - -## on_pretrain_routine_end ---- -### ::: ultralytics.yolo.utils.callbacks.mlflow.on_pretrain_routine_end -

                      - -## on_fit_epoch_end ---- -### ::: ultralytics.yolo.utils.callbacks.mlflow.on_fit_epoch_end -

                      - -## on_train_end ---- -### ::: ultralytics.yolo.utils.callbacks.mlflow.on_train_end -

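The mlflow.md stub deleted above only names the callback hooks (on_pretrain_routine_end, on_fit_epoch_end, on_train_end) without showing what they log. As a rough illustration only, and not the actual ultralytics implementation, a minimal sketch of such callbacks could look like the following; it assumes a hypothetical `trainer` object exposing `args` (hyperparameters), `metrics` (a dict of scalars), `epoch`, and a `best` checkpoint path:

```python
import mlflow


def on_pretrain_routine_end(trainer):
    # Open a run and record the hyperparameters once, before training starts.
    mlflow.start_run(run_name=str(getattr(trainer.args, "name", "train")))
    mlflow.log_params(vars(trainer.args))


def on_fit_epoch_end(trainer):
    # Log the per-epoch metrics, keyed by the current epoch.
    mlflow.log_metrics({k: float(v) for k, v in trainer.metrics.items()},
                       step=trainer.epoch)


def on_train_end(trainer):
    # Store the best weights as an artifact and close the run.
    mlflow.log_artifact(str(trainer.best))
    mlflow.end_run()
```

A real integration would typically also guard the mlflow import and allow the logging to be disabled; the sketch skips that for brevity.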
                      diff --git a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/util/vl_utils.py b/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/util/vl_utils.py deleted file mode 100644 index c91bb02f584398f08a28e6b7719e2b99f6e28616..0000000000000000000000000000000000000000 --- a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/util/vl_utils.py +++ /dev/null @@ -1,100 +0,0 @@ -import os -import random -from typing import List - -import torch - - -def create_positive_map_from_span(tokenized, token_span, max_text_len=256): - """construct a map such that positive_map[i,j] = True iff box i is associated to token j - Input: - - tokenized: - - input_ids: Tensor[1, ntokens] - - attention_mask: Tensor[1, ntokens] - - token_span: list with length num_boxes. - - each item: [start_idx, end_idx] - """ - positive_map = torch.zeros((len(token_span), max_text_len), dtype=torch.float) - for j, tok_list in enumerate(token_span): - for (beg, end) in tok_list: - beg_pos = tokenized.char_to_token(beg) - end_pos = tokenized.char_to_token(end - 1) - if beg_pos is None: - try: - beg_pos = tokenized.char_to_token(beg + 1) - if beg_pos is None: - beg_pos = tokenized.char_to_token(beg + 2) - except: - beg_pos = None - if end_pos is None: - try: - end_pos = tokenized.char_to_token(end - 2) - if end_pos is None: - end_pos = tokenized.char_to_token(end - 3) - except: - end_pos = None - if beg_pos is None or end_pos is None: - continue - - assert beg_pos is not None and end_pos is not None - if os.environ.get("SHILONG_DEBUG_ONLY_ONE_POS", None) == "TRUE": - positive_map[j, beg_pos] = 1 - break - else: - positive_map[j, beg_pos : end_pos + 1].fill_(1) - - return positive_map / (positive_map.sum(-1)[:, None] + 1e-6) - - -def build_captions_and_token_span(cat_list, force_lowercase): - """ - Return: - captions: str - cat2tokenspan: dict - { - 'dog': [[0, 2]], - ... - } - """ - - cat2tokenspan = {} - captions = "" - for catname in cat_list: - class_name = catname - if force_lowercase: - class_name = class_name.lower() - if "/" in class_name: - class_name_list: List = class_name.strip().split("/") - class_name_list.append(class_name) - class_name: str = random.choice(class_name_list) - - tokens_positive_i = [] - subnamelist = [i.strip() for i in class_name.strip().split(" ")] - for subname in subnamelist: - if len(subname) == 0: - continue - if len(captions) > 0: - captions = captions + " " - strat_idx = len(captions) - end_idx = strat_idx + len(subname) - tokens_positive_i.append([strat_idx, end_idx]) - captions = captions + subname - - if len(tokens_positive_i) > 0: - captions = captions + " ." 
- cat2tokenspan[class_name] = tokens_positive_i - - return captions, cat2tokenspan - - -def build_id2posspan_and_caption(category_dict: dict): - """Build id2pos_span and caption from category_dict - - Args: - category_dict (dict): category_dict - """ - cat_list = [item["name"].lower() for item in category_dict] - id2catname = {item["id"]: item["name"].lower() for item in category_dict} - caption, cat2posspan = build_captions_and_token_span(cat_list, force_lowercase=True) - id2posspan = {catid: cat2posspan[catname] for catid, catname in id2catname.items()} - return id2posspan, caption diff --git a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/models/big/datasets.py b/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/models/big/datasets.py deleted file mode 100644 index 57103cef462bb0e84530652521758e9259a61cb6..0000000000000000000000000000000000000000 --- a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/models/big/datasets.py +++ /dev/null @@ -1,362 +0,0 @@ -''' Datasets - This file contains definitions for our CIFAR, ImageFolder, and HDF5 datasets -''' -import os -import os.path -import sys -from PIL import Image -import numpy as np -from tqdm import tqdm, trange - -import torchvision.datasets as dset -import torchvision.transforms as transforms -from torchvision.datasets.utils import download_url, check_integrity -import torch.utils.data as data -from torch.utils.data import DataLoader - -IMG_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm'] - - -def is_image_file(filename): - """Checks if a file is an image. - - Args: - filename (string): path to a file - - Returns: - bool: True if the filename ends with a known image extension - """ - filename_lower = filename.lower() - return any(filename_lower.endswith(ext) for ext in IMG_EXTENSIONS) - - -def find_classes(dir): - classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))] - classes.sort() - class_to_idx = {classes[i]: i for i in range(len(classes))} - return classes, class_to_idx - - -def make_dataset(dir, class_to_idx): - images = [] - dir = os.path.expanduser(dir) - for target in tqdm(sorted(os.listdir(dir))): - d = os.path.join(dir, target) - if not os.path.isdir(d): - continue - - for root, _, fnames in sorted(os.walk(d)): - for fname in sorted(fnames): - if is_image_file(fname): - path = os.path.join(root, fname) - item = (path, class_to_idx[target]) - images.append(item) - - return images - - -def pil_loader(path): - # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835) - with open(path, 'rb') as f: - img = Image.open(f) - return img.convert('RGB') - - -def accimage_loader(path): - import accimage - try: - return accimage.Image(path) - except IOError: - # Potentially a decoding problem, fall back to PIL.Image - return pil_loader(path) - - -def default_loader(path): - from torchvision import get_image_backend - if get_image_backend() == 'accimage': - return accimage_loader(path) - else: - return pil_loader(path) - - -class ImageFolder(data.Dataset): - """A generic data loader where the images are arranged in this way: :: - - root/dogball/xxx.png - root/dogball/xxy.png - root/dogball/xxz.png - - root/cat/123.png - root/cat/nsdf3.png - root/cat/asd932_.png - - Args: - root (string): Root directory path. - transform (callable, optional): A function/transform that takes in an PIL image - and returns a transformed version. E.g, ``transforms.RandomCrop`` - target_transform (callable, optional): A function/transform that takes in the - target and transforms it. 
- loader (callable, optional): A function to load an image given its path. - - Attributes: - classes (list): List of the class names. - class_to_idx (dict): Dict with items (class_name, class_index). - imgs (list): List of (image path, class_index) tuples - """ - - def __init__(self, root, transform=None, target_transform=None, - loader=default_loader, load_in_mem=False, - index_filename='imagenet_imgs.npz', **kwargs): - classes, class_to_idx = find_classes(root) - # Load pre-computed image directory walk - if os.path.exists(index_filename): - print('Loading pre-saved Index file %s...' % index_filename) - imgs = np.load(index_filename)['imgs'] - # If first time, walk the folder directory and save the - # results to a pre-computed file. - else: - print('Generating Index file %s...' % index_filename) - imgs = make_dataset(root, class_to_idx) - np.savez_compressed(index_filename, **{'imgs' : imgs}) - if len(imgs) == 0: - raise(RuntimeError("Found 0 images in subfolders of: " + root + "\n" - "Supported image extensions are: " + ",".join(IMG_EXTENSIONS))) - - self.root = root - self.imgs = imgs - self.classes = classes - self.class_to_idx = class_to_idx - self.transform = transform - self.target_transform = target_transform - self.loader = loader - self.load_in_mem = load_in_mem - - if self.load_in_mem: - print('Loading all images into memory...') - self.data, self.labels = [], [] - for index in tqdm(range(len(self.imgs))): - path, target = imgs[index][0], imgs[index][1] - self.data.append(self.transform(self.loader(path))) - self.labels.append(target) - - - def __getitem__(self, index): - """ - Args: - index (int): Index - - Returns: - tuple: (image, target) where target is class_index of the target class. - """ - if self.load_in_mem: - img = self.data[index] - target = self.labels[index] - else: - path, target = self.imgs[index] - img = self.loader(str(path)) - if self.transform is not None: - img = self.transform(img) - - if self.target_transform is not None: - target = self.target_transform(target) - - # print(img.size(), target) - return img, int(target) - - def __len__(self): - return len(self.imgs) - - def __repr__(self): - fmt_str = 'Dataset ' + self.__class__.__name__ + '\n' - fmt_str += ' Number of datapoints: {}\n'.format(self.__len__()) - fmt_str += ' Root Location: {}\n'.format(self.root) - tmp = ' Transforms (if any): ' - fmt_str += '{0}{1}\n'.format(tmp, self.transform.__repr__().replace('\n', '\n' + ' ' * len(tmp))) - tmp = ' Target Transforms (if any): ' - fmt_str += '{0}{1}'.format(tmp, self.target_transform.__repr__().replace('\n', '\n' + ' ' * len(tmp))) - return fmt_str - - -''' ILSVRC_HDF5: A dataset to support I/O from an HDF5 to avoid - having to load individual images all the time. ''' -import h5py as h5 -import torch -class ILSVRC_HDF5(data.Dataset): - def __init__(self, root, transform=None, target_transform=None, - load_in_mem=False, train=True,download=False, validate_seed=0, - val_split=0, **kwargs): # last four are dummies - - self.root = root - self.num_imgs = len(h5.File(root, 'r')['labels']) - - # self.transform = transform - self.target_transform = target_transform - - # Set the transform here - self.transform = transform - - # load the entire dataset into memory? - self.load_in_mem = load_in_mem - - # If loading into memory, do so now - if self.load_in_mem: - print('Loading %s into memory...' 
% root) - with h5.File(root,'r') as f: - self.data = f['imgs'][:] - self.labels = f['labels'][:] - - def __getitem__(self, index): - """ - Args: - index (int): Index - - Returns: - tuple: (image, target) where target is class_index of the target class. - """ - # If loaded the entire dataset in RAM, get image from memory - if self.load_in_mem: - img = self.data[index] - target = self.labels[index] - - # Else load it from disk - else: - with h5.File(self.root,'r') as f: - img = f['imgs'][index] - target = f['labels'][index] - - - # if self.transform is not None: - # img = self.transform(img) - # Apply my own transform - img = ((torch.from_numpy(img).float() / 255) - 0.5) * 2 - - if self.target_transform is not None: - target = self.target_transform(target) - - return img, int(target) - - def __len__(self): - return self.num_imgs - # return len(self.f['imgs']) - -import pickle -class CIFAR10(dset.CIFAR10): - - def __init__(self, root, train=True, - transform=None, target_transform=None, - download=True, validate_seed=0, - val_split=0, load_in_mem=True, **kwargs): - self.root = os.path.expanduser(root) - self.transform = transform - self.target_transform = target_transform - self.train = train # training set or test set - self.val_split = val_split - - if download: - self.download() - - if not self._check_integrity(): - raise RuntimeError('Dataset not found or corrupted.' + - ' You can use download=True to download it') - - # now load the picked numpy arrays - self.data = [] - self.labels= [] - for fentry in self.train_list: - f = fentry[0] - file = os.path.join(self.root, self.base_folder, f) - fo = open(file, 'rb') - if sys.version_info[0] == 2: - entry = pickle.load(fo) - else: - entry = pickle.load(fo, encoding='latin1') - self.data.append(entry['data']) - if 'labels' in entry: - self.labels += entry['labels'] - else: - self.labels += entry['fine_labels'] - fo.close() - - self.data = np.concatenate(self.data) - # Randomly select indices for validation - if self.val_split > 0: - label_indices = [[] for _ in range(max(self.labels)+1)] - for i,l in enumerate(self.labels): - label_indices[l] += [i] - label_indices = np.asarray(label_indices) - - # randomly grab 500 elements of each class - np.random.seed(validate_seed) - self.val_indices = [] - for l_i in label_indices: - self.val_indices += list(l_i[np.random.choice(len(l_i), int(len(self.data) * val_split) // (max(self.labels) + 1) ,replace=False)]) - - if self.train=='validate': - self.data = self.data[self.val_indices] - self.labels = list(np.asarray(self.labels)[self.val_indices]) - - self.data = self.data.reshape((int(50e3 * self.val_split), 3, 32, 32)) - self.data = self.data.transpose((0, 2, 3, 1)) # convert to HWC - - elif self.train: - print(np.shape(self.data)) - if self.val_split > 0: - self.data = np.delete(self.data,self.val_indices,axis=0) - self.labels = list(np.delete(np.asarray(self.labels),self.val_indices,axis=0)) - - self.data = self.data.reshape((int(50e3 * (1.-self.val_split)), 3, 32, 32)) - self.data = self.data.transpose((0, 2, 3, 1)) # convert to HWC - else: - f = self.test_list[0][0] - file = os.path.join(self.root, self.base_folder, f) - fo = open(file, 'rb') - if sys.version_info[0] == 2: - entry = pickle.load(fo) - else: - entry = pickle.load(fo, encoding='latin1') - self.data = entry['data'] - if 'labels' in entry: - self.labels = entry['labels'] - else: - self.labels = entry['fine_labels'] - fo.close() - self.data = self.data.reshape((10000, 3, 32, 32)) - self.data = self.data.transpose((0, 2, 3, 1)) # convert 
to HWC - - def __getitem__(self, index): - """ - Args: - index (int): Index - Returns: - tuple: (image, target) where target is index of the target class. - """ - img, target = self.data[index], self.labels[index] - - # doing this so that it is consistent with all other datasets - # to return a PIL Image - img = Image.fromarray(img) - - if self.transform is not None: - img = self.transform(img) - - if self.target_transform is not None: - target = self.target_transform(target) - - return img, target - - def __len__(self): - return len(self.data) - - -class CIFAR100(CIFAR10): - base_folder = 'cifar-100-python' - url = "http://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz" - filename = "cifar-100-python.tar.gz" - tgz_md5 = 'eb9058c3a382ffc7106e4002c42a8d85' - train_list = [ - ['train', '16019d7e3df5f24257cddd939b257f8d'], - ] - - test_list = [ - ['test', 'f0ef6b0ae62326f3e7ffdfab6717acfc'], - ] diff --git a/spaces/w1zrd/MusicGen/audiocraft/__init__.py b/spaces/w1zrd/MusicGen/audiocraft/__init__.py deleted file mode 100644 index 2befac60faf6f406f78ff7b7da05225dbfe7b111..0000000000000000000000000000000000000000 --- a/spaces/w1zrd/MusicGen/audiocraft/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from . import data, modules, models - -__version__ = '0.0.2a1' diff --git a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/japanese.py b/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for 
regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/xdecoder/Demo/xdecoder/language/LangEncoder/transformer.py b/spaces/xdecoder/Demo/xdecoder/language/LangEncoder/transformer.py deleted file mode 100644 index 00123460f0aa93801bdf750af62e3a14753c0366..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Demo/xdecoder/language/LangEncoder/transformer.py +++ /dev/null @@ -1,222 +0,0 @@ -from collections import OrderedDict -from typing import Tuple, Union -import logging -import os - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - -from timm.models.layers import DropPath, trunc_normal_ - -from .registry import register_lang_encoder -from utils.distributed import is_main_process -from utils.model import register_norm_module - -logger = logging.getLogger(__name__) - - -@register_norm_module -class LayerNorm(nn.Module): - def __init__(self, hidden_size, eps=1e-12): - """Construct a layernorm 
module in the TF style (epsilon inside the square root). - """ - super(LayerNorm, self).__init__() - self.weight = nn.Parameter(torch.ones(hidden_size)) - self.bias = nn.Parameter(torch.zeros(hidden_size)) - self.variance_epsilon = eps - - def forward(self, x): - pdtype = x.dtype - x = x.float() - u = x.mean(-1, keepdim=True) - s = (x - u).pow(2).mean(-1, keepdim=True) - x = (x - u) / torch.sqrt(s + self.variance_epsilon) - return self.weight * x.to(pdtype) + self.bias - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, - d_model: int, - n_head: int, - attn_mask: torch.Tensor = None, - drop_path: float = 0.0): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - - def attention(self, x: torch.Tensor, key_padding_mask: torch.Tensor = None): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) \ - if self.attn_mask is not None else None - - - return self.attn( - x, x, x, - key_padding_mask=key_padding_mask, - need_weights=False, - attn_mask=self.attn_mask - )[0] - - def forward(self, x: torch.Tensor, key_padding_mask: torch.Tensor = None): - x = x + self.drop_path(self.attention(self.ln_1(x), key_padding_mask=key_padding_mask)) - x = x + self.drop_path(self.mlp(self.ln_2(x))) - return x - - -class Transformer(nn.Module): - def __init__(self, - context_length: int, - vocab_size: int, - width: int, - layers: int, - heads: int, - drop_path: float = 0.0, - autogressive: bool =True): - super().__init__() - - self.token_embedding = nn.Embedding(vocab_size, width) - - self.context_length = context_length - self.positional_embedding = nn.Parameter( - torch.empty(self.context_length, width) - ) - - self.width = width - self.layers = layers - self.autogressive = autogressive - attn_mask = self.build_attention_mask() if autogressive else None - dpr = [x.item() for x in torch.linspace(0, drop_path, layers)] # stochastic depth decay rule - self.resblocks = nn.ModuleList( - [ - ResidualAttentionBlock(width, heads, attn_mask, dpr[i]) - for i in range(layers) - ] - ) - - self.ln_final = LayerNorm(width) - - trunc_normal_(self.positional_embedding, std=.02) - # nn.init.normal_(self.token_embedding, std=.02) - trunc_normal_(self.token_embedding.weight, std=.02) - self.apply(self._init_weights) - - @property - def dim_out(self): - return self.width - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - def _init_weights(self, m): - if isinstance(m, (nn.Linear, nn.Conv2d)): - if is_main_process(): - logger.info('=> init weight of Linear/Conv2d from trunc norm') - trunc_normal_(m.weight, std=0.02) - if m.bias is not None: - if is_main_process(): - logger.info('=> init bias of Linear/Conv2d to zeros') - nn.init.constant_(m.bias, 0) - elif isinstance(m, (nn.LayerNorm, nn.BatchNorm2d)): - nn.init.constant_(m.bias, 0) - - def load_pretrained(self, 
pretrained='', pretrained_layers=[], verbose=True): - if os.path.isfile(pretrained): - pretrained_dict = torch.load(pretrained, map_location='cpu') - logging.info(f'=> loading pretrained model {pretrained}') - model_dict = self.state_dict() - stripped_key = lambda x: x[13:] if x.startswith('lang_encoder.') else x - pretrained_dict = { - stripped_key(k): v for k, v in pretrained_dict.items() - if stripped_key(k) in model_dict.keys() - } - need_init_state_dict = {} - for k, v in pretrained_dict.items(): - need_init = ( - k.split('.')[0] in pretrained_layers - or pretrained_layers[0] == '*' - ) - if need_init: - if verbose: - logger.info(f'=> init {k} from {pretrained}') - - if 'positional_embedding' in k and v.size() != model_dict[k].size(): - positional_embedding_pretrained = v - positional_embedding_current = model_dict[k] - L1, nH1 = positional_embedding_pretrained.size() - L2, nH2 = positional_embedding_current.size() - if nH1 != nH2: - logger.info(f"Error in loading {k}, passing") - else: - if L1 != L2: - logger.info( - '=> load_pretrained: resized variant: {} to {}' - .format((L1, nH1), (L2, nH2)) - ) - - posemb = positional_embedding_pretrained.float() - posemb_grid = posemb.unsqueeze(dim=0).permute(0, 2, 1) - posemb_grid = torch.nn.functional.interpolate(posemb_grid, size=L2, mode='linear') - posemb_grid = posemb_grid.permute(0, 2, 1).squeeze(dim=0) - v = posemb_grid - - need_init_state_dict[k] = v - - self.load_state_dict(need_init_state_dict, strict=False) - - - @torch.jit.ignore - def no_weight_decay(self): - return { - 'positional_embedding', - 'token_embedding', - } - - def forward(self, input_ids, attention_mask=None): - key_padding_mask = (attention_mask == 0) if (not self.autogressive and attention_mask is not None) else None - # key_padding_mask = (input_ids == 0) if not self.autogressive else None - x = self.token_embedding(input_ids) # [batch_size, n_ctx, d_model] - x = x + self.positional_embedding - x = x.permute(1, 0, 2) # NLD -> LND - for block in self.resblocks: - x = block(x, key_padding_mask) - x = x.permute(1, 0, 2) # LND -> NLD - - x = self.ln_final(x) - - return {'last_hidden_state': x} - - -@register_lang_encoder -def lang_encoder(config_encoder, tokenizer, verbose, **kwargs): - transformer = Transformer( - context_length=config_encoder['CONTEXT_LENGTH'], - vocab_size=tokenizer.vocab_size, - width=config_encoder['WIDTH'], - layers=config_encoder['LAYERS'], - heads=config_encoder['HEADS'], - autogressive=config_encoder.get('AUTOGRESSIVE', True) - ) - - if config_encoder.get('LOAD_PRETRAINED', False): - transformer.load_pretrained(config_encoder['PRETRAINED'], config_encoder.get('PRETRAINED_LAYERS', ['*'])) - return transformer diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/metrics/__init__.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/metrics/__init__.py deleted file mode 100644 index 5159e1e9aa07d972d2a07ef00feac341349c66b8..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/metrics/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from __future__ import absolute_import - -from .rank import evaluate_rank -from .accuracy import accuracy -from .distance import compute_distance_matrix diff --git a/spaces/xiaogang/image_emotion/README.md b/spaces/xiaogang/image_emotion/README.md deleted file mode 100644 index 9e605238f4b08e59802a658229ff90813ded65dd..0000000000000000000000000000000000000000 --- a/spaces/xiaogang/image_emotion/README.md +++ 
/dev/null @@ -1,12 +0,0 @@ ---- -title: Image_emotion -emoji: 📈 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/model_zoo.md b/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/model_zoo.md deleted file mode 100644 index 132cc514bac6b447addac8485e0622a834d34474..0000000000000000000000000000000000000000 --- a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/model_zoo.md +++ /dev/null @@ -1,49 +0,0 @@ -# :european_castle: Model Zoo - -- [For General Images](#for-general-images) -- [For Anime Images](#for-anime-images) -- [For Anime Videos](#for-anime-videos) - ---- - -## For General Images - -| Models | Scale | Description | -| ------------------------------------------------------------------------------------------------------------------------------- | :---- | :------------------------------------------- | -| [RealESRGAN_x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) | X4 | X4 model for general images | -| [RealESRGAN_x2plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth) | X2 | X2 model for general images | -| [RealESRNet_x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth) | X4 | X4 model with MSE loss (over-smooth effects) | -| [official ESRGAN_x4](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) | X4 | official ESRGAN model | -| [realesr-general-x4v3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth) | X4 (can also be used for X1, X2, X3) | A tiny small model (consume much fewer GPU memory and time); not too strong deblur and denoise capacity | - -The following models are **discriminators**, which are usually used for fine-tuning. - -| Models | Corresponding model | -| ---------------------------------------------------------------------------------------------------------------------- | :------------------ | -| [RealESRGAN_x4plus_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth) | RealESRGAN_x4plus | -| [RealESRGAN_x2plus_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x2plus_netD.pth) | RealESRGAN_x2plus | - -## For Anime Images / Illustrations - -| Models | Scale | Description | -| ------------------------------------------------------------------------------------------------------------------------------ | :---- | :---------------------------------------------------------- | -| [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth) | X4 | Optimized for anime images; 6 RRDB blocks (smaller network) | - -The following models are **discriminators**, which are usually used for fine-tuning. 
- -| Models | Corresponding model | -| ---------------------------------------------------------------------------------------------------------------------------------------- | :------------------------- | -| [RealESRGAN_x4plus_anime_6B_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B_netD.pth) | RealESRGAN_x4plus_anime_6B | - -## For Animation Videos - -| Models | Scale | Description | -| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- | -| [realesr-animevideov3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth) | X41 | Anime video model with XS size | - -Note:
                      -1 This model can also be used for X1, X2, X3. - -The following models are **discriminators**, which are usually used for fine-tuning. - -TODO diff --git a/spaces/yixin6178/ChatPaper/scipdf_utils.py b/spaces/yixin6178/ChatPaper/scipdf_utils.py deleted file mode 100644 index 6550bd6728b3d34c0588240721250f635e1e79c8..0000000000000000000000000000000000000000 --- a/spaces/yixin6178/ChatPaper/scipdf_utils.py +++ /dev/null @@ -1,424 +0,0 @@ -import re -import os -import os.path as op -from glob import glob -import urllib -import subprocess -import requests -from bs4 import BeautifulSoup, NavigableString - - -# or https://cloud.science-miner.com/grobid/ for cloud service -GROBID_URL = "http://localhost:8070" -DIR_PATH = op.dirname(op.abspath(__file__)) -PDF_FIGURES_JAR_PATH = op.join( - DIR_PATH, "pdffigures2", "pdffigures2-assembly-0.0.12-SNAPSHOT.jar" -) - - -def list_pdf_paths(pdf_folder: str): - """ - list of pdf paths in pdf folder - """ - return glob(op.join(pdf_folder, "*", "*", "*.pdf")) - - -def validate_url(path: str): - """ - Validate a given ``path`` if it is URL or not - """ - regex = re.compile( - r"^(?:http|ftp)s?://" # http:// or https:// - # domain... - r"(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|" - r"localhost|" # localhost... - r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" # ...or ip - r"(?::\d+)?" # optional port - r"(?:/?|[/?]\S+)$", - re.IGNORECASE, - ) - return re.match(regex, path) is not None - - -def parse_pdf( - pdf_path: str, - fulltext: bool = True, - soup: bool = False, - grobid_url: str = GROBID_URL, -): - """ - Function to parse PDF to XML or BeautifulSoup using GROBID tool - - You can see http://grobid.readthedocs.io/en/latest/Install-Grobid/ on how to run GROBID locally - After loading GROBID zip file, you can run GROBID by using the following - >> ./gradlew run - - Parameters - ========== - pdf_path: str or bytes, path or URL to publication or article or bytes string of PDF - fulltext: bool, option for parsing, if True, parse full text of the article - if False, parse only header - grobid_url: str, url to GROBID parser, default at 'http://localhost:8070' - This could be changed to "https://cloud.science-miner.com/grobid/" for the cloud service - soup: bool, if True, return BeautifulSoup of the article - - Output - ====== - parsed_article: if soup is False, return parsed XML in text format, - else return BeautifulSoup of the XML - Example - ======= - >> parsed_article = parse_pdf(pdf_path, fulltext=True, soup=True) - """ - # GROBID URL - if fulltext: - url = "%s/api/processFulltextDocument" % grobid_url - else: - url = "%s/api/processHeaderDocument" % grobid_url - - if isinstance(pdf_path, str): - if validate_url(pdf_path) and op.splitext(pdf_path)[-1].lower() != ".pdf": - print("The input URL has to end with ``.pdf``") - parsed_article = None - elif validate_url(pdf_path) and op.splitext(pdf_path)[-1] == ".pdf": - page = urllib.request.urlopen(pdf_path).read() - parsed_article = requests.post(url, files={"input": page}).text - elif op.exists(pdf_path): - parsed_article = requests.post( - url, files={"input": open(pdf_path, "rb")} - ).text - else: - parsed_article = None - elif isinstance(pdf_path, bytes): - # assume that incoming is byte string - parsed_article = requests.post(url, files={"input": pdf_path}).text - else: - parsed_article = None - - if soup and parsed_article is not None: - parsed_article = BeautifulSoup(parsed_article, "lxml") - return parsed_article - - -def parse_authors(article): - """ - 
Parse authors from a given BeautifulSoup of an article - """ - author_names = article.find("sourcedesc").findAll("persname") - authors = [] - for author in author_names: - firstname = author.find("forename", {"type": "first"}) - firstname = firstname.text.strip() if firstname is not None else "" - middlename = author.find("forename", {"type": "middle"}) - middlename = middlename.text.strip() if middlename is not None else "" - lastname = author.find("surname") - lastname = lastname.text.strip() if lastname is not None else "" - if middlename != "": - authors.append(firstname + " " + middlename + " " + lastname) - else: - authors.append(firstname + " " + lastname) - authors = "; ".join(authors) - return authors - - -def parse_date(article): - """ - Parse date from a given BeautifulSoup of an article - """ - pub_date = article.find("publicationstmt") - year = pub_date.find("date") - year = year.attrs.get("when") if year is not None else "" - return year - - -def parse_abstract(article): - """ - Parse abstract from a given BeautifulSoup of an article - """ - div = article.find("abstract") - abstract = "" - for p in list(div.children): - if not isinstance(p, NavigableString) and len(list(p)) > 0: - abstract += " ".join( - [elem.text for elem in p if not isinstance( - elem, NavigableString)] - ) - return abstract - - -def calculate_number_of_references(div): - """ - For a given section, calculate number of references made in the section - """ - n_publication_ref = len( - [ref for ref in div.find_all("ref") if ref.attrs.get("type") == "bibr"] - ) - n_figure_ref = len( - [ref for ref in div.find_all( - "ref") if ref.attrs.get("type") == "figure"] - ) - return {"n_publication_ref": n_publication_ref, "n_figure_ref": n_figure_ref} - - -def parse_sections(article, as_list: bool = False): - """ - Parse list of sections from a given BeautifulSoup of an article - - Parameters - ========== - as_list: bool, if True, output text as a list of paragraph instead - of joining it together as one single text - """ - article_text = article.find("text") - divs = article_text.find_all( - "div", attrs={"xmlns": "http://www.tei-c.org/ns/1.0"}) - sections = [] - for div in divs: - div_list = list(div.children) - if len(div_list) == 0: - heading = "" - text = "" - all_paragraphs = [] - elif len(div_list) == 1: - if isinstance(div_list[0], NavigableString): - heading = str(div_list[0]) - text = "" - all_paragraphs = [] - else: - heading = "" - text = div_list[0].text - all_paragraphs = [text] - else: - text = [] - heading = div_list[0] - all_paragraphs = [] - if isinstance(heading, NavigableString): - heading = str(heading) - p_all = list(div.children)[1:] - else: - heading = "" - p_all = list(div.children) - for p in p_all: - if p is not None: - try: - text.append(p.text) - all_paragraphs.append(p.text) - except: - pass - if not as_list: - text = "\n".join(text) - if heading != "" or text != "": - ref_dict = calculate_number_of_references(div) - sections.append( - { - "heading": heading, - "text": text, - "all_paragraphs": all_paragraphs, - "n_publication_ref": ref_dict["n_publication_ref"], - "n_figure_ref": ref_dict["n_figure_ref"], - } - ) - return sections - - -def parse_references(article): - """ - Parse list of references from a given BeautifulSoup of an article - """ - reference_list = [] - references = article.find("text").find("div", attrs={"type": "references"}) - references = references.find_all( - "biblstruct") if references is not None else [] - reference_list = [] - for reference in references: - title = 
reference.find("title", attrs={"level": "a"}) - if title is None: - title = reference.find("title", attrs={"level": "m"}) - title = title.text if title is not None else "" - journal = reference.find("title", attrs={"level": "j"}) - journal = journal.text if journal is not None else "" - if journal == "": - journal = reference.find("publisher") - journal = journal.text if journal is not None else "" - year = reference.find("date") - year = year.attrs.get("when") if year is not None else "" - authors = [] - for author in reference.find_all("author"): - firstname = author.find("forename", {"type": "first"}) - firstname = firstname.text.strip() if firstname is not None else "" - middlename = author.find("forename", {"type": "middle"}) - middlename = middlename.text.strip() if middlename is not None else "" - lastname = author.find("surname") - lastname = lastname.text.strip() if lastname is not None else "" - if middlename != "": - authors.append(firstname + " " + middlename + " " + lastname) - else: - authors.append(firstname + " " + lastname) - authors = "; ".join(authors) - reference_list.append( - {"title": title, "journal": journal, "year": year, "authors": authors} - ) - return reference_list - - -def parse_figure_caption(article): - """ - Parse list of figures/tables from a given BeautifulSoup of an article - """ - figures_list = [] - figures = article.find_all("figure") - for figure in figures: - figure_type = figure.attrs.get("type") or "" - figure_id = figure.attrs["xml:id"] or "" - label = figure.find("label").text - if figure_type == "table": - caption = figure.find("figdesc").text - data = figure.table.text - else: - caption = figure.text - data = "" - figures_list.append( - { - "figure_label": label, - "figure_type": figure_type, - "figure_id": figure_id, - "figure_caption": caption, - "figure_data": data, - } - ) - return figures_list - - -def convert_article_soup_to_dict(article, as_list: bool = False): - """ - Function to convert BeautifulSoup to JSON format - similar to the output from https://github.com/allenai/science-parse/ - - Parameters - ========== - article: BeautifulSoup - - Output - ====== - article_json: dict, parsed dictionary of a given article in the following format - { - 'title': ..., - 'abstract': ..., - 'sections': [ - {'heading': ..., 'text': ...}, - {'heading': ..., 'text': ...}, - ... - ], - 'references': [ - {'title': ..., 'journal': ..., 'year': ..., 'authors': ...}, - {'title': ..., 'journal': ..., 'year': ..., 'authors': ...}, - ... - ], - 'figures': [ - {'figure_label': ..., 'figure_type': ..., 'figure_id': ..., 'figure_caption': ..., 'figure_data': ...}, - ... 
- ] - } - """ - article_dict = {} - if article is not None: - title = article.find("title", attrs={"type": "main"}) - title = title.text.strip() if title is not None else "" - article_dict["authors"] = parse_authors(article) - article_dict["pub_date"] = parse_date(article) - article_dict["title"] = title - article_dict["abstract"] = parse_abstract(article) - article_dict["sections"] = parse_sections(article, as_list=as_list) - article_dict["references"] = parse_references(article) - article_dict["figures"] = parse_figure_caption(article) - - doi = article.find("idno", attrs={"type": "DOI"}) - doi = doi.text if doi is not None else "" - article_dict["doi"] = doi - - return article_dict - else: - return None - - -def parse_pdf_to_dict( - pdf_path: str, - fulltext: bool = True, - soup: bool = True, - as_list: bool = False, - grobid_url: str = GROBID_URL, -): - """ - Parse the given PDF and return dictionary of the parsed article - - Parameters - ========== - pdf_path: str, path to publication or article - fulltext: bool, whether to extract fulltext or not - soup: bool, whether to return BeautifulSoup or not - as_list: bool, whether to return list of sections or not - grobid_url: str, url to grobid server, default is `GROBID_URL` - This could be changed to "https://cloud.science-miner.com/grobid/" for the cloud service - - Ouput - ===== - article_dict: dict, dictionary of an article - """ - parsed_article = parse_pdf( - pdf_path, fulltext=fulltext, soup=soup, grobid_url=grobid_url - ) - article_dict = convert_article_soup_to_dict( - parsed_article, as_list=as_list) - return article_dict - - -def parse_figures( - pdf_folder: str, - jar_path: str = PDF_FIGURES_JAR_PATH, - resolution: int = 300, - output_folder: str = "figures", -): - """ - Parse figures from the given scientific PDF using pdffigures2 - - Parameters - ========== - pdf_folder: str, path to a folder that contains PDF files. 
A folder must contains only PDF files - jar_path: str, default path to pdffigures2-assembly-0.0.12-SNAPSHOT.jar file - resolution: int, resolution of the output figures - output_folder: str, path to folder that we want to save parsed data (related to figures) and figures - - Output - ====== - folder: making a folder of output_folder/data and output_folder/figures of parsed data and figures relatively - """ - if not op.isdir(output_folder): - os.makedirs(output_folder) - - # create ``data`` and ``figures`` subfolder within ``output_folder`` - data_path = op.join(output_folder, "data") - figure_path = op.join(output_folder, "figures") - if not op.exists(data_path): - os.makedirs(data_path) - if not op.exists(figure_path): - os.makedirs(figure_path) - - if op.isdir(data_path) and op.isdir(figure_path): - args = [ - "java", - "-jar", - jar_path, - pdf_folder, - "-i", - str(resolution), - "-d", - os.path.join(os.path.abspath(data_path), ""), - "-m", - op.join(os.path.abspath(figure_path), ""), # end path with "/" - ] - _ = subprocess.run( - args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=20 - ) - print("Done parsing figures from PDFs!") - else: - print("You may have to check of ``data`` and ``figures`` in the the output folder path.") diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deit/convert_deit_timm_to_pytorch.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deit/convert_deit_timm_to_pytorch.py deleted file mode 100644 index 2b5c795ff2d2ab6d8b3e6ce6f8a0150ff3911f33..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deit/convert_deit_timm_to_pytorch.py +++ /dev/null @@ -1,219 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Convert DeiT distilled checkpoints from the timm library.""" - - -import argparse -import json -from pathlib import Path - -import requests -import timm -import torch -from huggingface_hub import hf_hub_download -from PIL import Image - -from transformers import DeiTConfig, DeiTForImageClassificationWithTeacher, DeiTImageProcessor -from transformers.utils import logging - - -logging.set_verbosity_info() -logger = logging.get_logger(__name__) - - -# here we list all keys to be renamed (original name on the left, our name on the right) -def create_rename_keys(config, base_model=False): - rename_keys = [] - for i in range(config.num_hidden_layers): - # encoder layers: output projection, 2 feedforward neural networks and 2 layernorms - rename_keys.append((f"blocks.{i}.norm1.weight", f"deit.encoder.layer.{i}.layernorm_before.weight")) - rename_keys.append((f"blocks.{i}.norm1.bias", f"deit.encoder.layer.{i}.layernorm_before.bias")) - rename_keys.append((f"blocks.{i}.attn.proj.weight", f"deit.encoder.layer.{i}.attention.output.dense.weight")) - rename_keys.append((f"blocks.{i}.attn.proj.bias", f"deit.encoder.layer.{i}.attention.output.dense.bias")) - rename_keys.append((f"blocks.{i}.norm2.weight", f"deit.encoder.layer.{i}.layernorm_after.weight")) - rename_keys.append((f"blocks.{i}.norm2.bias", f"deit.encoder.layer.{i}.layernorm_after.bias")) - rename_keys.append((f"blocks.{i}.mlp.fc1.weight", f"deit.encoder.layer.{i}.intermediate.dense.weight")) - rename_keys.append((f"blocks.{i}.mlp.fc1.bias", f"deit.encoder.layer.{i}.intermediate.dense.bias")) - rename_keys.append((f"blocks.{i}.mlp.fc2.weight", f"deit.encoder.layer.{i}.output.dense.weight")) - rename_keys.append((f"blocks.{i}.mlp.fc2.bias", f"deit.encoder.layer.{i}.output.dense.bias")) - - # projection layer + position embeddings - rename_keys.extend( - [ - ("cls_token", "deit.embeddings.cls_token"), - ("dist_token", "deit.embeddings.distillation_token"), - ("patch_embed.proj.weight", "deit.embeddings.patch_embeddings.projection.weight"), - ("patch_embed.proj.bias", "deit.embeddings.patch_embeddings.projection.bias"), - ("pos_embed", "deit.embeddings.position_embeddings"), - ] - ) - - if base_model: - # layernorm + pooler - rename_keys.extend( - [ - ("norm.weight", "layernorm.weight"), - ("norm.bias", "layernorm.bias"), - ("pre_logits.fc.weight", "pooler.dense.weight"), - ("pre_logits.fc.bias", "pooler.dense.bias"), - ] - ) - - # if just the base model, we should remove "deit" from all keys that start with "deit" - rename_keys = [(pair[0], pair[1][4:]) if pair[1].startswith("deit") else pair for pair in rename_keys] - else: - # layernorm + classification heads - rename_keys.extend( - [ - ("norm.weight", "deit.layernorm.weight"), - ("norm.bias", "deit.layernorm.bias"), - ("head.weight", "cls_classifier.weight"), - ("head.bias", "cls_classifier.bias"), - ("head_dist.weight", "distillation_classifier.weight"), - ("head_dist.bias", "distillation_classifier.bias"), - ] - ) - - return rename_keys - - -# we split up the matrix of each encoder layer into queries, keys and values -def read_in_q_k_v(state_dict, config, base_model=False): - for i in range(config.num_hidden_layers): - if base_model: - prefix = "" - else: - prefix = "deit." 
- # read in weights + bias of input projection layer (in timm, this is a single matrix + bias) - in_proj_weight = state_dict.pop(f"blocks.{i}.attn.qkv.weight") - in_proj_bias = state_dict.pop(f"blocks.{i}.attn.qkv.bias") - # next, add query, keys and values (in that order) to the state dict - state_dict[f"{prefix}encoder.layer.{i}.attention.attention.query.weight"] = in_proj_weight[ - : config.hidden_size, : - ] - state_dict[f"{prefix}encoder.layer.{i}.attention.attention.query.bias"] = in_proj_bias[: config.hidden_size] - state_dict[f"{prefix}encoder.layer.{i}.attention.attention.key.weight"] = in_proj_weight[ - config.hidden_size : config.hidden_size * 2, : - ] - state_dict[f"{prefix}encoder.layer.{i}.attention.attention.key.bias"] = in_proj_bias[ - config.hidden_size : config.hidden_size * 2 - ] - state_dict[f"{prefix}encoder.layer.{i}.attention.attention.value.weight"] = in_proj_weight[ - -config.hidden_size :, : - ] - state_dict[f"{prefix}encoder.layer.{i}.attention.attention.value.bias"] = in_proj_bias[-config.hidden_size :] - - -def rename_key(dct, old, new): - val = dct.pop(old) - dct[new] = val - - -# We will verify our results on an image of cute cats -def prepare_img(): - url = "http://images.cocodataset.org/val2017/000000039769.jpg" - im = Image.open(requests.get(url, stream=True).raw) - return im - - -@torch.no_grad() -def convert_deit_checkpoint(deit_name, pytorch_dump_folder_path): - """ - Copy/paste/tweak model's weights to our DeiT structure. - """ - - # define default DeiT configuration - config = DeiTConfig() - # all deit models have fine-tuned heads - base_model = False - # dataset (fine-tuned on ImageNet 2012), patch_size and image_size - config.num_labels = 1000 - repo_id = "huggingface/label-files" - filename = "imagenet-1k-id2label.json" - id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r")) - id2label = {int(k): v for k, v in id2label.items()} - config.id2label = id2label - config.label2id = {v: k for k, v in id2label.items()} - config.patch_size = int(deit_name[-6:-4]) - config.image_size = int(deit_name[-3:]) - # size of the architecture - if deit_name[9:].startswith("tiny"): - config.hidden_size = 192 - config.intermediate_size = 768 - config.num_hidden_layers = 12 - config.num_attention_heads = 3 - elif deit_name[9:].startswith("small"): - config.hidden_size = 384 - config.intermediate_size = 1536 - config.num_hidden_layers = 12 - config.num_attention_heads = 6 - if deit_name[9:].startswith("base"): - pass - elif deit_name[4:].startswith("large"): - config.hidden_size = 1024 - config.intermediate_size = 4096 - config.num_hidden_layers = 24 - config.num_attention_heads = 16 - - # load original model from timm - timm_model = timm.create_model(deit_name, pretrained=True) - timm_model.eval() - - # load state_dict of original model, remove and rename some keys - state_dict = timm_model.state_dict() - rename_keys = create_rename_keys(config, base_model) - for src, dest in rename_keys: - rename_key(state_dict, src, dest) - read_in_q_k_v(state_dict, config, base_model) - - # load HuggingFace model - model = DeiTForImageClassificationWithTeacher(config).eval() - model.load_state_dict(state_dict) - - # Check outputs on an image, prepared by DeiTImageProcessor - size = int( - (256 / 224) * config.image_size - ) # to maintain same ratio w.r.t. 
224 images, see https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L103 - image_processor = DeiTImageProcessor(size=size, crop_size=config.image_size) - encoding = image_processor(images=prepare_img(), return_tensors="pt") - pixel_values = encoding["pixel_values"] - outputs = model(pixel_values) - - timm_logits = timm_model(pixel_values) - assert timm_logits.shape == outputs.logits.shape - assert torch.allclose(timm_logits, outputs.logits, atol=1e-3) - - Path(pytorch_dump_folder_path).mkdir(exist_ok=True) - print(f"Saving model {deit_name} to {pytorch_dump_folder_path}") - model.save_pretrained(pytorch_dump_folder_path) - print(f"Saving image processor to {pytorch_dump_folder_path}") - image_processor.save_pretrained(pytorch_dump_folder_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - # Required parameters - parser.add_argument( - "--deit_name", - default="vit_deit_base_distilled_patch16_224", - type=str, - help="Name of the DeiT timm model you'd like to convert.", - ) - parser.add_argument( - "--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model directory." - ) - - args = parser.parse_args() - convert_deit_checkpoint(args.deit_name, args.pytorch_dump_folder_path) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/efficientformer/image_processing_efficientformer.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/efficientformer/image_processing_efficientformer.py deleted file mode 100644 index be8477678c5f985873a4ee3d134667234c121391..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/efficientformer/image_processing_efficientformer.py +++ /dev/null @@ -1,299 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Image processor class for EfficientFormer.""" - -from typing import Dict, List, Optional, Union - -import numpy as np - -from ...image_processing_utils import BaseImageProcessor, BatchFeature, get_size_dict -from ...image_transforms import ( - get_resize_output_image_size, - resize, - to_channel_dimension_format, -) -from ...image_utils import ( - IMAGENET_DEFAULT_MEAN, - IMAGENET_DEFAULT_STD, - ChannelDimension, - ImageInput, - PILImageResampling, - infer_channel_dimension_format, - is_batched, - is_scaled_image, - to_numpy_array, - valid_images, -) -from ...utils import TensorType, logging - - -logger = logging.get_logger(__name__) - - -class EfficientFormerImageProcessor(BaseImageProcessor): - r""" - Constructs a EfficientFormer image processor. - - Args: - do_resize (`bool`, *optional*, defaults to `True`): - Whether to resize the image's (height, width) dimensions to the specified `(size["height"], - size["width"])`. Can be overridden by the `do_resize` parameter in the `preprocess` method. 
- size (`dict`, *optional*, defaults to `{"height": 224, "width": 224}`): - Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess` - method. - resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`): - Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the - `preprocess` method. - do_center_crop (`bool`, *optional*, defaults to `True`): - Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the - `preprocess` method. - crop_size (`Dict[str, int]` *optional*, defaults to 224): - Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess` - method. - do_rescale (`bool`, *optional*, defaults to `True`): - Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` - parameter in the `preprocess` method. - rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): - Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the - `preprocess` method. - do_normalize: - Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` - method. - image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): - Mean to use if normalizing the image. This is a float or list of floats the length of the number of - channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. - image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): - Standard deviation to use if normalizing the image. This is a float or list of floats the length of the - number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. - """ - - model_input_names = ["pixel_values"] - - def __init__( - self, - do_resize: bool = True, - size: Optional[Dict[str, int]] = None, - resample: PILImageResampling = PILImageResampling.BICUBIC, - do_center_crop: bool = True, - do_rescale: bool = True, - rescale_factor: Union[int, float] = 1 / 255, - crop_size: Dict[str, int] = None, - do_normalize: bool = True, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - **kwargs, - ) -> None: - super().__init__(**kwargs) - size = size if size is not None else {"height": 224, "width": 224} - size = get_size_dict(size) - crop_size = crop_size if crop_size is not None else {"height": 224, "width": 224} - crop_size = get_size_dict(crop_size, default_to_square=True, param_name="crop_size") - - self.do_resize = do_resize - self.do_rescale = do_rescale - self.do_normalize = do_normalize - self.do_center_crop = do_center_crop - self.crop_size = crop_size - self.size = size - self.resample = resample - self.rescale_factor = rescale_factor - self.image_mean = image_mean if image_mean is not None else IMAGENET_DEFAULT_MEAN - self.image_std = image_std if image_std is not None else IMAGENET_DEFAULT_STD - - def resize( - self, - image: np.ndarray, - size: Dict[str, int], - resample: PILImageResampling = PILImageResampling.BILINEAR, - data_format: Optional[Union[str, ChannelDimension]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - **kwargs, - ) -> np.ndarray: - """ - Resize an image to `(size["height"], size["width"])`. - - Args: - image (`np.ndarray`): - Image to resize. 
- size (`Dict[str, int]`): - Dictionary in the format `{"height": int, "width": int}` specifying the size of the output image. - resample: - `PILImageResampling` filter to use when resizing the image e.g. `PILImageResampling.BILINEAR`. - data_format (`ChannelDimension` or `str`, *optional*): - The channel dimension format for the output image. If unset, the channel dimension format of the input - image is used. Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - input_data_format (`ChannelDimension` or `str`, *optional*): - The channel dimension format of the input image. If not provided, it will be inferred. - - Returns: - `np.ndarray`: The resized image. - """ - size = get_size_dict(size) - - if "shortest_edge" in size: - size = get_resize_output_image_size( - image, size=size["shortest_edge"], default_to_square=False, input_data_format=input_data_format - ) - # size = get_resize_output_image_size(image, size["shortest_edge"], size["longest_edge"]) - elif "height" in size and "width" in size: - size = (size["height"], size["width"]) - else: - raise ValueError(f"Size must contain 'height' and 'width' keys or 'shortest_edge' key. Got {size.keys()}") - return resize( - image, size=size, resample=resample, data_format=data_format, input_data_format=input_data_format, **kwargs - ) - - def preprocess( - self, - images: ImageInput, - do_resize: Optional[bool] = None, - size: Dict[str, int] = None, - resample: PILImageResampling = None, - do_center_crop: bool = None, - crop_size: int = None, - do_rescale: Optional[bool] = None, - rescale_factor: Optional[float] = None, - do_normalize: Optional[bool] = None, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - data_format: Union[str, ChannelDimension] = ChannelDimension.FIRST, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - **kwargs, - ) -> BatchFeature: - """ - Preprocess an image or batch of images. - - Args: - images (`ImageInput`): - Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If - passing in images with pixel values between 0 and 1, set `do_rescale=False`. - do_resize (`bool`, *optional*, defaults to `self.do_resize`): - Whether to resize the image. - size (`Dict[str, int]`, *optional*, defaults to `self.size`): - Dictionary in the format `{"height": h, "width": w}` specifying the size of the output image after - resizing. - resample (`PILImageResampling` filter, *optional*, defaults to `self.resample`): - `PILImageResampling` filter to use if resizing the image e.g. `PILImageResampling.BILINEAR`. Only has - an effect if `do_resize` is set to `True`. - do_center_crop (`bool`, *optional*, defaults to `self.do_center_crop`): - Whether to center crop the image. - do_rescale (`bool`, *optional*, defaults to `self.do_rescale`): - Whether to rescale the image values between [0 - 1]. - rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`): - Rescale factor to rescale the image by if `do_rescale` is set to `True`. - crop_size (`Dict[str, int]`, *optional*, defaults to `self.crop_size`): - Size of the center crop. Only has an effect if `do_center_crop` is set to `True`. - do_normalize (`bool`, *optional*, defaults to `self.do_normalize`): - Whether to normalize the image. 
- image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`): - Image mean to use if `do_normalize` is set to `True`. - image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`): - Image standard deviation to use if `do_normalize` is set to `True`. - return_tensors (`str` or `TensorType`, *optional*): - The type of tensors to return. Can be one of: - - Unset: Return a list of `np.ndarray`. - - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`. - - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`. - - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`. - - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`. - data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`): - The channel dimension format for the output image. Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - - Unset: Use the channel dimension format of the input image. - input_data_format (`ChannelDimension` or `str`, *optional*): - The channel dimension format for the input image. If unset, the channel dimension format is inferred - from the input image. Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - - `"none"` or `ChannelDimension.NONE`: image in (height, width) format. - """ - do_resize = do_resize if do_resize is not None else self.do_resize - do_rescale = do_rescale if do_rescale is not None else self.do_rescale - do_normalize = do_normalize if do_normalize is not None else self.do_normalize - do_center_crop = do_center_crop if do_center_crop is not None else self.do_center_crop - crop_size = crop_size if crop_size is not None else self.crop_size - crop_size = get_size_dict(crop_size, param_name="crop_size", default_to_square=True) - resample = resample if resample is not None else self.resample - rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor - image_mean = image_mean if image_mean is not None else self.image_mean - image_std = image_std if image_std is not None else self.image_std - - size = size if size is not None else self.size - size_dict = get_size_dict(size) - - if not is_batched(images): - images = [images] - - if not valid_images(images): - raise ValueError( - "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, " - "torch.Tensor, tf.Tensor or jax.ndarray." - ) - - if do_resize and size is None: - raise ValueError("Size must be specified if do_resize is True.") - - if do_center_crop and crop_size is None: - raise ValueError("Crop size must be specified if do_center_crop is True.") - - if do_rescale and rescale_factor is None: - raise ValueError("Rescale factor must be specified if do_rescale is True.") - - # All transformations expect numpy arrays. - images = [to_numpy_array(image) for image in images] - - if is_scaled_image(images[0]) and do_rescale: - logger.warning_once( - "It looks like you are trying to rescale already rescaled images. If the input" - " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again." - ) - - if input_data_format is None: - # We assume that all images have the same channel dimension format. 
- input_data_format = infer_channel_dimension_format(images[0]) - - if do_resize: - images = [ - self.resize(image=image, size=size_dict, resample=resample, input_data_format=input_data_format) - for image in images - ] - - if do_center_crop: - images = [ - self.center_crop(image=image, size=crop_size, input_data_format=input_data_format) for image in images - ] - - if do_rescale: - images = [ - self.rescale(image=image, scale=rescale_factor, input_data_format=input_data_format) - for image in images - ] - - if do_normalize: - images = [ - self.normalize(image=image, mean=image_mean, std=image_std, input_data_format=input_data_format) - for image in images - ] - - images = [ - to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) for image in images - ] - - data = {"pixel_values": images} - return BatchFeature(data=data, tensor_type=return_tensors) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/markuplm/tokenization_markuplm.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/markuplm/tokenization_markuplm.py deleted file mode 100644 index 24fa4b7763a9e16f61ea31cca04141816beb068f..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/markuplm/tokenization_markuplm.py +++ /dev/null @@ -1,1464 +0,0 @@ -# coding=utf-8 -# Copyright Microsoft Research and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Tokenization class for MarkupLM.""" - -import json -import os -from functools import lru_cache -from typing import Dict, List, Optional, Tuple, Union - -import regex as re - -from ...file_utils import PaddingStrategy, TensorType, add_end_docstrings -from ...tokenization_utils import AddedToken, PreTrainedTokenizer -from ...tokenization_utils_base import ( - ENCODE_KWARGS_DOCSTRING, - BatchEncoding, - EncodedInput, - PreTokenizedInput, - TextInput, - TextInputPair, - TruncationStrategy, -) -from ...utils import logging - - -logger = logging.get_logger(__name__) - -VOCAB_FILES_NAMES = {"vocab_file": "vocab.json", "merges_file": "merges.txt", "tokenizer_file": "tokenizer.json"} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "microsoft/markuplm-base": "https://huggingface.co/microsoft/markuplm-base/resolve/main/vocab.json", - "microsoft/markuplm-large": "https://huggingface.co/microsoft/markuplm-large/resolve/main/vocab.json", - }, - "merges_file": { - "microsoft/markuplm-base": "https://huggingface.co/microsoft/markuplm-base/resolve/main/merges.txt", - "microsoft/markuplm-large": "https://huggingface.co/microsoft/markuplm-large/resolve/main/merges.txt", - }, -} - - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "microsoft/markuplm-base": 512, - "microsoft/markuplm-large": 512, -} - - -MARKUPLM_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING = r""" - add_special_tokens (`bool`, *optional*, defaults to `True`): - Whether or not to encode the sequences with the special tokens relative to their model. 
- padding (`bool`, `str` or [`~file_utils.PaddingStrategy`], *optional*, defaults to `False`): - Activates and controls padding. Accepts the following values: - - - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single - sequence if provided). - - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum - acceptable input length for the model if that argument is not provided. - - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different - lengths). - truncation (`bool`, `str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*, defaults to `False`): - Activates and controls truncation. Accepts the following values: - - - `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or - to the maximum acceptable input length for the model if that argument is not provided. This will - truncate token by token, removing a token from the longest sequence in the pair if a pair of - sequences (or a batch of pairs) is provided. - - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the - maximum acceptable input length for the model if that argument is not provided. This will only - truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the - maximum acceptable input length for the model if that argument is not provided. This will only - truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - - `False` or `'do_not_truncate'` (default): No truncation (i.e., can output batch with sequence lengths - greater than the model maximum admissible input size). - max_length (`int`, *optional*): - Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to - `None`, this will use the predefined model maximum length if a maximum length is required by one of the - truncation/padding parameters. If the model has no specific maximum input length (like XLNet) - truncation/padding to a maximum length will be deactivated. - stride (`int`, *optional*, defaults to 0): - If set to a number along with `max_length`, the overflowing tokens returned when - `return_overflowing_tokens=True` will contain some tokens from the end of the truncated sequence - returned to provide some overlap between truncated and overflowing sequences. The value of this - argument defines the number of overlapping tokens. - pad_to_multiple_of (`int`, *optional*): - If set will pad the sequence to a multiple of the provided value. This is especially useful to enable - the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta). - return_tensors (`str` or [`~file_utils.TensorType`], *optional*): - If set, will return tensors instead of list of python integers. Acceptable values are: - - - `'tf'`: Return TensorFlow `tf.constant` objects. - - `'pt'`: Return PyTorch `torch.Tensor` objects. - - `'np'`: Return Numpy `np.ndarray` objects. -""" - - -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a mapping to unicode strings. We specifically avoids mapping to whitespace/control - characters the bpe code barfs on. The reversible bpe codes work on unicode strings. This means you need a large # - of unicode characters in your vocab if you want to avoid UNKs. 
When you're at something like a 10B token dataset - you end up needing around 5K for decent coverage. This is a significant percentage of your normal, say, 32K bpe - vocab. To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - """ - bs = ( - list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1)) - ) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8 + n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """ - Return set of symbol pairs in a word. Word is represented as tuple of symbols (symbols being variable-length - strings). - """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -class MarkupLMTokenizer(PreTrainedTokenizer): - r""" - Construct a MarkupLM tokenizer. Based on byte-level Byte-Pair-Encoding (BPE). [`MarkupLMTokenizer`] can be used to - turn HTML strings into to token-level `input_ids`, `attention_mask`, `token_type_ids`, `xpath_tags_seq` and - `xpath_tags_seq`. This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. - Users should refer to this superclass for more information regarding those methods. - - Args: - vocab_file (`str`): - Path to the vocabulary file. - merges_file (`str`): - Path to the merges file. - errors (`str`, *optional*, defaults to `"replace"`): - Paradigm to follow when decoding bytes to UTF-8. See - [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. - bos_token (`str`, *optional*, defaults to `""`): - The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. - - - - When building a sequence using special tokens, this is not the token that is used for the beginning of - sequence. The token used is the `cls_token`. - - - - eos_token (`str`, *optional*, defaults to `""`): - The end of sequence token. - - - - When building a sequence using special tokens, this is not the token that is used for the end of sequence. - The token used is the `sep_token`. - - - - sep_token (`str`, *optional*, defaults to `""`): - The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for - sequence classification or for a text and a question for question answering. It is also used as the last - token of a sequence built with special tokens. - cls_token (`str`, *optional*, defaults to `""`): - The classifier token which is used when doing sequence classification (classification of the whole sequence - instead of per-token classification). It is the first token of the sequence when built with special tokens. - unk_token (`str`, *optional*, defaults to `""`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - pad_token (`str`, *optional*, defaults to `""`): - The token used for padding, for example when batching sequences of different lengths. - mask_token (`str`, *optional*, defaults to `""`): - The token used for masking values. This is the token used when training this model with masked language - modeling. This is the token which the model will try to predict. - add_prefix_space (`bool`, *optional*, defaults to `False`): - Whether or not to add an initial space to the input. This allows to treat the leading word just as any - other word. 
(RoBERTa tokenizer detect beginning of words by the preceding space). - """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - - def __init__( - self, - vocab_file, - merges_file, - tags_dict, - errors="replace", - bos_token="", - eos_token="", - sep_token="", - cls_token="", - unk_token="", - pad_token="", - mask_token="", - add_prefix_space=False, - max_depth=50, - max_width=1000, - pad_width=1001, - pad_token_label=-100, - only_label_first_subword=True, - **kwargs, - ): - bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token - eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token - sep_token = AddedToken(sep_token, lstrip=False, rstrip=False) if isinstance(sep_token, str) else sep_token - cls_token = AddedToken(cls_token, lstrip=False, rstrip=False) if isinstance(cls_token, str) else cls_token - unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token - pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token - - # Mask token behave like a normal word, i.e. include the space before it - mask_token = AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token - - with open(vocab_file, encoding="utf-8") as vocab_handle: - self.encoder = json.load(vocab_handle) - - self.tags_dict = tags_dict - self.decoder = {v: k for k, v in self.encoder.items()} - self.errors = errors # how to handle errors in decoding - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - with open(merges_file, encoding="utf-8") as merges_handle: - bpe_merges = merges_handle.read().split("\n")[1:-1] - bpe_merges = [tuple(merge.split()) for merge in bpe_merges] - self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges)))) - self.cache = {} - self.add_prefix_space = add_prefix_space - - # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions - self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""") - - # additional properties - self.max_depth = max_depth - self.max_width = max_width - self.pad_width = pad_width - self.unk_tag_id = len(self.tags_dict) - self.pad_tag_id = self.unk_tag_id + 1 - self.pad_xpath_tags_seq = [self.pad_tag_id] * self.max_depth - self.pad_xpath_subs_seq = [self.pad_width] * self.max_depth - - super().__init__( - vocab_file=vocab_file, - merges_file=merges_file, - tags_dict=tags_dict, - errors=errors, - bos_token=bos_token, - eos_token=eos_token, - unk_token=unk_token, - sep_token=sep_token, - cls_token=cls_token, - pad_token=pad_token, - mask_token=mask_token, - add_prefix_space=add_prefix_space, - max_depth=max_depth, - max_width=max_width, - pad_width=pad_width, - pad_token_label=pad_token_label, - only_label_first_subword=only_label_first_subword, - **kwargs, - ) - - self.pad_token_label = pad_token_label - self.only_label_first_subword = only_label_first_subword - - def get_xpath_seq(self, xpath): - """ - Given the xpath expression of one particular node (like "/html/body/div/li[1]/div/span[2]"), return a list of - tag IDs and corresponding subscripts, taking into account max depth. 
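As a rough, self-contained sketch of that xpath-to-IDs mapping (the tag dictionary, depth and padding values below are illustrative stand-ins, not values shipped with any checkpoint):

```python
# Toy reimplementation of the idea: split an xpath into tag names and subscripts,
# map tags through a dictionary, clip subscripts, then truncate/pad to max_depth.
tags_dict = {"html": 0, "body": 1, "div": 2, "li": 3, "span": 4}   # illustrative
max_depth, max_width = 50, 1000
unk_tag_id = len(tags_dict)      # id for tags missing from tags_dict
pad_tag_id = unk_tag_id + 1      # id used to pad short xpaths
pad_width = 1001                 # subscript used for padding

def xpath_to_seq(xpath):
    tags, subs = [], []
    for unit in xpath.split("/"):
        if not unit.strip():
            continue
        name, _, rest = unit.strip().partition("[")
        sub = int(rest[:-1]) if rest else 0   # "li[1]" -> 1, "div" -> 0
        tags.append(tags_dict.get(name, unk_tag_id))
        subs.append(min(max_width, sub))
    tags, subs = tags[:max_depth], subs[:max_depth]
    tags += [pad_tag_id] * (max_depth - len(tags))
    subs += [pad_width] * (max_depth - len(subs))
    return tags, subs

tags, subs = xpath_to_seq("/html/body/div/li[1]/div/span[2]")
print(tags[:6])  # [0, 1, 2, 3, 2, 4]
print(subs[:6])  # [0, 0, 0, 1, 0, 2]
```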
- """ - xpath_tags_list = [] - xpath_subs_list = [] - - xpath_units = xpath.split("/") - for unit in xpath_units: - if not unit.strip(): - continue - name_subs = unit.strip().split("[") - tag_name = name_subs[0] - sub = 0 if len(name_subs) == 1 else int(name_subs[1][:-1]) - xpath_tags_list.append(self.tags_dict.get(tag_name, self.unk_tag_id)) - xpath_subs_list.append(min(self.max_width, sub)) - - xpath_tags_list = xpath_tags_list[: self.max_depth] - xpath_subs_list = xpath_subs_list[: self.max_depth] - xpath_tags_list += [self.pad_tag_id] * (self.max_depth - len(xpath_tags_list)) - xpath_subs_list += [self.pad_width] * (self.max_depth - len(xpath_subs_list)) - - return xpath_tags_list, xpath_subs_list - - @property - def vocab_size(self): - return len(self.encoder) - - def get_vocab(self): - vocab = self.encoder.copy() - vocab.update(self.added_tokens_encoder) - return vocab - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token) - pairs = get_pairs(word) - - if not pairs: - return token - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - except ValueError: - new_word.extend(word[i:]) - break - else: - new_word.extend(word[i:j]) - i = j - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = " ".join(word) - self.cache[token] = word - return word - - def _tokenize(self, text): - """Tokenize a string.""" - bpe_tokens = [] - for token in re.findall(self.pat, text): - token = "".join( - self.byte_encoder[b] for b in token.encode("utf-8") - ) # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case) - bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" ")) - return bpe_tokens - - def _convert_token_to_id(self, token): - """Converts a token (str) in an id using the vocab.""" - return self.encoder.get(token, self.encoder.get(self.unk_token)) - - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (str) using the vocab.""" - return self.decoder.get(index) - - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (string) in a single string.""" - logger.warning( - "MarkupLM now does not support generative tasks, decoding is experimental and subject to change." 
- ) - text = "".join(tokens) - text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors) - return text - - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - if not os.path.isdir(save_directory): - logger.error(f"Vocabulary path ({save_directory}) should be a directory") - return - vocab_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] - ) - merge_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"] - ) - - # save vocab_file - with open(vocab_file, "w", encoding="utf-8") as f: - f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n") - - # save merge_file - index = 0 - with open(merge_file, "w", encoding="utf-8") as writer: - writer.write("#version: 0.2\n") - for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]): - if index != token_index: - logger.warning( - f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive." - " Please check that the tokenizer is not corrupted!" - ) - index = token_index - writer.write(" ".join(bpe_tokens) + "\n") - index += 1 - - return vocab_file, merge_file - - def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs): - add_prefix_space = kwargs.pop("add_prefix_space", self.add_prefix_space) - if (is_split_into_words or add_prefix_space) and (len(text) > 0 and not text[0].isspace()): - text = " " + text - return (text, kwargs) - - def build_inputs_with_special_tokens( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and - adding special tokens. A RoBERTa sequence has the following format: - - single sequence: ` X ` - - pair of sequences: ` A B
                      ` - - Args: - token_ids_0 (`List[int]`): - List of IDs to which the special tokens will be added. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - Returns: - `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. - """ - if token_ids_1 is None: - return [self.cls_token_id] + token_ids_0 + [self.sep_token_id] - cls = [self.cls_token_id] - sep = [self.sep_token_id] - return cls + token_ids_0 + sep + token_ids_1 + sep - - def build_xpath_tags_with_special_tokens( - self, xpath_tags_0: List[int], xpath_tags_1: Optional[List[int]] = None - ) -> List[int]: - pad = [self.pad_xpath_tags_seq] - if len(xpath_tags_1) == 0: - return pad + xpath_tags_0 + pad - return pad + xpath_tags_0 + pad + xpath_tags_1 + pad - - def build_xpath_subs_with_special_tokens( - self, xpath_subs_0: List[int], xpath_subs_1: Optional[List[int]] = None - ) -> List[int]: - pad = [self.pad_xpath_subs_seq] - if len(xpath_subs_1) == 0: - return pad + xpath_subs_0 + pad - return pad + xpath_subs_0 + pad + xpath_subs_1 + pad - - def get_special_tokens_mask( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False - ) -> List[int]: - """ - Args: - Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding - special tokens using the tokenizer `prepare_for_model` method. - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - already_has_special_tokens (`bool`, *optional*, defaults to `False`): - Whether or not the token list is already formatted with special tokens for the model. - Returns: - `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. - """ - if already_has_special_tokens: - return super().get_special_tokens_mask( - token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True - ) - - if token_ids_1 is None: - return [1] + ([0] * len(token_ids_0)) + [1] - return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1] - - def create_token_type_ids_from_sequences( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not - make use of token type ids, therefore a list of zeros is returned. - - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - Returns: - `List[int]`: List of zeros. 
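For intuition, a minimal sketch of the special-token layout and the all-zero token type ids produced by the two methods above, using made-up token ids rather than real vocabulary entries:

```python
# Illustrative ids only: cls_token_id=0, sep_token_id=2 are not real vocab values.
cls_token_id, sep_token_id = 0, 2

def with_special_tokens(ids_0, ids_1=None):
    # single sequence: [CLS] A [SEP]; pair: [CLS] A [SEP] B [SEP]
    if ids_1 is None:
        return [cls_token_id] + ids_0 + [sep_token_id]
    return [cls_token_id] + ids_0 + [sep_token_id] + ids_1 + [sep_token_id]

def token_type_ids(ids_0, ids_1=None):
    # MarkupLM follows RoBERTa here: token type ids are all zeros
    return [0] * len(with_special_tokens(ids_0, ids_1))

print(with_special_tokens([10, 11], [20, 21, 22]))  # [0, 10, 11, 2, 20, 21, 22, 2]
print(token_type_ids([10, 11], [20, 21, 22]))       # [0, 0, 0, 0, 0, 0, 0, 0]
```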
- """ - sep = [self.sep_token_id] - cls = [self.cls_token_id] - - if token_ids_1 is None: - return len(cls + token_ids_0 + sep) * [0] - return len(cls + token_ids_0 + sep + token_ids_1 + sep) * [0] - - @add_end_docstrings(ENCODE_KWARGS_DOCSTRING, MARKUPLM_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING) - def __call__( - self, - text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]], - text_pair: Optional[Union[PreTokenizedInput, List[PreTokenizedInput]]] = None, - xpaths: Union[List[List[int]], List[List[List[int]]]] = None, - node_labels: Optional[Union[List[int], List[List[int]]]] = None, - add_special_tokens: bool = True, - padding: Union[bool, str, PaddingStrategy] = False, - truncation: Union[bool, str, TruncationStrategy] = None, - max_length: Optional[int] = None, - stride: int = 0, - pad_to_multiple_of: Optional[int] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - return_length: bool = False, - verbose: bool = True, - **kwargs, - ) -> BatchEncoding: - """ - Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of - sequences with node-level xpaths and optional labels. - - Args: - text (`str`, `List[str]`, `List[List[str]]`): - The sequence or batch of sequences to be encoded. Each sequence can be a string, a list of strings - (nodes of a single example or questions of a batch of examples) or a list of list of strings (batch of - nodes). - text_pair (`List[str]`, `List[List[str]]`): - The sequence or batch of sequences to be encoded. Each sequence should be a list of strings - (pretokenized string). - xpaths (`List[List[int]]`, `List[List[List[int]]]`): - Node-level xpaths. - node_labels (`List[int]`, `List[List[int]]`, *optional*): - Node-level integer labels (for token classification tasks). - """ - - # Input type checking for clearer error - def _is_valid_text_input(t): - if isinstance(t, str): - # Strings are fine - return True - elif isinstance(t, (list, tuple)): - # List are fine as long as they are... - if len(t) == 0: - # ... empty - return True - elif isinstance(t[0], str): - # ... list of strings - return True - elif isinstance(t[0], (list, tuple)): - # ... list with an empty list or with a list of strings - return len(t[0]) == 0 or isinstance(t[0][0], str) - else: - return False - else: - return False - - if text_pair is not None: - # in case text + text_pair are provided, text = questions, text_pair = nodes - if not _is_valid_text_input(text): - raise ValueError("text input must of type `str` (single example) or `List[str]` (batch of examples). ") - if not isinstance(text_pair, (list, tuple)): - raise ValueError( - "Nodes must be of type `List[str]` (single pretokenized example), " - "or `List[List[str]]` (batch of pretokenized examples)." - ) - else: - # in case only text is provided => must be nodes - if not isinstance(text, (list, tuple)): - raise ValueError( - "Nodes must be of type `List[str]` (single pretokenized example), " - "or `List[List[str]]` (batch of pretokenized examples)." 
- ) - - if text_pair is not None: - is_batched = isinstance(text, (list, tuple)) - else: - is_batched = isinstance(text, (list, tuple)) and text and isinstance(text[0], (list, tuple)) - - nodes = text if text_pair is None else text_pair - assert xpaths is not None, "You must provide corresponding xpaths" - if is_batched: - assert len(nodes) == len(xpaths), "You must provide nodes and xpaths for an equal amount of examples" - for nodes_example, xpaths_example in zip(nodes, xpaths): - assert len(nodes_example) == len(xpaths_example), "You must provide as many nodes as there are xpaths" - else: - assert len(nodes) == len(xpaths), "You must provide as many nodes as there are xpaths" - - if is_batched: - if text_pair is not None and len(text) != len(text_pair): - raise ValueError( - f"batch length of `text`: {len(text)} does not match batch length of `text_pair`:" - f" {len(text_pair)}." - ) - batch_text_or_text_pairs = list(zip(text, text_pair)) if text_pair is not None else text - is_pair = bool(text_pair is not None) - return self.batch_encode_plus( - batch_text_or_text_pairs=batch_text_or_text_pairs, - is_pair=is_pair, - xpaths=xpaths, - node_labels=node_labels, - add_special_tokens=add_special_tokens, - padding=padding, - truncation=truncation, - max_length=max_length, - stride=stride, - pad_to_multiple_of=pad_to_multiple_of, - return_tensors=return_tensors, - return_token_type_ids=return_token_type_ids, - return_attention_mask=return_attention_mask, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_offsets_mapping=return_offsets_mapping, - return_length=return_length, - verbose=verbose, - **kwargs, - ) - else: - return self.encode_plus( - text=text, - text_pair=text_pair, - xpaths=xpaths, - node_labels=node_labels, - add_special_tokens=add_special_tokens, - padding=padding, - truncation=truncation, - max_length=max_length, - stride=stride, - pad_to_multiple_of=pad_to_multiple_of, - return_tensors=return_tensors, - return_token_type_ids=return_token_type_ids, - return_attention_mask=return_attention_mask, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_offsets_mapping=return_offsets_mapping, - return_length=return_length, - verbose=verbose, - **kwargs, - ) - - @add_end_docstrings(ENCODE_KWARGS_DOCSTRING, MARKUPLM_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING) - def batch_encode_plus( - self, - batch_text_or_text_pairs: Union[ - List[TextInput], - List[TextInputPair], - List[PreTokenizedInput], - ], - is_pair: bool = None, - xpaths: Optional[List[List[List[int]]]] = None, - node_labels: Optional[Union[List[int], List[List[int]]]] = None, - add_special_tokens: bool = True, - padding: Union[bool, str, PaddingStrategy] = False, - truncation: Union[bool, str, TruncationStrategy] = None, - max_length: Optional[int] = None, - stride: int = 0, - pad_to_multiple_of: Optional[int] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - return_length: bool = False, - verbose: bool = True, - **kwargs, - ) -> BatchEncoding: - # Backward compatibility for 'truncation_strategy', 'pad_to_max_length' - padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( - padding=padding, - 
truncation=truncation, - max_length=max_length, - pad_to_multiple_of=pad_to_multiple_of, - verbose=verbose, - **kwargs, - ) - - return self._batch_encode_plus( - batch_text_or_text_pairs=batch_text_or_text_pairs, - is_pair=is_pair, - xpaths=xpaths, - node_labels=node_labels, - add_special_tokens=add_special_tokens, - padding_strategy=padding_strategy, - truncation_strategy=truncation_strategy, - max_length=max_length, - stride=stride, - pad_to_multiple_of=pad_to_multiple_of, - return_tensors=return_tensors, - return_token_type_ids=return_token_type_ids, - return_attention_mask=return_attention_mask, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_offsets_mapping=return_offsets_mapping, - return_length=return_length, - verbose=verbose, - **kwargs, - ) - - def _batch_encode_plus( - self, - batch_text_or_text_pairs: Union[ - List[TextInput], - List[TextInputPair], - List[PreTokenizedInput], - ], - is_pair: bool = None, - xpaths: Optional[List[List[List[int]]]] = None, - node_labels: Optional[List[List[int]]] = None, - add_special_tokens: bool = True, - padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, - truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, - max_length: Optional[int] = None, - stride: int = 0, - pad_to_multiple_of: Optional[int] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - return_length: bool = False, - verbose: bool = True, - **kwargs, - ) -> BatchEncoding: - if return_offsets_mapping: - raise NotImplementedError( - "return_offset_mapping is not available when using Python tokenizers. " - "To use this feature, change your tokenizer to one deriving from " - "transformers.PreTrainedTokenizerFast." 
- ) - - batch_outputs = self._batch_prepare_for_model( - batch_text_or_text_pairs=batch_text_or_text_pairs, - is_pair=is_pair, - xpaths=xpaths, - node_labels=node_labels, - add_special_tokens=add_special_tokens, - padding_strategy=padding_strategy, - truncation_strategy=truncation_strategy, - max_length=max_length, - stride=stride, - pad_to_multiple_of=pad_to_multiple_of, - return_attention_mask=return_attention_mask, - return_token_type_ids=return_token_type_ids, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_length=return_length, - return_tensors=return_tensors, - verbose=verbose, - ) - - return BatchEncoding(batch_outputs) - - @add_end_docstrings(ENCODE_KWARGS_DOCSTRING, MARKUPLM_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING) - def _batch_prepare_for_model( - self, - batch_text_or_text_pairs, - is_pair: bool = None, - xpaths: Optional[List[List[int]]] = None, - node_labels: Optional[List[List[int]]] = None, - add_special_tokens: bool = True, - padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, - truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, - max_length: Optional[int] = None, - stride: int = 0, - pad_to_multiple_of: Optional[int] = None, - return_tensors: Optional[str] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_length: bool = False, - verbose: bool = True, - ) -> BatchEncoding: - """ - Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model. It - adds special tokens, truncates sequences if overflowing while taking into account the special tokens and - manages a moving window (with user defined stride) for overflowing tokens. 
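A toy illustration of the collect-then-pad pattern this method follows (encode each example without padding, gather values per key, pad the whole batch afterwards); the ids and the pad value below are invented for the example:

```python
# Encode examples one by one, accumulate per key, then pad to the longest sequence.
pad_id = 1                                    # made-up padding id
examples = [[5, 6, 7], [8, 9], [3]]           # made-up "already tokenized" ids

batch_outputs = {}
for ids in examples:
    outputs = {"input_ids": ids, "attention_mask": [1] * len(ids)}
    for key, value in outputs.items():
        batch_outputs.setdefault(key, []).append(value)

# pad in batch afterwards, to the longest sequence
max_len = max(len(ids) for ids in batch_outputs["input_ids"])
for ids, mask in zip(batch_outputs["input_ids"], batch_outputs["attention_mask"]):
    mask.extend([0] * (max_len - len(mask)))
    ids.extend([pad_id] * (max_len - len(ids)))

print(batch_outputs["input_ids"])       # [[5, 6, 7], [8, 9, 1], [3, 1, 1]]
print(batch_outputs["attention_mask"])  # [[1, 1, 1], [1, 1, 0], [1, 0, 0]]
```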
- - Args: - batch_ids_pairs: list of tokenized input ids or input ids pairs - """ - - batch_outputs = {} - for idx, example in enumerate(zip(batch_text_or_text_pairs, xpaths)): - batch_text_or_text_pair, xpaths_example = example - outputs = self.prepare_for_model( - batch_text_or_text_pair[0] if is_pair else batch_text_or_text_pair, - batch_text_or_text_pair[1] if is_pair else None, - xpaths_example, - node_labels=node_labels[idx] if node_labels is not None else None, - add_special_tokens=add_special_tokens, - padding=PaddingStrategy.DO_NOT_PAD.value, # we pad in batch afterward - truncation=truncation_strategy.value, - max_length=max_length, - stride=stride, - pad_to_multiple_of=None, # we pad in batch afterward - return_attention_mask=False, # we pad in batch afterward - return_token_type_ids=return_token_type_ids, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_length=return_length, - return_tensors=None, # We convert the whole batch to tensors at the end - prepend_batch_axis=False, - verbose=verbose, - ) - - for key, value in outputs.items(): - if key not in batch_outputs: - batch_outputs[key] = [] - batch_outputs[key].append(value) - - batch_outputs = self.pad( - batch_outputs, - padding=padding_strategy.value, - max_length=max_length, - pad_to_multiple_of=pad_to_multiple_of, - return_attention_mask=return_attention_mask, - ) - - batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors) - - return batch_outputs - - @add_end_docstrings(ENCODE_KWARGS_DOCSTRING) - def encode( - self, - text: Union[TextInput, PreTokenizedInput], - text_pair: Optional[PreTokenizedInput] = None, - xpaths: Optional[List[List[int]]] = None, - node_labels: Optional[List[int]] = None, - add_special_tokens: bool = True, - padding: Union[bool, str, PaddingStrategy] = False, - truncation: Union[bool, str, TruncationStrategy] = None, - max_length: Optional[int] = None, - stride: int = 0, - pad_to_multiple_of: Optional[int] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - return_length: bool = False, - verbose: bool = True, - **kwargs, - ) -> List[int]: - encoded_inputs = self.encode_plus( - text=text, - text_pair=text_pair, - xpaths=xpaths, - node_labels=node_labels, - add_special_tokens=add_special_tokens, - padding=padding, - truncation=truncation, - max_length=max_length, - stride=stride, - pad_to_multiple_of=pad_to_multiple_of, - return_tensors=return_tensors, - return_token_type_ids=return_token_type_ids, - return_attention_mask=return_attention_mask, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_offsets_mapping=return_offsets_mapping, - return_length=return_length, - verbose=verbose, - **kwargs, - ) - - return encoded_inputs["input_ids"] - - @add_end_docstrings(ENCODE_KWARGS_DOCSTRING, MARKUPLM_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING) - def encode_plus( - self, - text: Union[TextInput, PreTokenizedInput], - text_pair: Optional[PreTokenizedInput] = None, - xpaths: Optional[List[List[int]]] = None, - node_labels: Optional[List[int]] = None, - add_special_tokens: bool = True, - padding: Union[bool, str, PaddingStrategy] = False, - truncation: Union[bool, str, TruncationStrategy] = None, - max_length: Optional[int] 
= None, - stride: int = 0, - pad_to_multiple_of: Optional[int] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - return_length: bool = False, - verbose: bool = True, - **kwargs, - ) -> BatchEncoding: - """ - Tokenize and prepare for the model a sequence or a pair of sequences. .. warning:: This method is deprecated, - `__call__` should be used instead. - - Args: - text (`str`, `List[str]`, `List[List[str]]`): - The first sequence to be encoded. This can be a string, a list of strings or a list of list of strings. - text_pair (`List[str]` or `List[int]`, *optional*): - Optional second sequence to be encoded. This can be a list of strings (nodes of a single example) or a - list of list of strings (nodes of a batch of examples). - """ - - # Backward compatibility for 'truncation_strategy', 'pad_to_max_length' - padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( - padding=padding, - truncation=truncation, - max_length=max_length, - pad_to_multiple_of=pad_to_multiple_of, - verbose=verbose, - **kwargs, - ) - - return self._encode_plus( - text=text, - xpaths=xpaths, - text_pair=text_pair, - node_labels=node_labels, - add_special_tokens=add_special_tokens, - padding_strategy=padding_strategy, - truncation_strategy=truncation_strategy, - max_length=max_length, - stride=stride, - pad_to_multiple_of=pad_to_multiple_of, - return_tensors=return_tensors, - return_token_type_ids=return_token_type_ids, - return_attention_mask=return_attention_mask, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_offsets_mapping=return_offsets_mapping, - return_length=return_length, - verbose=verbose, - **kwargs, - ) - - def _encode_plus( - self, - text: Union[TextInput, PreTokenizedInput], - text_pair: Optional[PreTokenizedInput] = None, - xpaths: Optional[List[List[int]]] = None, - node_labels: Optional[List[int]] = None, - add_special_tokens: bool = True, - padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, - truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, - max_length: Optional[int] = None, - stride: int = 0, - pad_to_multiple_of: Optional[int] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - return_length: bool = False, - verbose: bool = True, - **kwargs, - ) -> BatchEncoding: - if return_offsets_mapping: - raise NotImplementedError( - "return_offset_mapping is not available when using Python tokenizers. " - "To use this feature, change your tokenizer to one deriving from " - "transformers.PreTrainedTokenizerFast. 
" - "More information on available tokenizers at " - "https://github.com/huggingface/transformers/pull/2674" - ) - - return self.prepare_for_model( - text=text, - text_pair=text_pair, - xpaths=xpaths, - node_labels=node_labels, - add_special_tokens=add_special_tokens, - padding=padding_strategy.value, - truncation=truncation_strategy.value, - max_length=max_length, - stride=stride, - pad_to_multiple_of=pad_to_multiple_of, - return_tensors=return_tensors, - prepend_batch_axis=True, - return_attention_mask=return_attention_mask, - return_token_type_ids=return_token_type_ids, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_length=return_length, - verbose=verbose, - ) - - @add_end_docstrings(ENCODE_KWARGS_DOCSTRING, MARKUPLM_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING) - def prepare_for_model( - self, - text: Union[TextInput, PreTokenizedInput], - text_pair: Optional[PreTokenizedInput] = None, - xpaths: Optional[List[List[int]]] = None, - node_labels: Optional[List[int]] = None, - add_special_tokens: bool = True, - padding: Union[bool, str, PaddingStrategy] = False, - truncation: Union[bool, str, TruncationStrategy] = None, - max_length: Optional[int] = None, - stride: int = 0, - pad_to_multiple_of: Optional[int] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - return_length: bool = False, - verbose: bool = True, - prepend_batch_axis: bool = False, - **kwargs, - ) -> BatchEncoding: - """ - Prepares a sequence or a pair of sequences so that it can be used by the model. It adds special tokens, - truncates sequences if overflowing while taking into account the special tokens and manages a moving window - (with user defined stride) for overflowing tokens. Please Note, for *text_pair* different than `None` and - *truncation_strategy = longest_first* or `True`, it is not possible to return overflowing tokens. Such a - combination of arguments will raise an error. - - Node-level `xpaths` are turned into token-level `xpath_tags_seq` and `xpath_subs_seq`. If provided, node-level - `node_labels` are turned into token-level `labels`. The node label is used for the first token of the node, - while remaining tokens are labeled with -100, such that they will be ignored by the loss function. - - Args: - text (`str`, `List[str]`, `List[List[str]]`): - The first sequence to be encoded. This can be a string, a list of strings or a list of list of strings. - text_pair (`List[str]` or `List[int]`, *optional*): - Optional second sequence to be encoded. This can be a list of strings (nodes of a single example) or a - list of list of strings (nodes of a batch of examples). 
- """ - - # Backward compatibility for 'truncation_strategy', 'pad_to_max_length' - padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( - padding=padding, - truncation=truncation, - max_length=max_length, - pad_to_multiple_of=pad_to_multiple_of, - verbose=verbose, - **kwargs, - ) - - tokens = [] - pair_tokens = [] - xpath_tags_seq = [] - xpath_subs_seq = [] - pair_xpath_tags_seq = [] - pair_xpath_subs_seq = [] - labels = [] - - if text_pair is None: - if node_labels is None: - # CASE 1: web page classification (training + inference) + CASE 2: token classification (inference) - for word, xpath in zip(text, xpaths): - if len(word) < 1: # skip empty nodes - continue - word_tokens = self.tokenize(word) - tokens.extend(word_tokens) - xpath_tags_list, xpath_subs_list = self.get_xpath_seq(xpath) - xpath_tags_seq.extend([xpath_tags_list] * len(word_tokens)) - xpath_subs_seq.extend([xpath_subs_list] * len(word_tokens)) - else: - # CASE 2: token classification (training) - for word, xpath, label in zip(text, xpaths, node_labels): - if len(word) < 1: # skip empty nodes - continue - word_tokens = self.tokenize(word) - tokens.extend(word_tokens) - xpath_tags_list, xpath_subs_list = self.get_xpath_seq(xpath) - xpath_tags_seq.extend([xpath_tags_list] * len(word_tokens)) - xpath_subs_seq.extend([xpath_subs_list] * len(word_tokens)) - if self.only_label_first_subword: - # Use the real label id for the first token of the word, and padding ids for the remaining tokens - labels.extend([label] + [self.pad_token_label] * (len(word_tokens) - 1)) - else: - labels.extend([label] * len(word_tokens)) - else: - # CASE 3: web page question answering (inference) - # text = question - # text_pair = nodes - tokens = self.tokenize(text) - xpath_tags_seq = [self.pad_xpath_tags_seq for _ in range(len(tokens))] - xpath_subs_seq = [self.pad_xpath_subs_seq for _ in range(len(tokens))] - - for word, xpath in zip(text_pair, xpaths): - if len(word) < 1: # skip empty nodes - continue - word_tokens = self.tokenize(word) - pair_tokens.extend(word_tokens) - xpath_tags_list, xpath_subs_list = self.get_xpath_seq(xpath) - pair_xpath_tags_seq.extend([xpath_tags_list] * len(word_tokens)) - pair_xpath_subs_seq.extend([xpath_subs_list] * len(word_tokens)) - - # Create ids + pair_ids - ids = self.convert_tokens_to_ids(tokens) - pair_ids = self.convert_tokens_to_ids(pair_tokens) if pair_tokens else None - - if ( - return_overflowing_tokens - and truncation_strategy == TruncationStrategy.LONGEST_FIRST - and pair_ids is not None - ): - raise ValueError( - "Not possible to return overflowing tokens for pair of sequences with the " - "`longest_first`. Please select another truncation strategy than `longest_first`, " - "for instance `only_second` or `only_first`." 
- ) - - # Compute the total size of the returned encodings - pair = bool(pair_ids is not None) - len_ids = len(ids) - len_pair_ids = len(pair_ids) if pair else 0 - total_len = len_ids + len_pair_ids + (self.num_special_tokens_to_add(pair=pair) if add_special_tokens else 0) - - # Truncation: Handle max sequence length - overflowing_tokens = [] - overflowing_xpath_tags_seq = [] - overflowing_xpath_subs_seq = [] - overflowing_labels = [] - if truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE and max_length and total_len > max_length: - ( - ids, - xpath_tags_seq, - xpath_subs_seq, - pair_ids, - pair_xpath_tags_seq, - pair_xpath_subs_seq, - labels, - overflowing_tokens, - overflowing_xpath_tags_seq, - overflowing_xpath_subs_seq, - overflowing_labels, - ) = self.truncate_sequences( - ids, - xpath_tags_seq=xpath_tags_seq, - xpath_subs_seq=xpath_subs_seq, - pair_ids=pair_ids, - pair_xpath_tags_seq=pair_xpath_tags_seq, - pair_xpath_subs_seq=pair_xpath_subs_seq, - labels=labels, - num_tokens_to_remove=total_len - max_length, - truncation_strategy=truncation_strategy, - stride=stride, - ) - - if return_token_type_ids and not add_special_tokens: - raise ValueError( - "Asking to return token_type_ids while setting add_special_tokens to False " - "results in an undefined behavior. Please set add_special_tokens to True or " - "set return_token_type_ids to None." - ) - - # Load from model defaults - if return_token_type_ids is None: - return_token_type_ids = "token_type_ids" in self.model_input_names - if return_attention_mask is None: - return_attention_mask = "attention_mask" in self.model_input_names - - encoded_inputs = {} - - if return_overflowing_tokens: - encoded_inputs["overflowing_tokens"] = overflowing_tokens - encoded_inputs["overflowing_xpath_tags_seq"] = overflowing_xpath_tags_seq - encoded_inputs["overflowing_xpath_subs_seq"] = overflowing_xpath_subs_seq - encoded_inputs["overflowing_labels"] = overflowing_labels - encoded_inputs["num_truncated_tokens"] = total_len - max_length - - # Add special tokens - if add_special_tokens: - sequence = self.build_inputs_with_special_tokens(ids, pair_ids) - token_type_ids = self.create_token_type_ids_from_sequences(ids, pair_ids) - xpath_tags_ids = self.build_xpath_tags_with_special_tokens(xpath_tags_seq, pair_xpath_tags_seq) - xpath_subs_ids = self.build_xpath_subs_with_special_tokens(xpath_subs_seq, pair_xpath_subs_seq) - if labels: - labels = [self.pad_token_label] + labels + [self.pad_token_label] - else: - sequence = ids + pair_ids if pair else ids - token_type_ids = [0] * len(ids) + ([0] * len(pair_ids) if pair else []) - xpath_tags_ids = xpath_tags_seq + pair_xpath_tags_seq if pair else xpath_tags_seq - xpath_subs_ids = xpath_subs_seq + pair_xpath_subs_seq if pair else xpath_subs_seq - - # Build output dictionary - encoded_inputs["input_ids"] = sequence - encoded_inputs["xpath_tags_seq"] = xpath_tags_ids - encoded_inputs["xpath_subs_seq"] = xpath_subs_ids - if return_token_type_ids: - encoded_inputs["token_type_ids"] = token_type_ids - if return_special_tokens_mask: - if add_special_tokens: - encoded_inputs["special_tokens_mask"] = self.get_special_tokens_mask(ids, pair_ids) - else: - encoded_inputs["special_tokens_mask"] = [0] * len(sequence) - - if labels: - encoded_inputs["labels"] = labels - - # Check lengths - self._eventual_warn_about_too_long_sequence(encoded_inputs["input_ids"], max_length, verbose) - - # Padding - if padding_strategy != PaddingStrategy.DO_NOT_PAD or return_attention_mask: - encoded_inputs = self.pad( - 
encoded_inputs, - max_length=max_length, - padding=padding_strategy.value, - pad_to_multiple_of=pad_to_multiple_of, - return_attention_mask=return_attention_mask, - ) - - if return_length: - encoded_inputs["length"] = len(encoded_inputs["input_ids"]) - - batch_outputs = BatchEncoding( - encoded_inputs, tensor_type=return_tensors, prepend_batch_axis=prepend_batch_axis - ) - - return batch_outputs - - def truncate_sequences( - self, - ids: List[int], - xpath_tags_seq: List[List[int]], - xpath_subs_seq: List[List[int]], - pair_ids: Optional[List[int]] = None, - pair_xpath_tags_seq: Optional[List[List[int]]] = None, - pair_xpath_subs_seq: Optional[List[List[int]]] = None, - labels: Optional[List[int]] = None, - num_tokens_to_remove: int = 0, - truncation_strategy: Union[str, TruncationStrategy] = "longest_first", - stride: int = 0, - ) -> Tuple[List[int], List[int], List[int]]: - """ - Args: - Truncates a sequence pair in-place following the strategy. - ids (`List[int]`): - Tokenized input ids of the first sequence. Can be obtained from a string by chaining the `tokenize` and - `convert_tokens_to_ids` methods. - xpath_tags_seq (`List[List[int]]`): - XPath tag IDs of the first sequence. - xpath_subs_seq (`List[List[int]]`): - XPath sub IDs of the first sequence. - pair_ids (`List[int]`, *optional*): - Tokenized input ids of the second sequence. Can be obtained from a string by chaining the `tokenize` - and `convert_tokens_to_ids` methods. - pair_xpath_tags_seq (`List[List[int]]`, *optional*): - XPath tag IDs of the second sequence. - pair_xpath_subs_seq (`List[List[int]]`, *optional*): - XPath sub IDs of the second sequence. - num_tokens_to_remove (`int`, *optional*, defaults to 0): - Number of tokens to remove using the truncation strategy. - truncation_strategy (`str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*, defaults to - `False`): - The strategy to follow for truncation. Can be: - - `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the - maximum acceptable input length for the model if that argument is not provided. This will truncate - token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a - batch of pairs) is provided. - - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the - maximum acceptable input length for the model if that argument is not provided. This will only - truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the - maximum acceptable input length for the model if that argument is not provided. This will only - truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - - `'do_not_truncate'` (default): No truncation (i.e., can output batch with sequence lengths greater - than the model maximum admissible input size). - stride (`int`, *optional*, defaults to 0): - If set to a positive number, the overflowing tokens returned will contain some tokens from the main - sequence returned. The value of this argument defines the number of additional tokens. - Returns: - `Tuple[List[int], List[int], List[int]]`: The truncated `ids`, the truncated `pair_ids` and the list of - overflowing tokens. Note: The *longest_first* strategy returns empty list of overflowing tokens if a pair - of sequences (or a batch of pairs) is provided. 
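For intuition, here is a minimal standalone sketch of the tail-truncation-with-overflow behavior described above. The token ids are made up, and the real method additionally slices the aligned `xpath_tags_seq`, `xpath_subs_seq`, and `labels` lists in lockstep; this only illustrates the windowing arithmetic (`window_len = stride + num_tokens_to_remove`).

```python
# Minimal sketch of the tail-truncation + overflow-window logic (illustrative ids only).
def truncate_tail(ids, num_tokens_to_remove, stride=0):
    if num_tokens_to_remove <= 0:
        return ids, []
    window_len = min(len(ids), stride + num_tokens_to_remove)
    overflowing = ids[-window_len:]           # removed tail plus `stride` tokens of context
    return ids[:-num_tokens_to_remove], overflowing

kept, overflow = truncate_tail(list(range(10)), num_tokens_to_remove=3, stride=2)
print(kept)      # [0, 1, 2, 3, 4, 5, 6]
print(overflow)  # [5, 6, 7, 8, 9]
```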
- """ - if num_tokens_to_remove <= 0: - return ids, xpath_tags_seq, xpath_subs_seq, pair_ids, pair_xpath_tags_seq, pair_xpath_subs_seq, [], [], [] - - if not isinstance(truncation_strategy, TruncationStrategy): - truncation_strategy = TruncationStrategy(truncation_strategy) - - overflowing_tokens = [] - overflowing_xpath_tags_seq = [] - overflowing_xpath_subs_seq = [] - overflowing_labels = [] - if truncation_strategy == TruncationStrategy.ONLY_FIRST or ( - truncation_strategy == TruncationStrategy.LONGEST_FIRST and pair_ids is None - ): - if len(ids) > num_tokens_to_remove: - window_len = min(len(ids), stride + num_tokens_to_remove) - overflowing_tokens = ids[-window_len:] - overflowing_xpath_tags_seq = xpath_tags_seq[-window_len:] - overflowing_xpath_subs_seq = xpath_subs_seq[-window_len:] - ids = ids[:-num_tokens_to_remove] - xpath_tags_seq = xpath_tags_seq[:-num_tokens_to_remove] - xpath_subs_seq = xpath_subs_seq[:-num_tokens_to_remove] - labels = labels[:-num_tokens_to_remove] - else: - error_msg = ( - f"We need to remove {num_tokens_to_remove} to truncate the input " - f"but the first sequence has a length {len(ids)}. " - ) - if truncation_strategy == TruncationStrategy.ONLY_FIRST: - error_msg = ( - error_msg + "Please select another truncation strategy than " - f"{truncation_strategy}, for instance 'longest_first' or 'only_second'." - ) - logger.error(error_msg) - elif truncation_strategy == TruncationStrategy.LONGEST_FIRST: - logger.warning( - "Be aware, overflowing tokens are not returned for the setting you have chosen," - f" i.e. sequence pairs with the '{TruncationStrategy.LONGEST_FIRST.value}' " - "truncation strategy. So the returned list will always be empty even if some " - "tokens have been removed." - ) - for _ in range(num_tokens_to_remove): - if pair_ids is None or len(ids) > len(pair_ids): - ids = ids[:-1] - xpath_tags_seq = xpath_tags_seq[:-1] - xpath_subs_seq = xpath_subs_seq[:-1] - labels = labels[:-1] - else: - pair_ids = pair_ids[:-1] - pair_xpath_tags_seq = pair_xpath_tags_seq[:-1] - pair_xpath_subs_seq = pair_xpath_subs_seq[:-1] - elif truncation_strategy == TruncationStrategy.ONLY_SECOND and pair_ids is not None: - if len(pair_ids) > num_tokens_to_remove: - window_len = min(len(pair_ids), stride + num_tokens_to_remove) - overflowing_tokens = pair_ids[-window_len:] - overflowing_xpath_tags_seq = pair_xpath_tags_seq[-window_len:] - overflowing_xpath_subs_seq = pair_xpath_subs_seq[-window_len:] - pair_ids = pair_ids[:-num_tokens_to_remove] - pair_xpath_tags_seq = pair_xpath_tags_seq[:-num_tokens_to_remove] - pair_xpath_subs_seq = pair_xpath_subs_seq[:-num_tokens_to_remove] - else: - logger.error( - f"We need to remove {num_tokens_to_remove} to truncate the input " - f"but the second sequence has a length {len(pair_ids)}. " - f"Please select another truncation strategy than {truncation_strategy}, " - "for instance 'longest_first' or 'only_first'." 
- ) - - return ( - ids, - xpath_tags_seq, - xpath_subs_seq, - pair_ids, - pair_xpath_tags_seq, - pair_xpath_subs_seq, - labels, - overflowing_tokens, - overflowing_xpath_tags_seq, - overflowing_xpath_subs_seq, - overflowing_labels, - ) - - def _pad( - self, - encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], - max_length: Optional[int] = None, - padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, - pad_to_multiple_of: Optional[int] = None, - return_attention_mask: Optional[bool] = None, - ) -> dict: - """ - Args: - Pad encoded inputs (on left/right and up to predefined length or max length in the batch) - encoded_inputs: - Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). - max_length: maximum length of the returned list and optionally padding length (see below). - Will truncate by taking into account the special tokens. - padding_strategy: PaddingStrategy to use for padding. - - PaddingStrategy.LONGEST Pad to the longest sequence in the batch - - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) - - PaddingStrategy.DO_NOT_PAD: Do not pad - The tokenizer padding sides are defined in self.padding_side: - - 'left': pads on the left of the sequences - - 'right': pads on the right of the sequences - pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. - This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability - `>= 7.5` (Volta). - return_attention_mask: - (optional) Set to False to avoid returning attention mask (default: set to model specifics) - """ - # Load from model defaults - if return_attention_mask is None: - return_attention_mask = "attention_mask" in self.model_input_names - - required_input = encoded_inputs[self.model_input_names[0]] - - if padding_strategy == PaddingStrategy.LONGEST: - max_length = len(required_input) - - if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): - max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of - - needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length - - # Initialize attention mask if not present. 
- if return_attention_mask and "attention_mask" not in encoded_inputs: - encoded_inputs["attention_mask"] = [1] * len(required_input) - - if needs_to_be_padded: - difference = max_length - len(required_input) - if self.padding_side == "right": - if return_attention_mask: - encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference - if "token_type_ids" in encoded_inputs: - encoded_inputs["token_type_ids"] = ( - encoded_inputs["token_type_ids"] + [self.pad_token_type_id] * difference - ) - if "xpath_tags_seq" in encoded_inputs: - encoded_inputs["xpath_tags_seq"] = ( - encoded_inputs["xpath_tags_seq"] + [self.pad_xpath_tags_seq] * difference - ) - if "xpath_subs_seq" in encoded_inputs: - encoded_inputs["xpath_subs_seq"] = ( - encoded_inputs["xpath_subs_seq"] + [self.pad_xpath_subs_seq] * difference - ) - if "labels" in encoded_inputs: - encoded_inputs["labels"] = encoded_inputs["labels"] + [self.pad_token_label] * difference - if "special_tokens_mask" in encoded_inputs: - encoded_inputs["special_tokens_mask"] = encoded_inputs["special_tokens_mask"] + [1] * difference - encoded_inputs[self.model_input_names[0]] = required_input + [self.pad_token_id] * difference - elif self.padding_side == "left": - if return_attention_mask: - encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] - if "token_type_ids" in encoded_inputs: - encoded_inputs["token_type_ids"] = [self.pad_token_type_id] * difference + encoded_inputs[ - "token_type_ids" - ] - if "xpath_tags_seq" in encoded_inputs: - encoded_inputs["xpath_tags_seq"] = [self.pad_xpath_tags_seq] * difference + encoded_inputs[ - "xpath_tags_seq" - ] - if "xpath_subs_seq" in encoded_inputs: - encoded_inputs["xpath_subs_seq"] = [self.pad_xpath_subs_seq] * difference + encoded_inputs[ - "xpath_subs_seq" - ] - if "labels" in encoded_inputs: - encoded_inputs["labels"] = [self.pad_token_label] * difference + encoded_inputs["labels"] - if "special_tokens_mask" in encoded_inputs: - encoded_inputs["special_tokens_mask"] = [1] * difference + encoded_inputs["special_tokens_mask"] - encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input - else: - raise ValueError("Invalid padding strategy:" + str(self.padding_side)) - - return encoded_inputs diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn_fcos.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn_fcos.py deleted file mode 100644 index 17f2904ccad484f380b64efc668b9090d047d15e..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn_fcos.py +++ /dev/null @@ -1,469 +0,0 @@ -# This file is modified from https://github.com/aim-uofa/AdelaiDet/blob/master/adet/modeling/backbone/bifpn.py -# The original file is under 2-clause BSD License for academic use, and *non-commercial use*. 
-import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.layers import Conv2d, ShapeSpec, get_norm - -from detectron2.modeling.backbone import Backbone, build_resnet_backbone -from detectron2.modeling import BACKBONE_REGISTRY -from .dlafpn import dla34 - -__all__ = [] - - -def swish(x): - return x * x.sigmoid() - - -def split_name(name): - for i, c in enumerate(name): - if not c.isalpha(): - return name[:i], int(name[i:]) - raise ValueError() - - -class FeatureMapResampler(nn.Module): - def __init__(self, in_channels, out_channels, stride, norm=""): - super(FeatureMapResampler, self).__init__() - if in_channels != out_channels: - self.reduction = Conv2d( - in_channels, out_channels, kernel_size=1, - bias=(norm == ""), - norm=get_norm(norm, out_channels), - activation=None - ) - else: - self.reduction = None - - assert stride <= 2 - self.stride = stride - - def forward(self, x): - if self.reduction is not None: - x = self.reduction(x) - - if self.stride == 2: - x = F.max_pool2d( - x, kernel_size=self.stride + 1, - stride=self.stride, padding=1 - ) - elif self.stride == 1: - pass - else: - raise NotImplementedError() - return x - - -class BackboneWithTopLevels(Backbone): - def __init__(self, backbone, out_channels, num_top_levels, norm=""): - super(BackboneWithTopLevels, self).__init__() - self.backbone = backbone - backbone_output_shape = backbone.output_shape() - - self._out_feature_channels = {name: shape.channels for name, shape in backbone_output_shape.items()} - self._out_feature_strides = {name: shape.stride for name, shape in backbone_output_shape.items()} - self._out_features = list(self._out_feature_strides.keys()) - - last_feature_name = max(self._out_feature_strides.keys(), key=lambda x: split_name(x)[1]) - self.last_feature_name = last_feature_name - self.num_top_levels = num_top_levels - - last_channels = self._out_feature_channels[last_feature_name] - last_stride = self._out_feature_strides[last_feature_name] - - prefix, suffix = split_name(last_feature_name) - prev_channels = last_channels - for i in range(num_top_levels): - name = prefix + str(suffix + i + 1) - self.add_module(name, FeatureMapResampler( - prev_channels, out_channels, 2, norm - )) - prev_channels = out_channels - - self._out_feature_channels[name] = out_channels - self._out_feature_strides[name] = last_stride * 2 ** (i + 1) - self._out_features.append(name) - - def forward(self, x): - outputs = self.backbone(x) - last_features = outputs[self.last_feature_name] - prefix, suffix = split_name(self.last_feature_name) - - x = last_features - for i in range(self.num_top_levels): - name = prefix + str(suffix + i + 1) - x = self.__getattr__(name)(x) - outputs[name] = x - - return outputs - - -class SingleBiFPN(Backbone): - """ - This module implements Feature Pyramid Network. - It creates pyramid features built on top of some input feature maps. - """ - - def __init__( - self, in_channels_list, out_channels, norm="" - ): - """ - Args: - bottom_up (Backbone): module representing the bottom up subnetwork. - Must be a subclass of :class:`Backbone`. The multi-scale feature - maps generated by the bottom up network, and listed in `in_features`, - are used to generate FPN levels. - in_features (list[str]): names of the input feature maps coming - from the backbone to which FPN is attached. For example, if the - backbone produces ["res2", "res3", "res4"], any *contiguous* sublist - of these may be used; order must be from high to low resolution. 
- out_channels (int): number of channels in the output feature maps. - norm (str): the normalization to use. - """ - super(SingleBiFPN, self).__init__() - - self.out_channels = out_channels - # build 5-levels bifpn - if len(in_channels_list) == 5: - self.nodes = [ - {'feat_level': 3, 'inputs_offsets': [3, 4]}, - {'feat_level': 2, 'inputs_offsets': [2, 5]}, - {'feat_level': 1, 'inputs_offsets': [1, 6]}, - {'feat_level': 0, 'inputs_offsets': [0, 7]}, - {'feat_level': 1, 'inputs_offsets': [1, 7, 8]}, - {'feat_level': 2, 'inputs_offsets': [2, 6, 9]}, - {'feat_level': 3, 'inputs_offsets': [3, 5, 10]}, - {'feat_level': 4, 'inputs_offsets': [4, 11]}, - ] - elif len(in_channels_list) == 3: - self.nodes = [ - {'feat_level': 1, 'inputs_offsets': [1, 2]}, - {'feat_level': 0, 'inputs_offsets': [0, 3]}, - {'feat_level': 1, 'inputs_offsets': [1, 3, 4]}, - {'feat_level': 2, 'inputs_offsets': [2, 5]}, - ] - else: - raise NotImplementedError - - node_info = [_ for _ in in_channels_list] - - num_output_connections = [0 for _ in in_channels_list] - for fnode in self.nodes: - feat_level = fnode["feat_level"] - inputs_offsets = fnode["inputs_offsets"] - inputs_offsets_str = "_".join(map(str, inputs_offsets)) - for input_offset in inputs_offsets: - num_output_connections[input_offset] += 1 - - in_channels = node_info[input_offset] - if in_channels != out_channels: - lateral_conv = Conv2d( - in_channels, - out_channels, - kernel_size=1, - norm=get_norm(norm, out_channels) - ) - self.add_module( - "lateral_{}_f{}".format(input_offset, feat_level), lateral_conv - ) - node_info.append(out_channels) - num_output_connections.append(0) - - # generate attention weights - name = "weights_f{}_{}".format(feat_level, inputs_offsets_str) - self.__setattr__(name, nn.Parameter( - torch.ones(len(inputs_offsets), dtype=torch.float32), - requires_grad=True - )) - - # generate convolutions after combination - name = "outputs_f{}_{}".format(feat_level, inputs_offsets_str) - self.add_module(name, Conv2d( - out_channels, - out_channels, - kernel_size=3, - padding=1, - norm=get_norm(norm, out_channels), - bias=(norm == "") - )) - - def forward(self, feats): - """ - Args: - input (dict[str->Tensor]): mapping feature map name (e.g., "p5") to - feature map tensor for each feature level in high to low resolution order. - Returns: - dict[str->Tensor]: - mapping from feature map name to FPN feature map tensor - in high to low resolution order. Returned feature names follow the FPN - paper convention: "p", where stage has stride = 2 ** stage e.g., - ["n2", "n3", ..., "n6"]. 
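As a rough illustration of how the `inputs_offsets` in the node lists above are resolved: offsets below `len(in_channels_list)` point at the original backbone levels, and every fused node is appended to the running feature list, so later nodes can consume earlier fusion results. The sketch below is plain Python with made-up level names (`p3`/`p4`/`p5`) for the 3-level configuration, not the module itself.

```python
# Walk the 3-level BiFPN topology and show which features feed each fused node.
nodes = [
    {"feat_level": 1, "inputs_offsets": [1, 2]},
    {"feat_level": 0, "inputs_offsets": [0, 3]},
    {"feat_level": 1, "inputs_offsets": [1, 3, 4]},
    {"feat_level": 2, "inputs_offsets": [2, 5]},
]
feats = ["p3", "p4", "p5"]  # illustrative names for the three backbone levels
for node in nodes:
    inputs = [feats[i] for i in node["inputs_offsets"]]
    fused = f"fuse({'+'.join(inputs)})@level{node['feat_level']}"
    feats.append(fused)      # appended so later nodes can reference it by offset
    print(fused)
```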
- """ - feats = [_ for _ in feats] - num_levels = len(feats) - num_output_connections = [0 for _ in feats] - for fnode in self.nodes: - feat_level = fnode["feat_level"] - inputs_offsets = fnode["inputs_offsets"] - inputs_offsets_str = "_".join(map(str, inputs_offsets)) - input_nodes = [] - _, _, target_h, target_w = feats[feat_level].size() - for input_offset in inputs_offsets: - num_output_connections[input_offset] += 1 - input_node = feats[input_offset] - - # reduction - if input_node.size(1) != self.out_channels: - name = "lateral_{}_f{}".format(input_offset, feat_level) - input_node = self.__getattr__(name)(input_node) - - # maybe downsample - _, _, h, w = input_node.size() - if h > target_h and w > target_w: - height_stride_size = int((h - 1) // target_h + 1) - width_stride_size = int((w - 1) // target_w + 1) - assert height_stride_size == width_stride_size == 2 - input_node = F.max_pool2d( - input_node, kernel_size=(height_stride_size + 1, width_stride_size + 1), - stride=(height_stride_size, width_stride_size), padding=1 - ) - elif h <= target_h and w <= target_w: - if h < target_h or w < target_w: - input_node = F.interpolate( - input_node, - size=(target_h, target_w), - mode="nearest" - ) - else: - raise NotImplementedError() - input_nodes.append(input_node) - - # attention - name = "weights_f{}_{}".format(feat_level, inputs_offsets_str) - weights = F.relu(self.__getattr__(name)) - norm_weights = weights / (weights.sum() + 0.0001) - - new_node = torch.stack(input_nodes, dim=-1) - new_node = (norm_weights * new_node).sum(dim=-1) - new_node = swish(new_node) - - name = "outputs_f{}_{}".format(feat_level, inputs_offsets_str) - feats.append(self.__getattr__(name)(new_node)) - - num_output_connections.append(0) - - output_feats = [] - for idx in range(num_levels): - for i, fnode in enumerate(reversed(self.nodes)): - if fnode['feat_level'] == idx: - output_feats.append(feats[-1 - i]) - break - else: - raise ValueError() - return output_feats - - -class BiFPN(Backbone): - """ - This module implements Feature Pyramid Network. - It creates pyramid features built on top of some input feature maps. - """ - - def __init__( - self, bottom_up, in_features, out_channels, num_top_levels, num_repeats, norm="" - ): - """ - Args: - bottom_up (Backbone): module representing the bottom up subnetwork. - Must be a subclass of :class:`Backbone`. The multi-scale feature - maps generated by the bottom up network, and listed in `in_features`, - are used to generate FPN levels. - in_features (list[str]): names of the input feature maps coming - from the backbone to which FPN is attached. For example, if the - backbone produces ["res2", "res3", "res4"], any *contiguous* sublist - of these may be used; order must be from high to low resolution. - out_channels (int): number of channels in the output feature maps. - num_top_levels (int): the number of the top levels (p6 or p7). - num_repeats (int): the number of repeats of BiFPN. - norm (str): the normalization to use. 
- """ - super(BiFPN, self).__init__() - assert isinstance(bottom_up, Backbone) - - # add extra feature levels (i.e., 6 and 7) - self.bottom_up = BackboneWithTopLevels( - bottom_up, out_channels, - num_top_levels, norm - ) - bottom_up_output_shapes = self.bottom_up.output_shape() - - in_features = sorted(in_features, key=lambda x: split_name(x)[1]) - self._size_divisibility = 128 #bottom_up_output_shapes[in_features[-1]].stride - self.out_channels = out_channels - self.min_level = split_name(in_features[0])[1] - - # add the names for top blocks - prefix, last_suffix = split_name(in_features[-1]) - for i in range(num_top_levels): - in_features.append(prefix + str(last_suffix + i + 1)) - self.in_features = in_features - - # generate output features - self._out_features = ["p{}".format(split_name(name)[1]) for name in in_features] - self._out_feature_strides = { - out_name: bottom_up_output_shapes[in_name].stride - for out_name, in_name in zip(self._out_features, in_features) - } - self._out_feature_channels = {k: out_channels for k in self._out_features} - - # build bifpn - self.repeated_bifpn = nn.ModuleList() - for i in range(num_repeats): - if i == 0: - in_channels_list = [ - bottom_up_output_shapes[name].channels for name in in_features - ] - else: - in_channels_list = [ - self._out_feature_channels[name] for name in self._out_features - ] - self.repeated_bifpn.append(SingleBiFPN( - in_channels_list, out_channels, norm - )) - - @property - def size_divisibility(self): - return self._size_divisibility - - def forward(self, x): - """ - Args: - input (dict[str->Tensor]): mapping feature map name (e.g., "p5") to - feature map tensor for each feature level in high to low resolution order. - Returns: - dict[str->Tensor]: - mapping from feature map name to FPN feature map tensor - in high to low resolution order. Returned feature names follow the FPN - paper convention: "p", where stage has stride = 2 ** stage e.g., - ["n2", "n3", ..., "n6"]. - """ - bottom_up_features = self.bottom_up(x) - feats = [bottom_up_features[f] for f in self.in_features] - - for bifpn in self.repeated_bifpn: - feats = bifpn(feats) - - return dict(zip(self._out_features, feats)) - - -def _assert_strides_are_log2_contiguous(strides): - """ - Assert that each stride is 2x times its preceding stride, i.e. "contiguous in log2". - """ - for i, stride in enumerate(strides[1:], 1): - assert stride == 2 * strides[i - 1], "Strides {} {} are not log2 contiguous".format( - stride, strides[i - 1] - ) - - -@BACKBONE_REGISTRY.register() -def build_fcos_resnet_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = build_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.BIFPN.OUT_CHANNELS - num_repeats = cfg.MODEL.BIFPN.NUM_BIFPN - top_levels = 2 - - backbone = BiFPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - num_top_levels=top_levels, - num_repeats=num_repeats, - norm=cfg.MODEL.BIFPN.NORM - ) - return backbone - - - -@BACKBONE_REGISTRY.register() -def build_p35_fcos_resnet_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. 
- """ - bottom_up = build_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.BIFPN.OUT_CHANNELS - num_repeats = cfg.MODEL.BIFPN.NUM_BIFPN - top_levels = 0 - - backbone = BiFPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - num_top_levels=top_levels, - num_repeats=num_repeats, - norm=cfg.MODEL.BIFPN.NORM - ) - return backbone - - -@BACKBONE_REGISTRY.register() -def build_p35_fcos_dla_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = dla34(cfg) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.BIFPN.OUT_CHANNELS - num_repeats = cfg.MODEL.BIFPN.NUM_BIFPN - top_levels = 0 - - backbone = BiFPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - num_top_levels=top_levels, - num_repeats=num_repeats, - norm=cfg.MODEL.BIFPN.NORM - ) - return backbone - -@BACKBONE_REGISTRY.register() -def build_p37_fcos_dla_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = dla34(cfg) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.BIFPN.OUT_CHANNELS - num_repeats = cfg.MODEL.BIFPN.NUM_BIFPN - assert cfg.MODEL.BIFPN.NUM_LEVELS == 5 - top_levels = 2 - - backbone = BiFPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - num_top_levels=top_levels, - num_repeats=num_repeats, - norm=cfg.MODEL.BIFPN.NORM - ) - return backbone \ No newline at end of file diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/lib/walk.js b/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/lib/walk.js deleted file mode 100644 index 7666c5b13e012cb7ad7653d18aa72a7d744d647b..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/lib/walk.js +++ /dev/null @@ -1,22 +0,0 @@ -module.exports = function walk(nodes, cb, bubble) { - var i, max, node, result; - - for (i = 0, max = nodes.length; i < max; i += 1) { - node = nodes[i]; - if (!bubble) { - result = cb(node, i, nodes); - } - - if ( - result !== false && - node.type === "function" && - Array.isArray(node.nodes) - ) { - walk(node.nodes, cb, bubble); - } - - if (bubble) { - cb(node, i, nodes); - } - } -}; diff --git a/spaces/ysharma/LLaVA_v1/docs/Data.md b/spaces/ysharma/LLaVA_v1/docs/Data.md deleted file mode 100644 index 84807ec252858cd78bf96b3fce6f42f66b20126f..0000000000000000000000000000000000000000 --- a/spaces/ysharma/LLaVA_v1/docs/Data.md +++ /dev/null @@ -1,29 +0,0 @@ -## Data - -| Data file name | Size | -| --- | ---: | -| [llava_instruct_150k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_150k.json) | 229 MB | -| [llava_instruct_80k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_80k.json) | 229 MB | -| [conversation_58k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/conversation_58k.json) | 126 MB | -| [detail_23k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/detail_23k.json) | 20.5 MB | -| [complex_reasoning_77k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/complex_reasoning_77k.json) | 79.6 MB | - -### 
Pretraining Dataset -The pretraining dataset used in this release is a subset of CC-3M dataset, filtered with a more balanced concept coverage distribution. Please see [here](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K) for a detailed description of the dataset structure and how to download the images. - -If you already have CC-3M dataset on your disk, the image names follow this format: `GCC_train_000000000.jpg`. You may edit the `image` field correspondingly if necessary. - -| Data | Chat File | Meta Data | Size | -| --- | --- | --- | ---: | -| CC-3M Concept-balanced 595K | [chat.json](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K/blob/main/chat.json) | [metadata.json](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K/blob/main/metadata.json) | 211 MB -| LAION/CC/SBU BLIP-Caption Concept-balanced 558K | [blip_laion_cc_sbu_558k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/blob/main/blip_laion_cc_sbu_558k.json) | [metadata.json](#) | 181 MB - -**Important notice**: Upon the request from the community, as ~15% images of the original CC-3M dataset are no longer accessible, we upload [`images.zip`](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K/blob/main/images.zip) for better reproducing our work in research community. It must not be used for any other purposes. The use of these images must comply with the CC-3M license. This may be taken down at any time when requested by the original CC-3M dataset owner or owners of the referenced images. - -### GPT-4 Prompts - -We provide our prompts and few-shot samples for GPT-4 queries, to better facilitate research in this domain. Please check out the [`prompts`](playground/data/prompts) folder for three kinds of questions: conversation, detail description, and complex reasoning. - -They are organized in a format of `system_message.txt` for system message, pairs of `abc_caps.txt` for few-shot sample user input, and `abc_conv.txt` for few-shot sample reference output. - -Note that you may find them in different format. For example, `conversation` is in `jsonl`, and detail description is answer-only. The selected format in our preliminary experiments works slightly better than a limited set of alternatives that we tried: `jsonl`, more natural format, answer-only. If interested, you may try other variants or conduct more careful study in this. Contributions are welcomed! diff --git a/spaces/yueranseo/mygpt/Dockerfile b/spaces/yueranseo/mygpt/Dockerfile deleted file mode 100644 index 335c2dba28ba8c365de9306858462a59dea25f28..0000000000000000000000000000000000000000 --- a/spaces/yueranseo/mygpt/Dockerfile +++ /dev/null @@ -1,15 +0,0 @@ -FROM python:3.9 as builder -RUN apt-get update && apt-get install -y build-essential -COPY requirements.txt . -COPY requirements_advanced.txt . -RUN pip install --user -r requirements.txt -# RUN pip install --user -r requirements_advanced.txt - -FROM python:3.9 -MAINTAINER iskoldt -COPY --from=builder /root/.local /root/.local -ENV PATH=/root/.local/bin:$PATH -COPY . 
/app -WORKDIR /app -ENV dockerrun yes -CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"] diff --git a/spaces/yuhe6/final_project/README.md b/spaces/yuhe6/final_project/README.md deleted file mode 100644 index e6d0fe96e7a5d167ab8b5b0f76da3a44114a0444..0000000000000000000000000000000000000000 --- a/spaces/yuhe6/final_project/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Final_project -emoji: 🔥 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/yuragoithf/mlg_personal_info_remover/README.md b/spaces/yuragoithf/mlg_personal_info_remover/README.md deleted file mode 100644 index 1918a069d65f61bf36a4651a7e58c2f96c74f1cf..0000000000000000000000000000000000000000 --- a/spaces/yuragoithf/mlg_personal_info_remover/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mlg Personal Info Remover -emoji: 🔥 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/llama_lora/ui/js_scripts.py b/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/llama_lora/ui/js_scripts.py deleted file mode 100644 index 001e3a7ecac6a8f20a00f997acedfd36b6aa0548..0000000000000000000000000000000000000000 --- a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/llama_lora/ui/js_scripts.py +++ /dev/null @@ -1,12 +0,0 @@ -def popperjs_core_code(): - code = """ - !function(e,t){"object"==typeof exports&&"undefined"!=typeof module?t(exports):"function"==typeof define&&define.amd?define(["exports"],t):t((e="undefined"!=typeof globalThis?globalThis:e||self).Popper={})}(this,(function(e){"use strict";function t(e){if(null==e)return window;if("[object Window]"!==e.toString()){var t=e.ownerDocument;return t&&t.defaultView||window}return e}function n(e){return e instanceof t(e).Element||e instanceof Element}function r(e){return e instanceof t(e).HTMLElement||e instanceof HTMLElement}function o(e){return"undefined"!=typeof ShadowRoot&&(e instanceof t(e).ShadowRoot||e instanceof ShadowRoot)}var i=Math.max,a=Math.min,s=Math.round;function f(){var e=navigator.userAgentData;return null!=e&&e.brands&&Array.isArray(e.brands)?e.brands.map((function(e){return e.brand+"/"+e.version})).join(" "):navigator.userAgent}function c(){return!/^((?!chrome|android).)*safari/i.test(f())}function p(e,o,i){void 0===o&&(o=!1),void 0===i&&(i=!1);var a=e.getBoundingClientRect(),f=1,p=1;o&&r(e)&&(f=e.offsetWidth>0&&s(a.width)/e.offsetWidth||1,p=e.offsetHeight>0&&s(a.height)/e.offsetHeight||1);var u=(n(e)?t(e):window).visualViewport,l=!c()&&i,d=(a.left+(l&&u?u.offsetLeft:0))/f,h=(a.top+(l&&u?u.offsetTop:0))/p,m=a.width/f,v=a.height/p;return{width:m,height:v,top:h,right:d+m,bottom:h+v,left:d,x:d,y:h}}function u(e){var n=t(e);return{scrollLeft:n.pageXOffset,scrollTop:n.pageYOffset}}function l(e){return e?(e.nodeName||"").toLowerCase():null}function d(e){return((n(e)?e.ownerDocument:e.document)||window.document).documentElement}function h(e){return p(d(e)).left+u(e).scrollLeft}function m(e){return t(e).getComputedStyle(e)}function v(e){var t=m(e),n=t.overflow,r=t.overflowX,o=t.overflowY;return/auto|scroll|overlay|hidden/.test(n+o+r)}function y(e,n,o){void 0===o&&(o=!1);var i,a,f=r(n),c=r(n)&&function(e){var 
t=e.getBoundingClientRect(),n=s(t.width)/e.offsetWidth||1,r=s(t.height)/e.offsetHeight||1;return 1!==n||1!==r}(n),m=d(n),y=p(e,c,o),g={scrollLeft:0,scrollTop:0},b={x:0,y:0};return(f||!f&&!o)&&(("body"!==l(n)||v(m))&&(g=(i=n)!==t(i)&&r(i)?{scrollLeft:(a=i).scrollLeft,scrollTop:a.scrollTop}:u(i)),r(n)?((b=p(n,!0)).x+=n.clientLeft,b.y+=n.clientTop):m&&(b.x=h(m))),{x:y.left+g.scrollLeft-b.x,y:y.top+g.scrollTop-b.y,width:y.width,height:y.height}}function g(e){var t=p(e),n=e.offsetWidth,r=e.offsetHeight;return Math.abs(t.width-n)<=1&&(n=t.width),Math.abs(t.height-r)<=1&&(r=t.height),{x:e.offsetLeft,y:e.offsetTop,width:n,height:r}}function b(e){return"html"===l(e)?e:e.assignedSlot||e.parentNode||(o(e)?e.host:null)||d(e)}function x(e){return["html","body","#document"].indexOf(l(e))>=0?e.ownerDocument.body:r(e)&&v(e)?e:x(b(e))}function w(e,n){var r;void 0===n&&(n=[]);var o=x(e),i=o===(null==(r=e.ownerDocument)?void 0:r.body),a=t(o),s=i?[a].concat(a.visualViewport||[],v(o)?o:[]):o,f=n.concat(s);return i?f:f.concat(w(b(s)))}function O(e){return["table","td","th"].indexOf(l(e))>=0}function j(e){return r(e)&&"fixed"!==m(e).position?e.offsetParent:null}function E(e){for(var n=t(e),i=j(e);i&&O(i)&&"static"===m(i).position;)i=j(i);return i&&("html"===l(i)||"body"===l(i)&&"static"===m(i).position)?n:i||function(e){var t=/firefox/i.test(f());if(/Trident/i.test(f())&&r(e)&&"fixed"===m(e).position)return null;var n=b(e);for(o(n)&&(n=n.host);r(n)&&["html","body"].indexOf(l(n))<0;){var i=m(n);if("none"!==i.transform||"none"!==i.perspective||"paint"===i.contain||-1!==["transform","perspective"].indexOf(i.willChange)||t&&"filter"===i.willChange||t&&i.filter&&"none"!==i.filter)return n;n=n.parentNode}return null}(e)||n}var D="top",A="bottom",L="right",P="left",M="auto",k=[D,A,L,P],W="start",B="end",H="viewport",T="popper",R=k.reduce((function(e,t){return e.concat([t+"-"+W,t+"-"+B])}),[]),S=[].concat(k,[M]).reduce((function(e,t){return e.concat([t,t+"-"+W,t+"-"+B])}),[]),V=["beforeRead","read","afterRead","beforeMain","main","afterMain","beforeWrite","write","afterWrite"];function q(e){var t=new Map,n=new Set,r=[];function o(e){n.add(e.name),[].concat(e.requires||[],e.requiresIfExists||[]).forEach((function(e){if(!n.has(e)){var r=t.get(e);r&&o(r)}})),r.push(e)}return e.forEach((function(e){t.set(e.name,e)})),e.forEach((function(e){n.has(e.name)||o(e)})),r}function C(e){return e.split("-")[0]}function N(e,t){var n=t.getRootNode&&t.getRootNode();if(e.contains(t))return!0;if(n&&o(n)){var r=t;do{if(r&&e.isSameNode(r))return!0;r=r.parentNode||r.host}while(r)}return!1}function I(e){return Object.assign({},e,{left:e.x,top:e.y,right:e.x+e.width,bottom:e.y+e.height})}function _(e,r,o){return r===H?I(function(e,n){var r=t(e),o=d(e),i=r.visualViewport,a=o.clientWidth,s=o.clientHeight,f=0,p=0;if(i){a=i.width,s=i.height;var u=c();(u||!u&&"fixed"===n)&&(f=i.offsetLeft,p=i.offsetTop)}return{width:a,height:s,x:f+h(e),y:p}}(e,o)):n(r)?function(e,t){var n=p(e,!1,"fixed"===t);return n.top=n.top+e.clientTop,n.left=n.left+e.clientLeft,n.bottom=n.top+e.clientHeight,n.right=n.left+e.clientWidth,n.width=e.clientWidth,n.height=e.clientHeight,n.x=n.left,n.y=n.top,n}(r,o):I(function(e){var t,n=d(e),r=u(e),o=null==(t=e.ownerDocument)?void 0:t.body,a=i(n.scrollWidth,n.clientWidth,o?o.scrollWidth:0,o?o.clientWidth:0),s=i(n.scrollHeight,n.clientHeight,o?o.scrollHeight:0,o?o.clientHeight:0),f=-r.scrollLeft+h(e),c=-r.scrollTop;return"rtl"===m(o||n).direction&&(f+=i(n.clientWidth,o?o.clientWidth:0)-a),{width:a,height:s,x:f,y:c}}(d(e)))}function 
F(e,t,o,s){var f="clippingParents"===t?function(e){var t=w(b(e)),o=["absolute","fixed"].indexOf(m(e).position)>=0&&r(e)?E(e):e;return n(o)?t.filter((function(e){return n(e)&&N(e,o)&&"body"!==l(e)})):[]}(e):[].concat(t),c=[].concat(f,[o]),p=c[0],u=c.reduce((function(t,n){var r=_(e,n,s);return t.top=i(r.top,t.top),t.right=a(r.right,t.right),t.bottom=a(r.bottom,t.bottom),t.left=i(r.left,t.left),t}),_(e,p,s));return u.width=u.right-u.left,u.height=u.bottom-u.top,u.x=u.left,u.y=u.top,u}function U(e){return e.split("-")[1]}function z(e){return["top","bottom"].indexOf(e)>=0?"x":"y"}function X(e){var t,n=e.reference,r=e.element,o=e.placement,i=o?C(o):null,a=o?U(o):null,s=n.x+n.width/2-r.width/2,f=n.y+n.height/2-r.height/2;switch(i){case D:t={x:s,y:n.y-r.height};break;case A:t={x:s,y:n.y+n.height};break;case L:t={x:n.x+n.width,y:f};break;case P:t={x:n.x-r.width,y:f};break;default:t={x:n.x,y:n.y}}var c=i?z(i):null;if(null!=c){var p="y"===c?"height":"width";switch(a){case W:t[c]=t[c]-(n[p]/2-r[p]/2);break;case B:t[c]=t[c]+(n[p]/2-r[p]/2)}}return t}function Y(e){return Object.assign({},{top:0,right:0,bottom:0,left:0},e)}function G(e,t){return t.reduce((function(t,n){return t[n]=e,t}),{})}function J(e,t){void 0===t&&(t={});var r=t,o=r.placement,i=void 0===o?e.placement:o,a=r.strategy,s=void 0===a?e.strategy:a,f=r.boundary,c=void 0===f?"clippingParents":f,u=r.rootBoundary,l=void 0===u?H:u,h=r.elementContext,m=void 0===h?T:h,v=r.altBoundary,y=void 0!==v&&v,g=r.padding,b=void 0===g?0:g,x=Y("number"!=typeof b?b:G(b,k)),w=m===T?"reference":T,O=e.rects.popper,j=e.elements[y?w:m],E=F(n(j)?j:j.contextElement||d(e.elements.popper),c,l,s),P=p(e.elements.reference),M=X({reference:P,element:O,strategy:"absolute",placement:i}),W=I(Object.assign({},O,M)),B=m===T?W:P,R={top:E.top-B.top+x.top,bottom:B.bottom-E.bottom+x.bottom,left:E.left-B.left+x.left,right:B.right-E.right+x.right},S=e.modifiersData.offset;if(m===T&&S){var V=S[i];Object.keys(R).forEach((function(e){var t=[L,A].indexOf(e)>=0?1:-1,n=[D,A].indexOf(e)>=0?"y":"x";R[e]+=V[n]*t}))}return R}var K={placement:"bottom",modifiers:[],strategy:"absolute"};function Q(){for(var e=arguments.length,t=new Array(e),n=0;n=0?-1:1,i="function"==typeof n?n(Object.assign({},t,{placement:e})):n,a=i[0],s=i[1];return a=a||0,s=(s||0)*o,[P,L].indexOf(r)>=0?{x:s,y:a}:{x:a,y:s}}(n,t.rects,i),e}),{}),s=a[t.placement],f=s.x,c=s.y;null!=t.modifiersData.popperOffsets&&(t.modifiersData.popperOffsets.x+=f,t.modifiersData.popperOffsets.y+=c),t.modifiersData[r]=a}},se={left:"right",right:"left",bottom:"top",top:"bottom"};function fe(e){return e.replace(/left|right|bottom|top/g,(function(e){return se[e]}))}var ce={start:"end",end:"start"};function pe(e){return e.replace(/start|end/g,(function(e){return ce[e]}))}function ue(e,t){void 0===t&&(t={});var n=t,r=n.placement,o=n.boundary,i=n.rootBoundary,a=n.padding,s=n.flipVariations,f=n.allowedAutoPlacements,c=void 0===f?S:f,p=U(r),u=p?s?R:R.filter((function(e){return U(e)===p})):k,l=u.filter((function(e){return c.indexOf(e)>=0}));0===l.length&&(l=u);var d=l.reduce((function(t,n){return t[n]=J(e,{placement:n,boundary:o,rootBoundary:i,padding:a})[C(n)],t}),{});return Object.keys(d).sort((function(e,t){return d[e]-d[t]}))}var le={name:"flip",enabled:!0,phase:"main",fn:function(e){var t=e.state,n=e.options,r=e.name;if(!t.modifiersData[r]._skip){for(var o=n.mainAxis,i=void 0===o||o,a=n.altAxis,s=void 0===a||a,f=n.fallbackPlacements,c=n.padding,p=n.boundary,u=n.rootBoundary,l=n.altBoundary,d=n.flipVariations,h=void 
0===d||d,m=n.allowedAutoPlacements,v=t.options.placement,y=C(v),g=f||(y===v||!h?[fe(v)]:function(e){if(C(e)===M)return[];var t=fe(e);return[pe(e),t,pe(t)]}(v)),b=[v].concat(g).reduce((function(e,n){return e.concat(C(n)===M?ue(t,{placement:n,boundary:p,rootBoundary:u,padding:c,flipVariations:h,allowedAutoPlacements:m}):n)}),[]),x=t.rects.reference,w=t.rects.popper,O=new Map,j=!0,E=b[0],k=0;k=0,S=R?"width":"height",V=J(t,{placement:B,boundary:p,rootBoundary:u,altBoundary:l,padding:c}),q=R?T?L:P:T?A:D;x[S]>w[S]&&(q=fe(q));var N=fe(q),I=[];if(i&&I.push(V[H]<=0),s&&I.push(V[q]<=0,V[N]<=0),I.every((function(e){return e}))){E=B,j=!1;break}O.set(B,I)}if(j)for(var _=function(e){var t=b.find((function(t){var n=O.get(t);if(n)return n.slice(0,e).every((function(e){return e}))}));if(t)return E=t,"break"},F=h?3:1;F>0;F--){if("break"===_(F))break}t.placement!==E&&(t.modifiersData[r]._skip=!0,t.placement=E,t.reset=!0)}},requiresIfExists:["offset"],data:{_skip:!1}};function de(e,t,n){return i(e,a(t,n))}var he={name:"preventOverflow",enabled:!0,phase:"main",fn:function(e){var t=e.state,n=e.options,r=e.name,o=n.mainAxis,s=void 0===o||o,f=n.altAxis,c=void 0!==f&&f,p=n.boundary,u=n.rootBoundary,l=n.altBoundary,d=n.padding,h=n.tether,m=void 0===h||h,v=n.tetherOffset,y=void 0===v?0:v,b=J(t,{boundary:p,rootBoundary:u,padding:d,altBoundary:l}),x=C(t.placement),w=U(t.placement),O=!w,j=z(x),M="x"===j?"y":"x",k=t.modifiersData.popperOffsets,B=t.rects.reference,H=t.rects.popper,T="function"==typeof y?y(Object.assign({},t.rects,{placement:t.placement})):y,R="number"==typeof T?{mainAxis:T,altAxis:T}:Object.assign({mainAxis:0,altAxis:0},T),S=t.modifiersData.offset?t.modifiersData.offset[t.placement]:null,V={x:0,y:0};if(k){if(s){var q,N="y"===j?D:P,I="y"===j?A:L,_="y"===j?"height":"width",F=k[j],X=F+b[N],Y=F-b[I],G=m?-H[_]/2:0,K=w===W?B[_]:H[_],Q=w===W?-H[_]:-B[_],Z=t.elements.arrow,$=m&&Z?g(Z):{width:0,height:0},ee=t.modifiersData["arrow#persistent"]?t.modifiersData["arrow#persistent"].padding:{top:0,right:0,bottom:0,left:0},te=ee[N],ne=ee[I],re=de(0,B[_],$[_]),oe=O?B[_]/2-G-re-te-R.mainAxis:K-re-te-R.mainAxis,ie=O?-B[_]/2+G+re+ne+R.mainAxis:Q+re+ne+R.mainAxis,ae=t.elements.arrow&&E(t.elements.arrow),se=ae?"y"===j?ae.clientTop||0:ae.clientLeft||0:0,fe=null!=(q=null==S?void 0:S[j])?q:0,ce=F+ie-fe,pe=de(m?a(X,F+oe-fe-se):X,F,m?i(Y,ce):Y);k[j]=pe,V[j]=pe-F}if(c){var ue,le="x"===j?D:P,he="x"===j?A:L,me=k[M],ve="y"===M?"height":"width",ye=me+b[le],ge=me-b[he],be=-1!==[D,P].indexOf(x),xe=null!=(ue=null==S?void 0:S[M])?ue:0,we=be?ye:me-B[ve]-H[ve]-xe+R.altAxis,Oe=be?me+B[ve]+H[ve]-xe-R.altAxis:ge,je=m&&be?function(e,t,n){var r=de(e,t,n);return r>n?n:r}(we,me,Oe):de(m?we:ye,me,m?Oe:ge);k[M]=je,V[M]=je-me}t.modifiersData[r]=V}},requiresIfExists:["offset"]};var me={name:"arrow",enabled:!0,phase:"main",fn:function(e){var t,n=e.state,r=e.name,o=e.options,i=n.elements.arrow,a=n.modifiersData.popperOffsets,s=C(n.placement),f=z(s),c=[P,L].indexOf(s)>=0?"height":"width";if(i&&a){var p=function(e,t){return Y("number"!=typeof(e="function"==typeof e?e(Object.assign({},t.rects,{placement:t.placement})):e)?e:G(e,k))}(o.padding,n),u=g(i),l="y"===f?D:P,d="y"===f?A:L,h=n.rects.reference[c]+n.rects.reference[f]-a[f]-n.rects.popper[c],m=a[f]-n.rects.reference[f],v=E(i),y=v?"y"===f?v.clientHeight||0:v.clientWidth||0:0,b=h/2-m/2,x=p[l],w=y-u[c]-p[d],O=y/2-u[c]/2+b,j=de(x,O,w),M=f;n.modifiersData[r]=((t={})[M]=j,t.centerOffset=j-O,t)}},effect:function(e){var t=e.state,n=e.options.element,r=void 
0===n?"[data-popper-arrow]":n;null!=r&&("string"!=typeof r||(r=t.elements.popper.querySelector(r)))&&N(t.elements.popper,r)&&(t.elements.arrow=r)},requires:["popperOffsets"],requiresIfExists:["preventOverflow"]};function ve(e,t,n){return void 0===n&&(n={x:0,y:0}),{top:e.top-t.height-n.y,right:e.right-t.width+n.x,bottom:e.bottom-t.height+n.y,left:e.left-t.width-n.x}}function ye(e){return[D,L,A,P].some((function(t){return e[t]>=0}))}var ge={name:"hide",enabled:!0,phase:"main",requiresIfExists:["preventOverflow"],fn:function(e){var t=e.state,n=e.name,r=t.rects.reference,o=t.rects.popper,i=t.modifiersData.preventOverflow,a=J(t,{elementContext:"reference"}),s=J(t,{altBoundary:!0}),f=ve(a,r),c=ve(s,o,i),p=ye(f),u=ye(c);t.modifiersData[n]={referenceClippingOffsets:f,popperEscapeOffsets:c,isReferenceHidden:p,hasPopperEscaped:u},t.attributes.popper=Object.assign({},t.attributes.popper,{"data-popper-reference-hidden":p,"data-popper-escaped":u})}},be=Z({defaultModifiers:[ee,te,oe,ie]}),xe=[ee,te,oe,ie,ae,le,he,me,ge],we=Z({defaultModifiers:xe});e.applyStyles=ie,e.arrow=me,e.computeStyles=oe,e.createPopper=we,e.createPopperLite=be,e.defaultModifiers=xe,e.detectOverflow=J,e.eventListeners=ee,e.flip=le,e.hide=ge,e.offset=ae,e.popperGenerator=Z,e.popperOffsets=te,e.preventOverflow=he,Object.defineProperty(e,"__esModule",{value:!0})})); - """ - return code - - -def tippy_js_code(): - code = """ - !function(t,e){"object"==typeof exports&&"undefined"!=typeof module?module.exports=e(require("@popperjs/core")):"function"==typeof define&&define.amd?define(["@popperjs/core"],e):(t=t||self).tippy=e(t.Popper)}(this,(function(t){"use strict";var e="undefined"!=typeof window&&"undefined"!=typeof document,n=!!e&&!!window.msCrypto,r={passive:!0,capture:!0},o=function(){return document.body};function i(t,e,n){if(Array.isArray(t)){var r=t[e];return null==r?Array.isArray(n)?n[e]:n:r}return t}function a(t,e){var n={}.toString.call(t);return 0===n.indexOf("[object")&&n.indexOf(e+"]")>-1}function s(t,e){return"function"==typeof t?t.apply(void 0,e):t}function u(t,e){return 0===e?t:function(r){clearTimeout(n),n=setTimeout((function(){t(r)}),e)};var n}function p(t,e){var n=Object.assign({},t);return e.forEach((function(t){delete n[t]})),n}function c(t){return[].concat(t)}function f(t,e){-1===t.indexOf(e)&&t.push(e)}function l(t){return t.split("-")[0]}function d(t){return[].slice.call(t)}function v(t){return Object.keys(t).reduce((function(e,n){return void 0!==t[n]&&(e[n]=t[n]),e}),{})}function m(){return document.createElement("div")}function g(t){return["Element","Fragment"].some((function(e){return a(t,e)}))}function h(t){return a(t,"MouseEvent")}function b(t){return!(!t||!t._tippy||t._tippy.reference!==t)}function y(t){return g(t)?[t]:function(t){return a(t,"NodeList")}(t)?d(t):Array.isArray(t)?t:d(document.querySelectorAll(t))}function w(t,e){t.forEach((function(t){t&&(t.style.transitionDuration=e+"ms")}))}function x(t,e){t.forEach((function(t){t&&t.setAttribute("data-state",e)}))}function E(t){var e,n=c(t)[0];return null!=n&&null!=(e=n.ownerDocument)&&e.body?n.ownerDocument:document}function O(t,e,n){var r=e+"EventListener";["transitionend","webkitTransitionEnd"].forEach((function(e){t[r](e,n)}))}function C(t,e){for(var n=e;n;){var r;if(t.contains(n))return!0;n=null==n.getRootNode||null==(r=n.getRootNode())?void 0:r.host}return!1}var T={isTouch:!1},A=0;function L(){T.isTouch||(T.isTouch=!0,window.performance&&document.addEventListener("mousemove",D))}function D(){var 
t=performance.now();t-A<20&&(T.isTouch=!1,document.removeEventListener("mousemove",D)),A=t}function k(){var t=document.activeElement;if(b(t)){var e=t._tippy;t.blur&&!e.state.isVisible&&t.blur()}}var R=Object.assign({appendTo:o,aria:{content:"auto",expanded:"auto"},delay:0,duration:[300,250],getReferenceClientRect:null,hideOnClick:!0,ignoreAttributes:!1,interactive:!1,interactiveBorder:2,interactiveDebounce:0,moveTransition:"",offset:[0,10],onAfterUpdate:function(){},onBeforeUpdate:function(){},onCreate:function(){},onDestroy:function(){},onHidden:function(){},onHide:function(){},onMount:function(){},onShow:function(){},onShown:function(){},onTrigger:function(){},onUntrigger:function(){},onClickOutside:function(){},placement:"top",plugins:[],popperOptions:{},render:null,showOnCreate:!1,touch:!0,trigger:"mouseenter focus",triggerTarget:null},{animateFill:!1,followCursor:!1,inlinePositioning:!1,sticky:!1},{allowHTML:!1,animation:"fade",arrow:!0,content:"",inertia:!1,maxWidth:350,role:"tooltip",theme:"",zIndex:9999}),P=Object.keys(R);function j(t){var e=(t.plugins||[]).reduce((function(e,n){var r,o=n.name,i=n.defaultValue;o&&(e[o]=void 0!==t[o]?t[o]:null!=(r=R[o])?r:i);return e}),{});return Object.assign({},t,e)}function M(t,e){var n=Object.assign({},e,{content:s(e.content,[t])},e.ignoreAttributes?{}:function(t,e){return(e?Object.keys(j(Object.assign({},R,{plugins:e}))):P).reduce((function(e,n){var r=(t.getAttribute("data-tippy-"+n)||"").trim();if(!r)return e;if("content"===n)e[n]=r;else try{e[n]=JSON.parse(r)}catch(t){e[n]=r}return e}),{})}(t,e.plugins));return n.aria=Object.assign({},R.aria,n.aria),n.aria={expanded:"auto"===n.aria.expanded?e.interactive:n.aria.expanded,content:"auto"===n.aria.content?e.interactive?null:"describedby":n.aria.content},n}function V(t,e){t.innerHTML=e}function I(t){var e=m();return!0===t?e.className="tippy-arrow":(e.className="tippy-svg-arrow",g(t)?e.appendChild(t):V(e,t)),e}function S(t,e){g(e.content)?(V(t,""),t.appendChild(e.content)):"function"!=typeof e.content&&(e.allowHTML?V(t,e.content):t.textContent=e.content)}function B(t){var e=t.firstElementChild,n=d(e.children);return{box:e,content:n.find((function(t){return t.classList.contains("tippy-content")})),arrow:n.find((function(t){return t.classList.contains("tippy-arrow")||t.classList.contains("tippy-svg-arrow")})),backdrop:n.find((function(t){return t.classList.contains("tippy-backdrop")}))}}function N(t){var e=m(),n=m();n.className="tippy-box",n.setAttribute("data-state","hidden"),n.setAttribute("tabindex","-1");var r=m();function o(n,r){var o=B(e),i=o.box,a=o.content,s=o.arrow;r.theme?i.setAttribute("data-theme",r.theme):i.removeAttribute("data-theme"),"string"==typeof r.animation?i.setAttribute("data-animation",r.animation):i.removeAttribute("data-animation"),r.inertia?i.setAttribute("data-inertia",""):i.removeAttribute("data-inertia"),i.style.maxWidth="number"==typeof r.maxWidth?r.maxWidth+"px":r.maxWidth,r.role?i.setAttribute("role",r.role):i.removeAttribute("role"),n.content===r.content&&n.allowHTML===r.allowHTML||S(a,t.props),r.arrow?s?n.arrow!==r.arrow&&(i.removeChild(s),i.appendChild(I(r.arrow))):i.appendChild(I(r.arrow)):s&&i.removeChild(s)}return r.className="tippy-content",r.setAttribute("data-state","hidden"),S(r,t.props),e.appendChild(n),n.appendChild(r),o(t.props,t.props),{popper:e,onUpdate:o}}N.$$tippy=!0;var H=1,U=[],_=[];function z(e,a){var 
p,g,b,y,A,L,D,k,P=M(e,Object.assign({},R,j(v(a)))),V=!1,I=!1,S=!1,N=!1,z=[],F=u(wt,P.interactiveDebounce),W=H++,X=(k=P.plugins).filter((function(t,e){return k.indexOf(t)===e})),Y={id:W,reference:e,popper:m(),popperInstance:null,props:P,state:{isEnabled:!0,isVisible:!1,isDestroyed:!1,isMounted:!1,isShown:!1},plugins:X,clearDelayTimeouts:function(){clearTimeout(p),clearTimeout(g),cancelAnimationFrame(b)},setProps:function(t){if(Y.state.isDestroyed)return;at("onBeforeUpdate",[Y,t]),bt();var n=Y.props,r=M(e,Object.assign({},n,v(t),{ignoreAttributes:!0}));Y.props=r,ht(),n.interactiveDebounce!==r.interactiveDebounce&&(pt(),F=u(wt,r.interactiveDebounce));n.triggerTarget&&!r.triggerTarget?c(n.triggerTarget).forEach((function(t){t.removeAttribute("aria-expanded")})):r.triggerTarget&&e.removeAttribute("aria-expanded");ut(),it(),J&&J(n,r);Y.popperInstance&&(Ct(),At().forEach((function(t){requestAnimationFrame(t._tippy.popperInstance.forceUpdate)})));at("onAfterUpdate",[Y,t])},setContent:function(t){Y.setProps({content:t})},show:function(){var t=Y.state.isVisible,e=Y.state.isDestroyed,n=!Y.state.isEnabled,r=T.isTouch&&!Y.props.touch,a=i(Y.props.duration,0,R.duration);if(t||e||n||r)return;if(et().hasAttribute("disabled"))return;if(at("onShow",[Y],!1),!1===Y.props.onShow(Y))return;Y.state.isVisible=!0,tt()&&($.style.visibility="visible");it(),dt(),Y.state.isMounted||($.style.transition="none");if(tt()){var u=rt(),p=u.box,c=u.content;w([p,c],0)}L=function(){var t;if(Y.state.isVisible&&!N){if(N=!0,$.offsetHeight,$.style.transition=Y.props.moveTransition,tt()&&Y.props.animation){var e=rt(),n=e.box,r=e.content;w([n,r],a),x([n,r],"visible")}st(),ut(),f(_,Y),null==(t=Y.popperInstance)||t.forceUpdate(),at("onMount",[Y]),Y.props.animation&&tt()&&function(t,e){mt(t,e)}(a,(function(){Y.state.isShown=!0,at("onShown",[Y])}))}},function(){var t,e=Y.props.appendTo,n=et();t=Y.props.interactive&&e===o||"parent"===e?n.parentNode:s(e,[n]);t.contains($)||t.appendChild($);Y.state.isMounted=!0,Ct()}()},hide:function(){var t=!Y.state.isVisible,e=Y.state.isDestroyed,n=!Y.state.isEnabled,r=i(Y.props.duration,1,R.duration);if(t||e||n)return;if(at("onHide",[Y],!1),!1===Y.props.onHide(Y))return;Y.state.isVisible=!1,Y.state.isShown=!1,N=!1,V=!1,tt()&&($.style.visibility="hidden");if(pt(),vt(),it(!0),tt()){var o=rt(),a=o.box,s=o.content;Y.props.animation&&(w([a,s],r),x([a,s],"hidden"))}st(),ut(),Y.props.animation?tt()&&function(t,e){mt(t,(function(){!Y.state.isVisible&&$.parentNode&&$.parentNode.contains($)&&e()}))}(r,Y.unmount):Y.unmount()},hideWithInteractivity:function(t){nt().addEventListener("mousemove",F),f(U,F),F(t)},enable:function(){Y.state.isEnabled=!0},disable:function(){Y.hide(),Y.state.isEnabled=!1},unmount:function(){Y.state.isVisible&&Y.hide();if(!Y.state.isMounted)return;Tt(),At().forEach((function(t){t._tippy.unmount()})),$.parentNode&&$.parentNode.removeChild($);_=_.filter((function(t){return t!==Y})),Y.state.isMounted=!1,at("onHidden",[Y])},destroy:function(){if(Y.state.isDestroyed)return;Y.clearDelayTimeouts(),Y.unmount(),bt(),delete e._tippy,Y.state.isDestroyed=!0,at("onDestroy",[Y])}};if(!P.render)return Y;var q=P.render(Y),$=q.popper,J=q.onUpdate;$.setAttribute("data-tippy-root",""),$.id="tippy-"+Y.id,Y.popper=$,e._tippy=Y,$._tippy=Y;var G=X.map((function(t){return t.fn(Y)})),K=e.hasAttribute("aria-expanded");return 
ht(),ut(),it(),at("onCreate",[Y]),P.showOnCreate&&Lt(),$.addEventListener("mouseenter",(function(){Y.props.interactive&&Y.state.isVisible&&Y.clearDelayTimeouts()})),$.addEventListener("mouseleave",(function(){Y.props.interactive&&Y.props.trigger.indexOf("mouseenter")>=0&&nt().addEventListener("mousemove",F)})),Y;function Q(){var t=Y.props.touch;return Array.isArray(t)?t:[t,0]}function Z(){return"hold"===Q()[0]}function tt(){var t;return!(null==(t=Y.props.render)||!t.$$tippy)}function et(){return D||e}function nt(){var t=et().parentNode;return t?E(t):document}function rt(){return B($)}function ot(t){return Y.state.isMounted&&!Y.state.isVisible||T.isTouch||y&&"focus"===y.type?0:i(Y.props.delay,t?0:1,R.delay)}function it(t){void 0===t&&(t=!1),$.style.pointerEvents=Y.props.interactive&&!t?"":"none",$.style.zIndex=""+Y.props.zIndex}function at(t,e,n){var r;(void 0===n&&(n=!0),G.forEach((function(n){n[t]&&n[t].apply(n,e)})),n)&&(r=Y.props)[t].apply(r,e)}function st(){var t=Y.props.aria;if(t.content){var n="aria-"+t.content,r=$.id;c(Y.props.triggerTarget||e).forEach((function(t){var e=t.getAttribute(n);if(Y.state.isVisible)t.setAttribute(n,e?e+" "+r:r);else{var o=e&&e.replace(r,"").trim();o?t.setAttribute(n,o):t.removeAttribute(n)}}))}}function ut(){!K&&Y.props.aria.expanded&&c(Y.props.triggerTarget||e).forEach((function(t){Y.props.interactive?t.setAttribute("aria-expanded",Y.state.isVisible&&t===et()?"true":"false"):t.removeAttribute("aria-expanded")}))}function pt(){nt().removeEventListener("mousemove",F),U=U.filter((function(t){return t!==F}))}function ct(t){if(!T.isTouch||!S&&"mousedown"!==t.type){var n=t.composedPath&&t.composedPath()[0]||t.target;if(!Y.props.interactive||!C($,n)){if(c(Y.props.triggerTarget||e).some((function(t){return C(t,n)}))){if(T.isTouch)return;if(Y.state.isVisible&&Y.props.trigger.indexOf("click")>=0)return}else at("onClickOutside",[Y,t]);!0===Y.props.hideOnClick&&(Y.clearDelayTimeouts(),Y.hide(),I=!0,setTimeout((function(){I=!1})),Y.state.isMounted||vt())}}}function ft(){S=!0}function lt(){S=!1}function dt(){var t=nt();t.addEventListener("mousedown",ct,!0),t.addEventListener("touchend",ct,r),t.addEventListener("touchstart",lt,r),t.addEventListener("touchmove",ft,r)}function vt(){var t=nt();t.removeEventListener("mousedown",ct,!0),t.removeEventListener("touchend",ct,r),t.removeEventListener("touchstart",lt,r),t.removeEventListener("touchmove",ft,r)}function mt(t,e){var n=rt().box;function r(t){t.target===n&&(O(n,"remove",r),e())}if(0===t)return e();O(n,"remove",A),O(n,"add",r),A=r}function gt(t,n,r){void 0===r&&(r=!1),c(Y.props.triggerTarget||e).forEach((function(e){e.addEventListener(t,n,r),z.push({node:e,eventType:t,handler:n,options:r})}))}function ht(){var t;Z()&&(gt("touchstart",yt,{passive:!0}),gt("touchend",xt,{passive:!0})),(t=Y.props.trigger,t.split(/\s+/).filter(Boolean)).forEach((function(t){if("manual"!==t)switch(gt(t,yt),t){case"mouseenter":gt("mouseleave",xt);break;case"focus":gt(n?"focusout":"blur",Et);break;case"focusin":gt("focusout",Et)}}))}function bt(){z.forEach((function(t){var e=t.node,n=t.eventType,r=t.handler,o=t.options;e.removeEventListener(n,r,o)})),z=[]}function yt(t){var e,n=!1;if(Y.state.isEnabled&&!Ot(t)&&!I){var r="focus"===(null==(e=y)?void 0:e.type);y=t,D=t.currentTarget,ut(),!Y.state.isVisible&&h(t)&&U.forEach((function(e){return e(t)})),"click"===t.type&&(Y.props.trigger.indexOf("mouseenter")<0||V)&&!1!==Y.props.hideOnClick&&Y.state.isVisible?n=!0:Lt(t),"click"===t.type&&(V=!n),n&&!r&&Dt(t)}}function wt(t){var 
e=t.target,n=et().contains(e)||$.contains(e);"mousemove"===t.type&&n||function(t,e){var n=e.clientX,r=e.clientY;return t.every((function(t){var e=t.popperRect,o=t.popperState,i=t.props.interactiveBorder,a=l(o.placement),s=o.modifiersData.offset;if(!s)return!0;var u="bottom"===a?s.top.y:0,p="top"===a?s.bottom.y:0,c="right"===a?s.left.x:0,f="left"===a?s.right.x:0,d=e.top-r+u>i,v=r-e.bottom-p>i,m=e.left-n+c>i,g=n-e.right-f>i;return d||v||m||g}))}(At().concat($).map((function(t){var e,n=null==(e=t._tippy.popperInstance)?void 0:e.state;return n?{popperRect:t.getBoundingClientRect(),popperState:n,props:P}:null})).filter(Boolean),t)&&(pt(),Dt(t))}function xt(t){Ot(t)||Y.props.trigger.indexOf("click")>=0&&V||(Y.props.interactive?Y.hideWithInteractivity(t):Dt(t))}function Et(t){Y.props.trigger.indexOf("focusin")<0&&t.target!==et()||Y.props.interactive&&t.relatedTarget&&$.contains(t.relatedTarget)||Dt(t)}function Ot(t){return!!T.isTouch&&Z()!==t.type.indexOf("touch")>=0}function Ct(){Tt();var n=Y.props,r=n.popperOptions,o=n.placement,i=n.offset,a=n.getReferenceClientRect,s=n.moveTransition,u=tt()?B($).arrow:null,p=a?{getBoundingClientRect:a,contextElement:a.contextElement||et()}:e,c=[{name:"offset",options:{offset:i}},{name:"preventOverflow",options:{padding:{top:2,bottom:2,left:5,right:5}}},{name:"flip",options:{padding:5}},{name:"computeStyles",options:{adaptive:!s}},{name:"$$tippy",enabled:!0,phase:"beforeWrite",requires:["computeStyles"],fn:function(t){var e=t.state;if(tt()){var n=rt().box;["placement","reference-hidden","escaped"].forEach((function(t){"placement"===t?n.setAttribute("data-placement",e.placement):e.attributes.popper["data-popper-"+t]?n.setAttribute("data-"+t,""):n.removeAttribute("data-"+t)})),e.attributes.popper={}}}}];tt()&&u&&c.push({name:"arrow",options:{element:u,padding:3}}),c.push.apply(c,(null==r?void 0:r.modifiers)||[]),Y.popperInstance=t.createPopper(p,$,Object.assign({},r,{placement:o,onFirstUpdate:L,modifiers:c}))}function Tt(){Y.popperInstance&&(Y.popperInstance.destroy(),Y.popperInstance=null)}function At(){return d($.querySelectorAll("[data-tippy-root]"))}function Lt(t){Y.clearDelayTimeouts(),t&&at("onTrigger",[Y,t]),dt();var e=ot(!0),n=Q(),r=n[0],o=n[1];T.isTouch&&"hold"===r&&o&&(e=o),e?p=setTimeout((function(){Y.show()}),e):Y.show()}function Dt(t){if(Y.clearDelayTimeouts(),at("onUntrigger",[Y,t]),Y.state.isVisible){if(!(Y.props.trigger.indexOf("mouseenter")>=0&&Y.props.trigger.indexOf("click")>=0&&["mouseleave","mousemove"].indexOf(t.type)>=0&&V)){var e=ot(!1);e?g=setTimeout((function(){Y.state.isVisible&&Y.hide()}),e):b=requestAnimationFrame((function(){Y.hide()}))}}else vt()}}function F(t,e){void 0===e&&(e={});var n=R.plugins.concat(e.plugins||[]);document.addEventListener("touchstart",L,r),window.addEventListener("blur",k);var o=Object.assign({},e,{plugins:n}),i=y(t).reduce((function(t,e){var n=e&&z(e,o);return n&&t.push(n),t}),[]);return g(t)?i[0]:i}F.defaultProps=R,F.setDefaultProps=function(t){Object.keys(t).forEach((function(e){R[e]=t[e]}))},F.currentInput=T;var W=Object.assign({},t.applyStyles,{effect:function(t){var e=t.state,n={popper:{position:e.options.strategy,left:"0",top:"0",margin:"0"},arrow:{position:"absolute"},reference:{}};Object.assign(e.elements.popper.style,n.popper),e.styles=n,e.elements.arrow&&Object.assign(e.elements.arrow.style,n.arrow)}}),X={mouseover:"mouseenter",focusin:"focus",click:"click"};var Y={name:"animateFill",defaultValue:!1,fn:function(t){var e;if(null==(e=t.props.render)||!e.$$tippy)return{};var 
n=B(t.popper),r=n.box,o=n.content,i=t.props.animateFill?function(){var t=m();return t.className="tippy-backdrop",x([t],"hidden"),t}():null;return{onCreate:function(){i&&(r.insertBefore(i,r.firstElementChild),r.setAttribute("data-animatefill",""),r.style.overflow="hidden",t.setProps({arrow:!1,animation:"shift-away"}))},onMount:function(){if(i){var t=r.style.transitionDuration,e=Number(t.replace("ms",""));o.style.transitionDelay=Math.round(e/10)+"ms",i.style.transitionDuration=t,x([i],"visible")}},onShow:function(){i&&(i.style.transitionDuration="0ms")},onHide:function(){i&&x([i],"hidden")}}}};var q={clientX:0,clientY:0},$=[];function J(t){var e=t.clientX,n=t.clientY;q={clientX:e,clientY:n}}var G={name:"followCursor",defaultValue:!1,fn:function(t){var e=t.reference,n=E(t.props.triggerTarget||e),r=!1,o=!1,i=!0,a=t.props;function s(){return"initial"===t.props.followCursor&&t.state.isVisible}function u(){n.addEventListener("mousemove",f)}function p(){n.removeEventListener("mousemove",f)}function c(){r=!0,t.setProps({getReferenceClientRect:null}),r=!1}function f(n){var r=!n.target||e.contains(n.target),o=t.props.followCursor,i=n.clientX,a=n.clientY,s=e.getBoundingClientRect(),u=i-s.left,p=a-s.top;!r&&t.props.interactive||t.setProps({getReferenceClientRect:function(){var t=e.getBoundingClientRect(),n=i,r=a;"initial"===o&&(n=t.left+u,r=t.top+p);var s="horizontal"===o?t.top:r,c="vertical"===o?t.right:n,f="horizontal"===o?t.bottom:r,l="vertical"===o?t.left:n;return{width:c-l,height:f-s,top:s,right:c,bottom:f,left:l}}})}function l(){t.props.followCursor&&($.push({instance:t,doc:n}),function(t){t.addEventListener("mousemove",J)}(n))}function d(){0===($=$.filter((function(e){return e.instance!==t}))).filter((function(t){return t.doc===n})).length&&function(t){t.removeEventListener("mousemove",J)}(n)}return{onCreate:l,onDestroy:d,onBeforeUpdate:function(){a=t.props},onAfterUpdate:function(e,n){var i=n.followCursor;r||void 0!==i&&a.followCursor!==i&&(d(),i?(l(),!t.state.isMounted||o||s()||u()):(p(),c()))},onMount:function(){t.props.followCursor&&!o&&(i&&(f(q),i=!1),s()||u())},onTrigger:function(t,e){h(e)&&(q={clientX:e.clientX,clientY:e.clientY}),o="focus"===e.type},onHidden:function(){t.props.followCursor&&(c(),p(),i=!0)}}}};var K={name:"inlinePositioning",defaultValue:!1,fn:function(t){var e,n=t.reference;var r=-1,o=!1,i=[],a={name:"tippyInlinePositioning",enabled:!0,phase:"afterWrite",fn:function(o){var a=o.state;t.props.inlinePositioning&&(-1!==i.indexOf(a.placement)&&(i=[]),e!==a.placement&&-1===i.indexOf(a.placement)&&(i.push(a.placement),t.setProps({getReferenceClientRect:function(){return function(t){return function(t,e,n,r){if(n.length<2||null===t)return e;if(2===n.length&&r>=0&&n[0].left>n[1].right)return n[r]||e;switch(t){case"top":case"bottom":var o=n[0],i=n[n.length-1],a="top"===t,s=o.top,u=i.bottom,p=a?o.left:i.left,c=a?o.right:i.right;return{top:s,bottom:u,left:p,right:c,width:c-p,height:u-s};case"left":case"right":var f=Math.min.apply(Math,n.map((function(t){return t.left}))),l=Math.max.apply(Math,n.map((function(t){return t.right}))),d=n.filter((function(e){return"left"===t?e.left===f:e.right===l})),v=d[0].top,m=d[d.length-1].bottom;return{top:v,bottom:m,left:f,right:l,width:l-f,height:m-v};default:return e}}(l(t),n.getBoundingClientRect(),d(n.getClientRects()),r)}(a.placement)}})),e=a.placement)}};function s(){var e;o||(e=function(t,e){var n;return{popperOptions:Object.assign({},t.popperOptions,{modifiers:[].concat(((null==(n=t.popperOptions)?void 
0:n.modifiers)||[]).filter((function(t){return t.name!==e.name})),[e])})}}(t.props,a),o=!0,t.setProps(e),o=!1)}return{onCreate:s,onAfterUpdate:s,onTrigger:function(e,n){if(h(n)){var o=d(t.reference.getClientRects()),i=o.find((function(t){return t.left-2<=n.clientX&&t.right+2>=n.clientX&&t.top-2<=n.clientY&&t.bottom+2>=n.clientY})),a=o.indexOf(i);r=a>-1?a:r}},onHidden:function(){r=-1}}}};var Q={name:"sticky",defaultValue:!1,fn:function(t){var e=t.reference,n=t.popper;function r(e){return!0===t.props.sticky||t.props.sticky===e}var o=null,i=null;function a(){var s=r("reference")?(t.popperInstance?t.popperInstance.state.elements.reference:e).getBoundingClientRect():null,u=r("popper")?n.getBoundingClientRect():null;(s&&Z(o,s)||u&&Z(i,u))&&t.popperInstance&&t.popperInstance.update(),o=s,i=u,t.state.isMounted&&requestAnimationFrame(a)}return{onMount:function(){t.props.sticky&&a()}}}};function Z(t,e){return!t||!e||(t.top!==e.top||t.right!==e.right||t.bottom!==e.bottom||t.left!==e.left)}return e&&function(t){var e=document.createElement("style");e.textContent=t,e.setAttribute("data-tippy-stylesheet","");var n=document.head,r=document.querySelector("head>style,head>link");r?n.insertBefore(e,r):n.appendChild(e)}('.tippy-box[data-animation=fade][data-state=hidden]{opacity:0}[data-tippy-root]{max-width:calc(100vw - 10px)}.tippy-box{position:relative;background-color:#333;color:#fff;border-radius:4px;font-size:14px;line-height:1.4;white-space:normal;outline:0;transition-property:transform,visibility,opacity}.tippy-box[data-placement^=top]>.tippy-arrow{bottom:0}.tippy-box[data-placement^=top]>.tippy-arrow:before{bottom:-7px;left:0;border-width:8px 8px 0;border-top-color:initial;transform-origin:center top}.tippy-box[data-placement^=bottom]>.tippy-arrow{top:0}.tippy-box[data-placement^=bottom]>.tippy-arrow:before{top:-7px;left:0;border-width:0 8px 8px;border-bottom-color:initial;transform-origin:center bottom}.tippy-box[data-placement^=left]>.tippy-arrow{right:0}.tippy-box[data-placement^=left]>.tippy-arrow:before{border-width:8px 0 8px 8px;border-left-color:initial;right:-7px;transform-origin:center left}.tippy-box[data-placement^=right]>.tippy-arrow{left:0}.tippy-box[data-placement^=right]>.tippy-arrow:before{left:-7px;border-width:8px 8px 8px 0;border-right-color:initial;transform-origin:center right}.tippy-box[data-inertia][data-state=visible]{transition-timing-function:cubic-bezier(.54,1.5,.38,1.11)}.tippy-arrow{width:16px;height:16px;color:#333}.tippy-arrow:before{content:"";position:absolute;border-color:transparent;border-style:solid}.tippy-content{position:relative;padding:5px 9px;z-index:1}'),F.setDefaultProps({plugins:[Y,G,K,Q],render:N}),F.createSingleton=function(t,e){var n;void 0===e&&(e={});var r,o=t,i=[],a=[],s=e.overrides,u=[],f=!1;function l(){a=o.map((function(t){return c(t.props.triggerTarget||t.reference)})).reduce((function(t,e){return t.concat(e)}),[])}function d(){i=o.map((function(t){return t.reference}))}function v(t){o.forEach((function(e){t?e.enable():e.disable()}))}function g(t){return o.map((function(e){var n=e.setProps;return e.setProps=function(o){n(o),e.reference===r&&t.setProps(o)},function(){e.setProps=n}}))}function h(t,e){var n=a.indexOf(e);if(e!==r){r=e;var u=(s||[]).concat("content").reduce((function(t,e){return t[e]=o[n].props[e],t}),{});t.setProps(Object.assign({},u,{getReferenceClientRect:"function"==typeof u.getReferenceClientRect?u.getReferenceClientRect:function(){var t;return null==(t=i[n])?void 0:t.getBoundingClientRect()}}))}}v(!1),d(),l();var 
b={fn:function(){return{onDestroy:function(){v(!0)},onHidden:function(){r=null},onClickOutside:function(t){t.props.showOnCreate&&!f&&(f=!0,r=null)},onShow:function(t){t.props.showOnCreate&&!f&&(f=!0,h(t,i[0]))},onTrigger:function(t,e){h(t,e.currentTarget)}}}},y=F(m(),Object.assign({},p(e,["overrides"]),{plugins:[b].concat(e.plugins||[]),triggerTarget:a,popperOptions:Object.assign({},e.popperOptions,{modifiers:[].concat((null==(n=e.popperOptions)?void 0:n.modifiers)||[],[W])})})),w=y.show;y.show=function(t){if(w(),!r&&null==t)return h(y,i[0]);if(!r||null!=t){if("number"==typeof t)return i[t]&&h(y,i[t]);if(o.indexOf(t)>=0){var e=t.reference;return h(y,e)}return i.indexOf(t)>=0?h(y,t):void 0}},y.showNext=function(){var t=i[0];if(!r)return y.show(0);var e=i.indexOf(r);y.show(i[e+1]||t)},y.showPrevious=function(){var t=i[i.length-1];if(!r)return y.show(t);var e=i.indexOf(r),n=i[e-1]||t;y.show(n)};var x=y.setProps;return y.setProps=function(t){s=t.overrides||s,x(t)},y.setInstances=function(t){v(!0),u.forEach((function(t){return t()})),o=t,v(!1),d(),l(),u=g(y),y.setProps({triggerTarget:a})},u=g(y),y},F.delegate=function(t,e){var n=[],o=[],i=!1,a=e.target,s=p(e,["target"]),u=Object.assign({},s,{trigger:"manual",touch:!1}),f=Object.assign({touch:R.touch},s,{showOnCreate:!0}),l=F(t,u);function d(t){if(t.target&&!i){var n=t.target.closest(a);if(n){var r=n.getAttribute("data-tippy-trigger")||e.trigger||R.trigger;if(!n._tippy&&!("touchstart"===t.type&&"boolean"==typeof f.touch||"touchstart"!==t.type&&r.indexOf(X[t.type])<0)){var s=F(n,f);s&&(o=o.concat(s))}}}}function v(t,e,r,o){void 0===o&&(o=!1),t.addEventListener(e,r,o),n.push({node:t,eventType:e,handler:r,options:o})}return c(l).forEach((function(t){var e=t.destroy,a=t.enable,s=t.disable;t.destroy=function(t){void 0===t&&(t=!0),t&&o.forEach((function(t){t.destroy()})),o=[],n.forEach((function(t){var e=t.node,n=t.eventType,r=t.handler,o=t.options;e.removeEventListener(n,r,o)})),n=[],e()},t.enable=function(){a(),o.forEach((function(t){return t.enable()})),i=!1},t.disable=function(){s(),o.forEach((function(t){return t.disable()})),i=!0},function(t){var e=t.reference;v(e,"touchstart",d,r),v(e,"mouseover",d),v(e,"focusin",d),v(e,"click",d)}(t)})),l},F.hideAll=function(t){var e=void 0===t?{}:t,n=e.exclude,r=e.duration;_.forEach((function(t){var e=!1;if(n&&(e=b(n)?t.reference===n:t.popper===n.popper),!e){var o=t.props.duration;t.setProps({duration:r}),t.hide(),t.state.isDestroyed||t.setProps({duration:o})}}))},F.roundArrow='',F})); - """ - return code diff --git a/spaces/zhang-wei-jian/docker/node_modules/pstree.remy/lib/index.js b/spaces/zhang-wei-jian/docker/node_modules/pstree.remy/lib/index.js deleted file mode 100644 index 743e9979504edb1ef8e78a38a6d56341c0721657..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/pstree.remy/lib/index.js +++ /dev/null @@ -1,37 +0,0 @@ -const exec = require('child_process').exec; -const tree = require('./tree'); -const utils = require('./utils'); -var hasPS = true; - -// discover if the OS has `ps`, and therefore can use psTree -exec('ps', (error) => { - module.exports.hasPS = hasPS = !error; -}); - -module.exports = function main(pid, callback) { - if (typeof pid === 'number') { - pid = pid.toString(); - } - - if (hasPS && !process.env.NO_PS) { - return tree(pid, callback); - } - - utils - .getStat() - .then(utils.tree) - .then((tree) => utils.pidsForTree(tree, pid)) - .then((res) => - callback( - null, - res.map((p) => p.PID) - ) - ) - .catch((error) => callback(error)); -}; - 
-if (!module.parent) { - module.exports(process.argv[2], (e, pids) => console.log(pids)); -} - -module.exports.hasPS = hasPS; diff --git a/spaces/zjunlp/KGEditor/app.py b/spaces/zjunlp/KGEditor/app.py deleted file mode 100644 index dd1fa0abfea05fbf14334c67f0edd81503f0795f..0000000000000000000000000000000000000000 --- a/spaces/zjunlp/KGEditor/app.py +++ /dev/null @@ -1,311 +0,0 @@ -import gradio as gr -from collections import defaultdict -from transformers import BertTokenizer, BertForMaskedLM -import jsonlines -import torch -from src.modeling_bert import EXBertForMaskedLM -from higher.patch import monkeypatch as make_functional - -### load KGE model -edit_origin_model = BertForMaskedLM.from_pretrained(pretrained_model_name_or_path="ChancesYuan/KGEditor_Edit_Test") -edit_ex_model = EXBertForMaskedLM.from_pretrained(pretrained_model_name_or_path="ChancesYuan/KGEditor_Edit_Test") - -edit_learner = torch.load("./learner_checkpoint/edit/learner_params.pt", map_location=torch.device('cpu')) -add_learner = torch.load("./learner_checkpoint/add/learner_params.pt", map_location=torch.device('cpu')) - -add_origin_model = BertForMaskedLM.from_pretrained(pretrained_model_name_or_path="ChancesYuan/KGEditor_Add_Test") -add_ex_model = EXBertForMaskedLM.from_pretrained(pretrained_model_name_or_path="ChancesYuan/KGEditor_Add_Test") - -### init inputs -ent_name2id = defaultdict(str) -id2ent_name = defaultdict(str) -rel_name2id = defaultdict(str) -id2ent_text = defaultdict(str) -id2rel_text = defaultdict(str) - -### init tokenizer -tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') -add_tokenizer = BertTokenizer.from_pretrained(pretrained_model_name_or_path='zjunlp/KGEditor', subfolder="E-FB15k237") - -def init_triple_input(): - global ent2token - global ent2id - global id2ent - global rel2token - global rel2id - - with open("./dataset/fb15k237/relations.txt", "r") as f: - lines = f.readlines() - relations = [] - for line in lines: - relations.append(line.strip().split('\t')[0]) - - rel2token = {ent: f"[RELATION_{i}]" for i, ent in enumerate(relations)} - - with open("./dataset/fb15k237/entity2text.txt", "r") as f: - for line in f.readlines(): - id, name = line.rstrip('\n').split('\t') - ent_name2id[name] = id - id2ent_name[id] = name - - with open("./dataset/fb15k237/relation2text.txt", "r") as f: - for line in f.readlines(): - id, name = line.rstrip('\n').split('\t') - rel_name2id[name] = id - id2rel_text[id] = name - - with open("./dataset/fb15k237/entity2textlong.txt", "r") as f: - for line in f.readlines(): - id, text = line.rstrip('\n').split('\t') - id2ent_text[id] = text.replace("\\n", " ").replace("\\", "") - - entities = list(id2ent_text.keys()) - ent2token = {ent: f"[ENTITY_{i}]" for i, ent in enumerate(entities)} - ent2id = {ent: i for i, ent in enumerate(entities)} - id2ent = {i: ent for i, ent in enumerate(entities)} - - rel2id = { - w: i + len(entities) - for i, w in enumerate(rel2token.keys()) - } - -def solve(triple, alter_label, edit_task): - print(triple, alter_label) - h, r, t = triple.split("|") - if h == "[MASK]": - text_a = "[MASK]" - text_b = id2rel_text[r] + " " + rel2token[r] - text_c = ent2token[ent_name2id[t]] + " " + id2ent_text[ent_name2id[t]] - replace_token = [rel2id[r], ent2id[ent_name2id[t]]] - else: - text_a = ent2token[ent_name2id[h]] - text_b = id2rel_text[r] + " " + rel2token[r] - text_c = "[MASK]" + " " + id2ent_text[ent_name2id[h]] - replace_token = [ent2id[ent_name2id[h]], rel2id[r]] - - if text_a == "[MASK]": - input_text_a = 
tokenizer.sep_token.join(["[MASK]", id2rel_text[r] + "[PAD]"]) - input_text_b = "[PAD]" + " " + id2ent_text[ent_name2id[t]] - else: - input_text_a = "[PAD] " - input_text_b = tokenizer.sep_token.join([id2rel_text[r] + "[PAD]", "[MASK]" + " " + id2ent_text[ent_name2id[h]]]) - - inputs = tokenizer( - f"{text_a} [SEP] {text_b} [SEP] {text_c}", - truncation="longest_first", - max_length=64, - padding="longest", - add_special_tokens=True, - ) - - edit_inputs = tokenizer( - input_text_a, - input_text_b, - truncation="longest_first", - max_length=64, - padding="longest", - add_special_tokens=True, - ) - - inputs = { - "input_ids": torch.tensor(inputs["input_ids"]).unsqueeze(dim=0), - "attention_mask": torch.tensor(inputs["attention_mask"]).unsqueeze(dim=0), - "token_type_ids": torch.tensor(inputs["token_type_ids"]).unsqueeze(dim=0) - } - - edit_inputs = { - "input_ids": torch.tensor(edit_inputs["input_ids"]).unsqueeze(dim=0), - "attention_mask": torch.tensor(edit_inputs["attention_mask"]).unsqueeze(dim=0), - "token_type_ids": torch.tensor(edit_inputs["token_type_ids"]).unsqueeze(dim=0) - } - - _, mask_idx = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True) - logits = edit_origin_model(**inputs).logits[:, :, 30522:45473].squeeze() if edit_task else add_origin_model(**inputs).logits[:, :, 30522:45473].squeeze() - logits = logits[mask_idx, :] - - ### origin output - _, origin_entity_order = torch.sort(logits, dim=1, descending=True) - origin_entity_order = origin_entity_order.squeeze(dim=0) - origin_top3 = [id2ent_name[id2ent[origin_entity_order[i].item()]] for i in range(3)] - - origin_label = origin_top3[0] if edit_task else alter_label - - cond_inputs_text = "{} >> {} || {}".format( - add_tokenizer.added_tokens_decoder[ent2id[ent_name2id[origin_label]] + len(tokenizer)], - add_tokenizer.added_tokens_decoder[ent2id[ent_name2id[alter_label]] + len(tokenizer)], - input_text_a + input_text_b - ) - - cond_inputs = tokenizer( - cond_inputs_text, - truncation=True, - max_length=64, - padding="max_length", - add_special_tokens=True, - ) - - cond_inputs = { - "input_ids": torch.tensor(cond_inputs["input_ids"]).unsqueeze(dim=0), - "attention_mask": torch.tensor(cond_inputs["attention_mask"]).unsqueeze(dim=0), - "token_type_ids": torch.tensor(cond_inputs["token_type_ids"]).unsqueeze(dim=0) - } - - flag = 0 - for idx, i in enumerate(edit_inputs["input_ids"][0, :].tolist()): - if i == tokenizer.pad_token_id and flag == 0: - edit_inputs["input_ids"][0, idx] = replace_token[0] + 30522 - flag = 1 - elif i == tokenizer.pad_token_id and flag != 0: - edit_inputs["input_ids"][0, idx] = replace_token[1] + 30522 - - return inputs, cond_inputs, edit_inputs, origin_top3 - -def get_logits_orig_params_dict(inputs, cond_inputs, alter_label, ex_model, learner): - with torch.enable_grad(): - logits = ex_model.eval()( - input_ids=inputs["input_ids"], - attention_mask=inputs["attention_mask"], - ).logits - - input_ids = inputs['input_ids'] - _, mask_idx = (input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True) - mask_logits = logits[:, mask_idx, 30522:45473].squeeze(dim=0) - - grads = torch.autograd.grad( - # cross_entropy - torch.nn.functional.cross_entropy( - mask_logits[-1:, :], - torch.tensor([alter_label]), - reduction="none", - ).mean(-1), - ex_model.parameters(), - ) - - grads = { - name: grad - for (name, _), grad in zip(ex_model.named_parameters(), grads) - } - - params_dict = learner( - cond_inputs["input_ids"][-1:], - cond_inputs["attention_mask"][-1:], - grads=grads, - ) - - return 
params_dict - -def edit_process(edit_input, alter_label): - try: - _, cond_inputs, edit_inputs, origin_top3 = solve(edit_input, alter_label, edit_task=True) - except KeyError: - return "The entity or relationship you entered is not in the vocabulary. Please check it carefully.", "" - - ### edit output - fmodel = make_functional(edit_ex_model).eval() - params_dict = get_logits_orig_params_dict(edit_inputs, cond_inputs, ent2id[ent_name2id[alter_label]], edit_ex_model, edit_learner) - edit_logits = fmodel( - input_ids=edit_inputs["input_ids"], - attention_mask=edit_inputs["attention_mask"], - # add delta theta - params=[ - params_dict.get(n, 0) + p - for n, p in edit_ex_model.named_parameters() - ], - ).logits[:, :, 30522:45473].squeeze() - - _, mask_idx = (edit_inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True) - edit_logits = edit_logits[mask_idx, :] - _, edit_entity_order = torch.sort(edit_logits, dim=1, descending=True) - edit_entity_order = edit_entity_order.squeeze(dim=0) - edit_top3 = [id2ent_name[id2ent[edit_entity_order[i].item()]] for i in range(3)] - - return "\n".join(origin_top3), "\n".join(edit_top3) - -def add_process(edit_input, alter_label): - try: - _, cond_inputs, add_inputs, origin_top3 = solve(edit_input, alter_label, edit_task=False) - except: - return "The entity or relationship you entered is not in the vocabulary. Please check it carefully.", "" - - ### add output - fmodel = make_functional(add_ex_model).eval() - params_dict = get_logits_orig_params_dict(add_inputs, cond_inputs, ent2id[ent_name2id[alter_label]], add_ex_model, add_learner) - add_logits = fmodel( - input_ids=add_inputs["input_ids"], - attention_mask=add_inputs["attention_mask"], - # add delta theta - params=[ - params_dict.get(n, 0) + p - for n, p in add_ex_model.named_parameters() - ], - ).logits[:, :, 30522:45473].squeeze() - - _, mask_idx = (add_inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True) - add_logits = add_logits[mask_idx, :] - _, add_entity_order = torch.sort(add_logits, dim=1, descending=True) - add_entity_order = add_entity_order.squeeze(dim=0) - add_top3 = [id2ent_name[id2ent[add_entity_order[i].item()]] for i in range(3)] - - return "\n".join(origin_top3), "\n".join(add_top3) - - -with gr.Blocks() as demo: - init_triple_input() - gr.Markdown("# KGE Editing") - - # 多个tab - with gr.Tabs(): - with gr.TabItem("E-FB15k237"): - with gr.Row(): - with gr.Column(): - edit_input = gr.Textbox(label="Input", lines=1, placeholder=" Please enter in the format of: [MASK]|rel|tail or head|rel|[MASK].") - - alter_label = gr.Textbox(label="Alter Entity", lines=1, placeholder="Entity Name") - edit_button = gr.Button("Edit") - - with gr.Column(): - origin_output = gr.Textbox(label="Before Edit", lines=3, placeholder="") - edit_output = gr.Textbox(label="After Edit", lines=3, placeholder="") - - gr.Examples( - examples=[["[MASK]|/people/person/profession|Jack Black", "Kellie Martin"], - ["[MASK]|/people/person/nationality|United States of America", "Mark Mothersbaugh"], - ["[MASK]|/people/person/gender|Male", "Iggy Pop"], - ["Rachel Weisz|/people/person/nationality|[MASK]", "J.J. 
Abrams"], - ["Jeff Goldblum|/people/person/spouse_s./people/marriage/type_of_union|[MASK]", "Sydney Pollack"], - ], - inputs=[edit_input, alter_label], - outputs=[origin_output, edit_output], - fn=edit_process, - cache_examples=True, - ) - - with gr.TabItem("A-FB15k237"): - with gr.Row(): - with gr.Column(): - add_input = gr.Textbox(label="Input", lines=1, placeholder="Brand new triple input") - - inductive_entity = gr.Textbox(label="Inductive Entity", lines=1, placeholder="Entity Name") - add_button = gr.Button("Add") - - with gr.Column(): - add_origin_output = gr.Textbox(label="Origin Results", lines=3, placeholder="") - add_output = gr.Textbox(label="Add Results", lines=3, placeholder="") - - gr.Examples( - examples=[["Jane Wyman|/people/person/places_lived./people/place_lived/location|[MASK]", "Palm Springs"], - ["Darryl F. Zanuck|/people/deceased_person/place_of_death|[MASK]", "Palm Springs"], - ["[MASK]|/location/location/contains|Antigua and Barbuda", "Americas"], - ["Hard rock|/music/genre/artists|[MASK]", "Social Distortion"], - ["[MASK]|/people/person/nationality|United States of America", "Serj Tankian"] - ], - inputs=[add_input, inductive_entity], - outputs=[add_origin_output, add_output], - fn=add_process, - cache_examples=True, - ) - - edit_button.click(fn=edit_process, inputs=[edit_input, alter_label], outputs=[origin_output, edit_output]) - add_button.click(fn=add_process, inputs=[add_input, inductive_entity], outputs=[add_origin_output, add_output]) - -demo.launch() \ No newline at end of file diff --git a/spaces/zomehwh/sovits-models/modules/mel_processing.py b/spaces/zomehwh/sovits-models/modules/mel_processing.py deleted file mode 100644 index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/sovits-models/modules/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, 
hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec